
An interview with Abhinav Asthana, Founder and CEO – Postman

An API (Application Programming Interface) is the programming logic through which web-based applications are accessed, and it forms the foundation of virtually every web or mobile application. Applications are powered by specialized APIs for specific functions.

We recently interviewed Abhinav Asthana, Founder and CEO – Postman, to learn about API development, how Postman helps developers across the entire lifecycle of designing, building, testing, and monitoring APIs, and more.

1. Young guys founding a startup! Seems like a college-life story. Please tell us how it all started. What inspired you to start the company?

During my time at Yahoo, I met Ankit Sobti, the Co-founder and CTO of Postman. We were building the front-end architecture of an app. At that time, testing APIs was a pain, and there were a lot of communication issues between different teams. We thought of making the problem simpler, but being first-time engineers, we didn't work on the problem then.

Soon, I went ahead to build TeliportMe, where again I was building APIs and found that the problems existed across the board. Ankit went to ISB and after that went to Mumbai for work. It was during my spare time at TeliportMe that I created the prototype of the first Postman app. At that time, I had built the product primarily for myself, and finding that it worked well, I decided to put it up on the Chrome Web Store as an open-source REST client. The app gained popularity in a very short time. This gave us, the founders, the encouragement to take it up full-time.

2. From a free app in 2012 to becoming one of the most popular apps on the Chrome Web Store, how has Postman changed over the years?

The free app gained a lot of following and, soon after that, investor interest. In 2014, we started the company to support the free app and develop more extensive features for a SaaS product, now known as Postman Pro (introduced in 2016) and Postman Enterprise (introduced in 2018). We have grown from a 3-member team to a 70-member team in 2 different locations (SF and Bangalore). Postman has come a long way in terms of product, users, and team.

3. Tell us about the premium versions of Postman – Postman Pro and Postman Enterprise.

Postman Pro and Enterprise are for professional collaboration and power usage. Most of the functionality is available for free but with limitations. Pro and Enterprise help teams solve that problem based on their own use case.

With Postman Pro, a user can create beautiful documentation, publish their collections, and monitor uptime, responsiveness, and correctness.

Postman Enterprise was launched to meet the exclusive needs of large enterprises and offers features like single sign-on, dedicated support, access control, and audit logs.

4. Postman is a complete API development environment. How?

Postman is designed for the entire lifecycle of designing, building, testing, and monitoring your APIs. It's packed with all the features to support every stage of API development, and it benefits developers working individually, collaborating in small teams, or operating in industry-leading enterprises. Postman is an integral part of API development in the best technology teams in the world, including Atlassian, VMware, PayPal, and DocuSign.
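
To give a concrete flavor of the testing stage described above (an illustrative sketch, not an example from the interview), a Postman request can carry a small test script that runs after the response arrives. The pm object is Postman's built-in scripting interface; the assertions below are the kind of checks teams typically automate:

```typescript
// Minimal sketch of a Postman test script (the code in a request's "Tests" tab).
// `pm` is injected by the Postman runtime, so it is only declared here.
declare const pm: any;

pm.test('status code is 200', function () {
  pm.response.to.have.status(200); // built-in response assertion
});

pm.test('response body contains an id', function () {
  const body = pm.response.json();        // parse the JSON response body
  pm.expect(body).to.have.property('id'); // Chai-style assertion bundled with Postman
});
```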

5. You recently released Postman 6.2. What’s new in the latest version of Postman app?

We are very excited about this release as we’ve made Postman Teams available to free users, to help make API development and collaboration even easier. Previously, Postman Teams were only available with Postman’s paid plans; now, all users can invite their colleagues to a Postman team. Postman 6.2 also strengthens collaboration for all teams with the addition of sessions & session variables — which provide additional security and flexibility when collaborating on shared collections.

6. What is Postman API Network? How does it help API developers?

The Postman API Network is the most authentic collection of published APIs available today. In simple words, it makes your API discoverable. Thousands of developers are already using Postman to share private and public collections. With the help of our beautiful documentation, the Run in Postman button, and the option to publish to the API Network, they can create a brand for their team or their collection.

We have about 200 collections across 10 categories that can be imported into your Postman instance with a click of a button, so consumption is very easy. We also recently launched a self-serve feature where developers can submit their APIs directly to the API Network, so onboarding to the network is very easy as well.

7. Is Postman integrated with any other development tools?

We have recently made teams free and we are receiving a lot of positive feedback from the community for this. This gives teams a great opportunity to start collaborating on the free plan and upgrade to Pro when they reach the need. The Pro plan is for teams and companies who require advanced integrations, Pro API access and higher rate limits. The Enterprise plan is for companies who want SSO, strong auditing features and dedicated support.

8. What is Newman?

Newman is Postman's open-source tool for running and testing collections directly from the command line. It is built with extensibility in mind so that you can easily integrate it with your continuous integration servers and build systems.
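
As a rough illustration of that CI integration (a sketch under assumed names, not taken from the interview), Newman can be driven from a Node.js script as well as from the shell; the collection file name below is a placeholder:

```typescript
// Minimal sketch: running an exported Postman Collection with Newman's Node.js API.
// Assumes `npm install newman` and a local collection export.
import * as newman from 'newman';

newman.run(
  {
    collection: './example.postman_collection.json', // hypothetical export file
    reporters: 'cli',                                 // print results to the terminal
  },
  (err, summary) => {
    if (err) {
      throw err; // the run could not start at all
    }
    const failures = summary.run.failures.length;
    console.log(`Run finished with ${failures} failed assertion(s)`);
    process.exit(failures > 0 ? 1 : 0); // non-zero exit fails a CI build
  }
);
```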

9. How does Postman take care of security of data, software, and infrastructure?

We take security extremely seriously. Our infrastructure is hosted mostly on AWS, and we make thorough use of all the security measures that AWS comes with, including DDoS mitigation. Our security operations team continuously tests and monitors our apps, platform, and tools for security holes. We have set up detailed processes for handling security breaches in case they are found or reported by someone outside the organisation. We also try to be as transparent as possible, and our latest service status is available on status.getpostman.com.

10. How will you define your company in 3-6 words?

A complete API Development Environment

11. Do you remember your first paying customer and revenue expectations then?

Postman was launched as a fun project, and there was no immediate goal of monetizing it as it was solving a very simple problem at that time. We were just focused on making the product as effective as possible. But the community was so excited to have this product that a team from the US sent us $500 to keep up the good work! This was the first time we made any money through Postman.

12. Wrapping up, any new product or updates coming up this year?

We have plans for many features to enhance the developer's API development experience. Key initiatives include support for multi-team collaboration, GraphQL, and OpenAPI. We keep a Trello board highlighting upcoming efforts (https://trello.com/b/4N7PnHAz/postman-roadmap-for-developers), as well as a list of issues (github.com/postmanlabs/postman-app-support/issues) and feature requests (https://github.com/postmanlabs/postman-app-support/issues?q=is%3Aopen+is%3Aissue+label%3AFeature) on GitHub.


Smart Updates – What makes Plesk the preferred choice of WordPress developers?

In recent years, we have seen WordPress continuously increase its market share and become the standard CMS for building websites. Today, more than 31% of all websites worldwide are built with WordPress, and this number is still growing.

However, maintaining WordPress sites and keeping them secure and up to date is a real challenge – especially if you run multiple sites! We know the numbers: more than 60,000 websites are hacked every day, so it's absolutely critical to secure WordPress and the underlying infrastructure properly and monitor their status to avoid downtime and prevent sites from getting blacklisted.

Plesk WordPress Toolkit takes away the burden of WordPress management and significantly increases website speed, performance, security and a web pro’s productivity!

Recently, we interacted with Jan Loeffler, CTO of Plesk, to discuss WordPress, WP Toolkit, Plesk's relationship with Automattic, and more.

1. Plesk is one of the leading WebOps hosting platforms, running on more than 380,000 servers. Give us a quick overview of Plesk and its journey so far.

Plesk is a website management platform that powers 11 million websites and 19 million mailboxes for customers in 230 countries. It was founded back in 1999, when Rackspace became Plesk's first customer. Now, 19 years later, Plesk is used and offered by thousands of hosting companies and cloud service providers worldwide – including top players like GoDaddy, 1&1, Media Temple, AWS, Google, Microsoft Azure and many more.

The core mission of Plesk is to simplify the lives of web professionals. Web Professionals are web developers, web designers, system administrators, digital agencies and service providers that mainly create or manage websites and web applications for business. We simplify their lives by automating and securing web operations (WebOps) to free up time and allow focus on their core business which is not infrastructure management.

2. Initially, Plesk was largely used as software for server administration. But now, we see it expanding to include WordPress and web applications too. Please enlighten us.

Having our core mission in mind, we continuously analyze how web pros work, what challenges they have to overcome, and where they lose precious time that cannot be billed. Our goal is to help them increase productivity through automation and intuitive ways to address the most common pain points. To complete our mission, we’re constantly adding new tools and features to Plesk to stay ahead of the game and provide web pros with the latest and greatest technologies.

In the early days we focused on automating server management only. That’s still part of our DNA but not enough to address what the market demands today. We cannot ignore millions of end customers who are using WordPress to build successful websites, web apps and online businesses. WordPress still is the most widely used and fastest growing solution to build a website.

3. Plesk is steadily entering the world of WordPress. How’s the experience? Throw some light on working with Automattic and WordPress Developer community.

At Plesk we have a passion for WordPress! A lot of Pleskians have been using WordPress for years, plesk.com is built with WordPress, too. But besides using and understanding WordPress we also contribute back and sponsor many WordPress events like WordCamps and Meetups to support and engage with the community. If you want to be best in class and become a trusted advisor you have to know WordPress by heart and be very well connected with the WordPress developer community.

When Matt Mullenweg, founder of WordPress and CEO of Automattic, came to us 2 years ago, we immediately understood that this was the beginning of something great. We’re closely working with the Automattic team to make the whole WordPress experience better and more secure. As a result, I’m proud to say that the second largest number of WordPress sites is already managed by Plesk WordPress Toolkit.

4. Plesk’s WordPress Toolkit simplifies WP installation and management. What was the idea behind launching WordPress Toolkit?

In recent years we have seen WordPress continuously increase its market share and become the standard CMS for building websites. Today, more than 31% of all websites worldwide are built with WordPress, and this number is still growing. It became pretty obvious to us that just installing WordPress for our users is not good enough. The community has made it very clear that installing WordPress is easy and not an issue at all.

However, maintaining WordPress sites and keeping them secure and up to date is a real challenge – especially if you run multiple sites! We know the numbers: more than 60,000 websites are hacked every day, so it's absolutely critical to secure WordPress and the underlying infrastructure properly and monitor their status to avoid downtime and prevent sites from getting blacklisted. For example, Plesk can install patches and updates immediately, before the site can be harmed, and block attacks at the web server level without the need for deep technical knowledge.

Plesk WordPress Toolkit takes away the burden of WordPress management and significantly increases website speed, performance, security and a web pro’s productivity!

5. That’s interesting. Can you shed light on some more distinctive features of Plesk WordPress Toolkit? Does it actually take one-click to install WordPress from start to finish?

Yes, of course. Installing WordPress on Plesk is just one click and done within 20 seconds. That’s easy as pie. The real benefit is a ready-to-code/ready-to-design WordPress environment that is automatically hardened for best security.

And if you want to make changes on your site or test different themes or plugins, never do it on a live site! Use the integrated 1-click cloning and data synchronization features instead to clone websites to one or multiple test environments whilst keeping all data in sync. The time savings are tremendous and provide web pros with unique possibilities of testing changes safely or developing in an iterative approach!

Web pros love to make changes or play with different variants – i.e. multiple site designs. When the end customer has chosen his favorite, the web pro can improve it step by step and then move the latest and greatest version to production without any hassle. And if the end customer suddenly changes his mind? No problem – just switch back to a previous version with one click.

6. Updates are crucial to WordPress security. We heard something about Smart Updates. How does that help Plesk users with WP updates?

The biggest problem is that many WordPress sites are not using the latest WordPress version and, as a result, do not utilize its full power. But even worse, those outdated WordPress sites are very vulnerable and can be easily attacked by hackers.

We spoke with many web agencies and learned that a lot of them are reluctant to update customer websites automatically, because sites often break after an update and cause severe damage. We listened and put our heads together: WordPress updates should no longer be a business-critical risk, and they should run without disruptions. We made it happen and proudly present Smart Updates!

Smart Updates uses Artificial Intelligence technologies to keep all WordPress instances – including plugins and themes – up to date without ever breaking the sites. There are two options for Smart Updates: interactive and automatic. The interactive approach allows web pros to safely check how websites will look after the installation of an update and whether there are any issues.

The whole process is fully automated and happens on a dynamically created test environment without any risk of impacting the live websites. You can watch the process live and decide on your own whether it’s safe to push the updated sites to production or if it’s better to reject them.

The automatic process does everything in the background. It checks for updates for all WordPress sites daily, tests the updates and double-checks all web pages. If everything is fine, Smart Updates deploys the changes to production without any human interaction. If the AI discovers a problem the web pro will receive an e-mail notification describing the identified issues in detail. In case of an issue the production sites won’t be touched. It’s up to the web pro to decide whether the changes are intended (i.e. improvements of themes) or not (i.e. bugs, wrong layout).

Smart Updates is able to differentiate intended dynamic changes like video content, Twitter feeds or JavaScript animations from unwanted changes like a broken site, unwanted line breaks, etc. The AI system uses a deep learning technology that works similarly to a human brain.

7. Which of the two Smart Updates options do you recommend, and why?

Personally, I’m always using the automatic mode of Smart Updates to focus on more important tasks of my business instead of watching WordPress updates. Staying up-to-date and secure are the key principles and core functionalities of Plesk WordPress Toolkit.

8. It would be completely unfair if we do not discuss the security aspect. How does Plesk ensure a site’s security?

Besides keeping sites up to date, WordPress Toolkit continuously applies best security practices. This includes limiting file system permissions to reduce the attack surface, generating strong passwords and obfuscated database prefixes, just to name a few. It would take more than half an hour to go through all the security features and many other improvements for secure WordPress hosting.

Besides automating the most common tasks of a web pro's workflow, enhanced security features make Plesk WordPress Toolkit the perfect solution for web agencies and experienced WordPress users – but newbies can also fully relax and let WordPress Toolkit do the job.

9. So, are there any new updates or upcoming versions that we should look forward to?

Definitely! We just released the WordPress Toolkit CLI feature, which allows automating WordPress management from the command line. This was in high demand among managed WordPress hosters using Plesk.

Additionally, there will be a solution for web agencies that allows them to publish finished websites back to the client's web hosting plan – no matter where it is located. This feature was also highly requested by many web agencies we engaged with at WordPress events. It not only simplifies their work but also increases productivity and elevates customer service levels.

For developers and professional users, we will deeply integrate “git” as the most loved version control system directly into the WordPress Toolkit. You can already use git with Plesk today, but we’ll make it super easy to use for WordPress as well.

In fact, we publish micro updates for WordPress Toolkit every 4 weeks and automatically update it on all Plesk servers. Nothing to do, just relax and enjoy the new WordPress experience.

And, if you don’t want to use WordPress – no worries. Simply use Joomla! Toolkit for Joomla! sites or code your app with NodeJS, Ruby or any other technology or web app you prefer.



“Demand for scale and speed delivered at the right economics is opening the door for a new breed of Hyperscale Service Provider being sought by the biggest Internet-based businesses.” – Chris Ortbals, QTS.

The rapid adoption of public cloud and the onset of new technologies like the internet of things, neural networks, artificial intelligence, machine learning and mega-scale online retailing are reshaping the data center industry, driving demand for data center capacity and cloud connectivity.

QTS is a leading data center provider that serves the current and future needs of both hyperscale and hybrid colocation customers via a software-defined data center experience. We recently interviewed Chris Ortbals – Executive Vice President, Products & Marketing, QTS – to get his take on changing data center requirements and QTS' strategy of redefining the data center.

1. Please share an overview of QTS’ journey from inception till date with DHN readers. How has it transformed from being a single data center to becoming one of the leading national data center providers?

QTS is the creation of Chad Williams, a business and real-estate entrepreneur who had a strong vision of what a data center can and should be. Williams foresaw increasing IT complexity and demand for capacity and recognized the opportunity for large, highly secure, multi-tenant data centers, with ample space, power and connectivity.

Chris Ortbals, Executive Vice President, Products & Marketing, QTS

In 2005, QTS was formally established with the purchase of a 370,000 square foot Atlanta-Suwanee mega data center. Williams focused on building an integrated data center platform delivering a broad range of IT infrastructure services ranging from wholesale to colocation, to hybrid and multi-cloud, to hyperscale solutions, and backed by an unwavering commitment to customer support.

Since then, we have grown both organically and through acquisition into one of the world's leading data center and IT infrastructure services providers, and in 2013 we began trading on the New York Stock Exchange under the symbol QTS (NYSE: QTS).

Today, QTS offers a focused portfolio of hybrid colocation, cloud, and hyperscale data center solutions built on the industry's first software-defined data center and service delivery platform, and it is a trusted partner to 1,200 customers, including 5 of the world's largest cloud providers. We own, operate or manage more than six million square feet of data center space encompassing 26 data centers, 600+ megawatts of critical power, and access to 500+ networks, including connectivity on-ramps to the world's largest hyperscale companies and cloud providers.

More recently, we have been accelerating momentum as a hyperscale data center provider able to meet unique requirements for scale and speed-to-market delivered at the right economics being sought by the biggest Internet-based businesses.

2. Throw some light on the recent QTS strategy of redefining the data center. What’s the Software-Defined Data Center approach, how do you plan to execute it and how will it help hyperscale and hybrid colocation customers?

We believe that QTS' Service Delivery Platform (SDP) establishes QTS as one of the first true Software Defined Data Centers (SDDC) with 100% completeness of vision. It is an architectural approach that facilitates service delivery across QTS' entire hybrid colocation and hyperscale solutions portfolio.

Through policy-based automation of the data and facilities infrastructure, QTS customers benefit from the ability to adapt to changes in real-time, to increase utilization, performance, security and quality of services. QTS’ SDP approach involves the digitization, aggregation and analysis of more than 4 billion data points per day across all of QTS’ customer environments.

For hybrid colocation and hyperscale customers, it allows them to integrate data within their own applications and gain deeper insight into the use of their QTS services within their IT environments. It is a highly-automated, cloud-based approach that increases visibility and facilitates operational improvements by enabling customers to access and interact with information related to their data center deployments in a way that is simple, seamless and available on-demand.

3. How do you differentiate yourself from your competition?

QTS' software-defined service delivery is redefining the data center to enable new levels of automation and innovation that significantly improve our customers' overall experience. This is backed by a high-touch, enterprise customer support organization that is focused on serving as a trusted and valued partner.

4. How does it feel to receive the industry leading net promoter score for the third consecutive year?

We were extremely proud to announce that in 2017 we achieved our all-time high NPS score of 72 – the third consecutive year that we have led the industry in customer satisfaction for our data centers across the U.S.

Our customers rated us highly in a range of service areas, including customer service, physical facilities, processes, responsiveness and service of onsite staff, and our 24-hour Operations Service Center.

As our industry-leading NPS results demonstrate, our customers continue to view QTS as a trusted partner. We are also starting to see the benefits of our service delivery platform that is delivering new levels of innovation in how customers interact with QTS and their infrastructure, contributing to even higher levels of customer satisfaction.

5. QTS last year entered into a strategic alliance with AWS. Can you elaborate what is CloudRamp and how will it simplify cloud migration?

AWS came to us last year telling us that a growing number of their customers were requiring colocation as part of their hybrid IT solution. They viewed QTS as a customer-centric colocation provider with the added advantage of our Service Delivery Platform that allowed us to seamlessly integrate colocation with AWS as a turnkey service available on-demand.

We entered a strategic collaboration with AWS to develop and deliver QTS CloudRamp™ – direct connected colocation for AWS customers, made available for purchase online via the AWS Marketplace.

By aligning with AWS, we were able to offer an innovative approach to colocation, bridging the gap between traditional solutions and the cloud. The solution is also groundbreaking in that it marked the first time AWS had offered colocation to its customers and signaled the growing demand for hybrid IT solutions. At the same time, it significantly accelerated time-to-value for what previously had been a much slower purchasing and deployment process.

For enterprises with requirements extending beyond CloudRamp, QTS and AWS provide tailored, hybrid IT solutions built upon QTS’ highly secure and reliable colocation infrastructure optimized for AWS.

6. Tell us something about Sacramento-IX. How will the newly deployed Primary Internet Exchange Hub in QTS Sacramento Data Center facilitate interconnection and connectivity solutions?

QTS is strongly committed to building an unrestricted Internet ecosystem and we are focused on expanding carrier neutral connectivity options for customers in all of our data centers.

Interconnection has evolved from a community driven effort in the 90’s to a restrictive, commercial industry dominated by a few large companies. Today there is a movement to get back to the community driven, high integrity ecosystem, and QTS is aligning our Internet exchange strategy as part of this community.

A great example is how the Sacramento Internet Exchange (Sacramento-IX) has deployed its primary Internet Exchange hub within QTS' Sacramento data center. It is the first internet exchange in Sacramento and is being driven by increased traffic and network performance demands in the region. It expands QTS' Internet ecosystem and simplifies our customers' network strategies by providing diverse connectivity options, allowing them to manage network traffic in a more cost-effective way.

Once considered the backup and recovery outpost for the Bay Area, Sacramento has quickly become a highly interconnected and geostrategic network hub for northern California. It also solidifies our Sacramento data center as one of the most interconnected data centers in the region and as the primary west coast connectivity gateway for key fiber routes to Denver, Salt Lake City and points east.

7. Hyperscale data centers are growing at an accelerated pace and are expected to soon replace the traditional data centers. Can you tell us some factors/reasons that aid the rise of hyperscale data centers?

The rapid adoption of public cloud, the Internet of things, artificial intelligence, neural networks, machine learning, and mega-scale online retailing are driving unprecedented increases in demand for data center capacity and cloud connectivity.

Hyperscale refers to the rapid deployment of this capacity required for new mega-scale Internet business models. These Hyperscale companies require a data center growth strategy that combines speed, scalability and economics in order to drive down cost of compute and free up the capital needed to feed the needs of their core businesses. Think Google, Uber, Facebook, Amazon, Apple, Microsoft and many more needing huge capacity in a quick timeframe. They are looking for mega-scale computing capacity inside hyperscale data centers that can deliver economies of scale not matched by conventional enterprise data center architectures.

This demand for scale and speed delivered at the right economics is opening the door for a new breed of Hyperscale Service Provider being sought by the biggest Internet-based businesses. These are data centers whose ability to deliver immense capacity must be matched by an ability to provide core requirements for speed, quality, operator excellence, visibility and economics, that leaves out a majority of conventional hosting and service providers who are not interested in or capable of meeting them.

And while an organization may have a need for very large geostrategic 20, 40, or 60 megawatt deployments, it typically wants a provider that can deliver that capacity incrementally to reduce risk and increase agility.

8. Throw some light on your current datacenters and future expansion plans.

Chad Williams’ had the vision for identifying large, undervalued – but infrastructure-rich – buildings (at low cost basis) that could be rapidly transformed into state of the art “mega” data center facilities to serve growing enterprise demand for outsourced IT infrastructure services.

In Chicago, the former Chicago Sun Times printing plant was transformed into a 467,000 square foot mega data center. In Dallas and Richmond, former semi-conductor plants are now state of the art mega data centers encompassing more than 2 million square feet. And in Atlanta, the former Sears distribution center was converted into a 967,000 square foot mega data center that is now home to some of the world’s largest cloud and social media platforms.

However, in some cases a greenfield approach is the more viable option. In Ashburn, Va. – the Internet capital of the world – we are building a new 427,000 square foot facility from the ground up that is expected to open later this summer. Expansion plans also call for new data center builds in Phoenix and Hillsboro, Oregon.

9. What is your datacenter sustainability and efficiency strategy?

At QTS, we understand that being a good environmental steward takes much more than just a simple initiative. That’s why we have focused our efforts on developing a company-wide approach – one that utilizes reused and recycled materials, maximizes water conservation and improves energy savings.

Central to this is our commitment to minimizing the data center carbon footprint and utilizing as much renewable fuel as possible by implementing a 3-pronged sustainability approach featuring solutions in containment and power usage effectiveness (PUE) metric products.

This encompasses:

1. Develop and Recycle Buildings

Part of our data center sustainability strategy is reusing brownfield properties and transforming them into state-of-the-art data centers.

2. Water Conservation

With a large data center comes a big roof that is capable of harvesting rainwater. We collect millions of gallons of water using a harvesting system on a portion of the roof.

3. Energy Efficiency

For a data center provider, cooling is a critical part of the job, accounting for approximately 30% of the electricity load at the data center.

QTS is one of the first data center companies to invest in renewable energy specifically for its hybrid colocation and hyperscale customers.

A recent example is a multi-year agreement with Citi to provide 100% renewable power for our 700,000 sq. ft. mega data center in Irving, Texas. The power and renewable energy credits will come from the Flat Top Wind Project, a 200 megawatt utility-scale wind energy facility in central Texas. QTS will purchase 15 MW of 100% renewable power for its Irving data center, with plans for a similar agreement for its Fort Worth data center later this year.

The investment supports QTS' commitment to lead the industry in providing clean, renewable energy alternatives for QTS hybrid colocation and hyperscale customers, which include five of the five largest cloud providers and several global social media platforms.

In addition to the new wind power initiative in Texas, QTS’ New Jersey data center features a 14 MW solar farm to offset emissions associated with power consumption at that facility. QTS plans to expand renewable power initiatives in existing and new data centers including those being planned for Phoenix and Hillsboro, OR.

10. What’s in the roadmap for the year 2018?

QTS is now executing on our 2018 strategic growth plan that involves continued innovation with the Service Delivery Platform. It enables a software-defined data center experience for hyperscale and hybrid colocation customers. QTS’ SDP represents a big data approach enabling customers to access and interact with information related to their specific IT environment by aggregating metrics and data from multiple sources into a single operational view.

More importantly, it provides customers the ability to remotely view, manage and optimize resources in real time in a cloud-like experience, which is what customers increasingly expect from their service providers. In addition, through a variety of software-defined networking platforms, enterprises can now get direct connectivity to the world's largest cloud providers with real-time visibility and control over their network infrastructure using QTS' SDP application interface.


Interview: “DreamHost will always favor open source solutions to commercial software, and we owe much of our early success to the world of open source.” – Brett Dunst, DreamHost

The onset of a new year has brought with it new challenges and trends to watch in the 2018 technology landscape. Nearly every industry is getting acquainted with changing customer preferences and is undergoing a transformation to meet customer expectations. The hosting industry is no exception.

DreamHost is one of the leading web hosting service providers, and it leverages the open source technology OpenStack for its cloud services software. Open source technologies are among the most in-demand technologies today.

We got in touch with Brett Dunst, VP of Corporate Communications at DreamHost, to discuss the changing web hosting industry landscape and the threats, opportunities and challenges that lie ahead.

Proudly hosting over 1.5 million websites, WordPress blogs and smart applications for developers, designers and small businesses, DreamHost works with the vision of giving users the freedom to choose how their digital content is shared.

In this interview, Brett talks about:

  • web hosting's transformation into an ultra-competitive market,
  • the Remixer website builder by DreamHost,
  • managed WordPress hosting,
  • its much talked-about computing platform – DreamCompute,

and many other things. Let’s get started:

1. How has DreamHost – which has been in the hosting game for 20 years now (!) – seen the web hosting industry change over those two decades, and how it succeeded in an ultra-competitive market?

We’ve certainly seen a lot happen in the hosting world.  Users have gotten more sophisticated and service offerings have grown to match their needs. There’s also been a tremendous amount of industry consolidation over the last two decades.  We’re proud to be one of the few remaining independent service providers who remain accountable to our users.

By listening to our users and respecting their preferences over the years, we’ve always had a clear view of where the market was headed.  We’ve been leading in the Managed WordPress arena for over a decade, and it remains our largest focus now in 2018.

We’ve built a world-class team of experts across many disciplines who understand the power of WordPress, and we’ve got big plans in store for DreamPress in the near future!

2. Tell us about Remixer. How is this website builder from DreamHost different from other drag-and-drop website builders available in the market?

When we set out to build Remixer, we knew we were entering a crowded market. There are many established players in the world of site builders with giant feature lists and years of experience behind them.

These are powerful, mature options, but all of that legacy can lead to feature overload and user confusion.

We saw an opportunity to focus directly on making it easy and fast to build and deploy a new website by presenting users with only what they’d need to get up and running quickly. We wanted to remove the intimidation factor from users who had never had a web presence before and who may have entered into the process cringing about the unknown. Users have told us that we’ve succeeded in making Remixer a truly easy-to-use option.

3. Turning to your managed WordPress hosting product, DreamPress…how does the Jetpack plugin help boost a user’s WordPress site? Why does DreamHost see managed WordPress as a particularly key ingredient to its next 20 years of business?

WordPress powers over a quarter of the most popular websites in the world, and we’ve certainly seen that played out in our own user metrics. It’s a powerful tool with humble beginnings, but it’s grown into a tremendously useful platform for users to publish their content, giving voices to an untold number of people who may not have otherwise known how to get themselves online.

We’ve been offering Managed WordPress for over a decade, and it’s been tremendously popular with our users. DreamPress has been one of the biggest successes in DreamHost’s history.

We are focused squarely on providing the best Managed WordPress experience on the web, and Jetpack plays a huge role in enhancing WordPress and helping us to deliver on that goal.

Jetpack brings additional functionality to WordPress in a way that feels completely embedded and natural. Among its many features, Jetpack’s free option includes access to an image CDN, in-depth website statistics, downtime monitoring, and a ridiculous number of other enhancements.

In April 2017, we began bundling Jetpack Premium (a $99/yr value) with DreamPress. The response was so overwhelmingly positive from our users that in December we began bundling the top-tier Jetpack Professional – a $299/yr value – with DreamPress.

Jetpack Professional adds a video CDN with no bandwidth restrictions, comprehensive spam protection, daily backups powered by VaultPress, automated and on-demand malware scans, access to over 200 high-quality premium themes, advanced SEO tools, and priority access to WordPress experts at Automattic. Our users have definitely taken full advantage of Jetpack and we’re proud to include it as a key feature of our world-class Managed WordPress offering.

4. Which types of hosting customers does DreamHost tend to attract (and why)? What makes them choose DreamHost over other leading web hosting and cloud players in the market?

We attract a lot of small businesses, but also a lot of programmers. We've always provided plenty of command line support to users who've wanted to build not just websites, but also great web apps. Being software developers ourselves, seeing those apps come alive is tremendously exciting for us. It hearkens back to the early days of the internet when creativity, not dollars, ruled the world of content.

People who follow the hosting industry are also sensitive to issues of consolidation. They like the fact that DreamHost is independent, private, and focused on the interests of our users, not shareholders or investors.

5. Shed some light on DreamCompute – what led to DreamHost’s foray into the cloud computing realm? Also, could you elaborate on DreamHost’s open source ethos, and how that has played out in its use of OpenStack for its cloud services?

DreamHost’s co-founder, Sage Weil, had created an open source file system called Ceph out of a personal interest and a professional curiosity in making large-scale storage more accessible and more affordable. Ceph was designed to store petabytes of data across thousands of servers using inexpensive, off-the-shelf hardware, and it does that capably and gracefully.

We built a company called Inktank to provide support services for Ceph which eventually sold to Red Hat in 2014 for $175MM. Ceph powers the technology behind DreamObjects, our object storage service, and is the storage option of choice for many OpenStack clouds. Our own public cloud service, DreamCompute, is powered by OpenStack as well.

For us, OpenStack came along at just the right time. Back then, the software that powered AWS, Azure, and many other clouds lived largely in a black box. Even today, users are left mainly in the dark when it comes to details about implementation of their cloud apps’ platforms. That runs in stark contrast to the philosophy of the Open Web, which has embraced more open technologies like Perl, Linux, Apache, MySQL, etc. OpenStack represented a collaborative and open approach to a cloud stack, and we knew it was a great fit for us.

DreamHost will always favor open source solutions to commercial software, and we owe much of our early success to the world of open source.

6. If a customer is looking at different cloud-based storage solutions, what are the benefits that he or she can get with DreamObjects over other cloud storage hosting solutions?

We’ve worked hard to build predictable, easy-to-understand flat-rate pricing into DreamObjects from the start. Competitors’ often-confusing pricing structures tend to incentivize users into storing more data to get better rates, but that can lead to vendor lock-in and can be a real deterrent to smaller developers who may not have the volume to justify higher prices. DreamObjects is a great option for users who need the flexibility of cloud storage but don’t ever want to deal with an enormous surprise in their end-of-month billing statements.

DreamObjects is API-compatible with S3. In many cases, porting an existing app over to DreamObjects is as simple as changing a single hostname. We've seen many users do exactly that!
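
As a hedged sketch of what "changing a single hostname" looks like in practice (the endpoint URL, bucket name, and environment variable names below are placeholders, not official DreamHost values), an S3 client simply gets pointed at a DreamObjects endpoint instead of AWS:

```typescript
// Sketch: using the AWS SDK for JavaScript v3 against an S3-compatible store.
import { S3Client, ListObjectsV2Command } from '@aws-sdk/client-s3';

const client = new S3Client({
  region: 'us-east-1',                                  // region label required by the SDK
  endpoint: 'https://objects.example-dreamobjects.com', // placeholder DreamObjects hostname
  forcePathStyle: true,                                 // common setting for S3-compatible stores
  credentials: {
    accessKeyId: process.env.DREAMOBJECTS_KEY ?? '',
    secretAccessKey: process.env.DREAMOBJECTS_SECRET ?? '',
  },
});

async function listBucket(): Promise<void> {
  const result = await client.send(new ListObjectsV2Command({ Bucket: 'my-bucket' }));
  console.log((result.Contents ?? []).map((obj) => obj.Key)); // object keys in the bucket
}

listBucket().catch(console.error);
```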

7. Thanks, Brett, for giving us insights about DreamHost. Wrapping up, what are the new product and service releases that DreamHost has coming up in the very near future?

We’re always cooking up new things at DreamHost! Keep an eye on DreamPress, the best in Managed WordPress, in 2018!


Interview: Jelastic CEO Ruslan Synytsky on the growing PaaS market and how PaaS with Containerization-as-a-Service will benefit customers.

“Containers offer a convenient form of application packaging. Combined with appropriate PaaS solutions, containers can highly automate certain IT provisioning processes.” – Ruslan Synytsky, CEO, Jelastic.

Platform-as-a-Service (PaaS) has been increasingly favored by enterprise development teams owing to streamlined application management, lower costs and faster time to market. In addition to PaaS, technologies like containers also play a significant role in application development.

PaaS and containers have a symbiotic relationship, with many PaaS offerings making use of Linux-based containers. Containers make PaaS possible, and PaaS has the capacity to put containers on the map, giving enterprise developers a whole new set of possibilities.

As a result, there have been several developments in containerization technology, from the leading software firm Docker to technology giants like Microsoft launching SQL Server with Linux and Docker support.

There are also some enterprises that offer a single turnkey solution combining the power of PaaS and CaaS for maximum flexibility in application development.

Jelastic – one of the leading names among cloud technology providers – combines the power of PaaS and CaaS in a single package, with the capacity to unleash the full potential of the cloud for ISVs, enterprises, hosting service providers and developers.

It’s not a simple hosting platform but rather a complete infrastructure with PaaS functionality, smart orchestration, and support for containers.

Recently, the company launched version 5.3 (Cerebro) of its PaaS, with support for IPv6 and the latest containerization technologies.

We got in touch with its CEO, Ruslan Synytsky, to discuss the rising PaaS market, its turnkey PaaS and CaaS solution, and much more. Here’s the interview:

1. Can you please share with us how Jelastic's offerings have changed over the years? Specifically, how does its Platform as a Service (PaaS) and Containerization as a Service (CaaS) offering in a single package help address the complexity associated with cloud deployments?

Jelastic was founded in early 2011, and from the very beginning we positioned ourselves as a true Java platform (“Jelastic” is a combination of “Java” and “elastic”), but step by step we’ve added support for PHP, Ruby, Node.js, Python and .NET. This enables developers to deploy, scale and manage a variety of applications in the cloud, regardless of the specific programming language or software stack.

Initially, our platform was built to support containerization (Virtuozzo). So when Docker technology evolved, we already had a strong expertise in this sphere and seamlessly added support for the Docker standard within our platform.

We’ve always targeted developers through hosting service vendors – never selling our own hardware, but partnering with service providers worldwide. Using this unique model we’ve managed to offer Jelastic in 28 countries. We have also created a private cloud offering, and our platform can now be installed on premise or as a virtual private cloud on any IaaS. And finally, we moved to hybrid and multi-cloud options, so our clients can mix and match different cloud combinations and vendors, managing a full installation within a single panel.

Every year we have about five releases adding new features and improvements based on the needs of our customers and the latest IT innovations, so we are constantly changing and enhancing our product.

2. Nearly every hosting company in the market is offering IaaS in one way or another. How does Jelastic prove to be a differentiating factor for its hosting provider partners?

IaaS is great but not enough for today's customers. They want more automation, built-in tools for development, and management capabilities. The usual VPS has become a commodity. That is why service providers are increasingly looking to PaaS.

Jelastic is a robust solution combining the benefits of PaaS and CaaS in a single turnkey package. It simplifies complex cloud-based deployments by automating the creation, scaling, clustering and security updates of microservices (cloud-native) or monolithic (legacy) applications. Jelastic’s vertical scaling approach allows hosting service providers to differentiate by offering a unique pricing model where customers pay only for what they use, as opposed to paying for pre-defined capacity limits that may not be used.

The platform supports a wide range of programming languages, a variety of software stacks, Docker and different tools for continuous integration and delivery, monitoring and troubleshooting out-of-box. So customers get a full featured package, while hosting providers don’t need to worry about extra integrations.

Jelastic was designed specifically for hosting service providers, helping them grow successful hosting businesses. After installing Jelastic, hosting providers get a full kit for cluster management and customer support, as well as access to Jelastic’s professional services team. We not only deliver an out-of-the-box solution but also share with our partners the knowledge, expertise, and skills essential for success.

We take responsibility for Jelastic deployment, configuration and necessary customization. We provide support training, marketing and sales training, and access to the Jelastic Partners Portal, which includes a set of unique frameworks that help our hosting service partners boost their sales and generate superior revenue.

3. How does Jelastic provide developers with control over their environment? How exactly does it help developers address challenges in migrating apps from VMs to containers?

Developers can deploy and run different applications almost instantly on Jelastic, with no code changes. The deployment can be easily performed using GIT/SVN with automatic updates, archives (zip, war, ear) right from the dev panel or via integrated plugins like Maven, Eclipse, NetBeans and IntelliJ IDEA.

Our self-service portal with user-friendly UI includes an intuitive application topology wizard, deployment manager, access to log and config files and a marketplace with prepackaged applications and functionality for team collaboration. SSH Gateway is available for access to the containers. Also, the Jelastic API and command-line interface (CLI) are open for automation of certain processes.

In terms of migration from VMs, if you have a complex legacy architecture on one server, you need to change it to a more modular approach in order to migrate to containers. A big advantage is that when moving to Jelastic-based containers, there is no need to change application code. If developers are familiar with container concept they can build complex projects with deep customization and specific interconnections across services. For unfamiliar developers, Jelastic hides all complexity and automates most of the process through pre-packaged solutions.

4. Please shed some light on Jelastic Zero Code Change Deployment.

Many Platform as a Service providers use the so-called Twelve-Factor App methodology, which gives developers a specific set of rules to follow when building modern web-based applications. This approach requires code changes when it comes to migrating legacy applications from virtual machines to containers.

With Jelastic, developers are not forced to use any specific standards and redesign applications. Also, they don’t need to modify their code to a proprietary API in order to deploy applications to the cloud. Developers can get their projects up and running in just minutes using, for example, war file or just the link to the project in GitHub. This makes the entry point easier and more seamless, reducing go-to-market time and eliminating vendor lock-in.

5. Why is containerization crucial for DevOps, what are the challenges and how does Jelastic PaaS help in overcoming them?

With containerization, DevOps teams can focus on their priorities – the Ops team preparing containers with all needed dependencies and configurations; and the Dev team focusing on the efficient coding of easily deployable applications and services.

Containers offer a convenient form of application packaging. Combined with appropriate PaaS solutions, containers can highly automate certain IT provisioning processes, thus eliminating human errors, accelerating time to market and enabling more efficient resource utilization.

6. Jelastic has recently launched the PaaS Cerebro 5.3 version – can you please tell us about its new features? How is it different from the previous versions?

Jelastic Cerebro 5.3 launched with support for Public IPv6. This most recent version of Internet Protocol fulfills the need for more web addresses, simplifies processing by routers, and eliminates NAT (Network Address Translation) issues and private address collisions. IPv6 can be used alongside IPv4, and can be easily enabled via environment topology UI or via an API.

Also, Jelastic PaaS users can now attach multiple IP addresses (IPv6 and IPv4) to a single container, adjust their number, or swap them if required. This enables even greater utility from the cloud – for example, running several websites on a single node.

In addition, we recently announced native support for Java EE 8 and Java SE 9. So Jelastic customers can easily install managed Docker containers with GlassFish 5 and Payara 5, in order to benefit from the latest Java EE 8 improvements.

7. For hosting providers considering different cloud and container platforms, what are the Jelastic pricing and feature benefits they should consider?

Jelastic is a ready-to-go cloud PaaS business. We let hosting service providers differentiate among competitors by selling full-featured PaaS, Docker hosting, auto-scalable VPS and a wide variety of packaged clustered solutions (like Magento, WordPress, replicated SQL and NoSQL databases, etc) to their customers. Also, we provide all the needed tools to manage the platform, support customers, and monitor ROI growth. One of the core Jelastic differentiators is our pricing model for end users (developers), which allows hosting service providers to offer a “pay as you really use” system, versus charging for a capacity limit that is never hit by the customer.

We practice a revenue sharing model, so our own growth totally depends on the business performance of our partners. This creates a unique atmosphere – in fact, we become one team with each of our partners, sharing the same values, striving to grow our joint business, and helping each other to conquer new markets and customers together.

8. Thanks, Ruslan, for giving us insights about Jelastic. Wrapping up, you recently attended the JavaOne conference, and a lot of other conferences are lined up for Jelastic this fall – DockerCon, OVH Summit and more. Can you give us a sneak peek of Jelastic's plans for the next 12 months? Is there a specific strategy or demographic that Jelastic is targeting?

Yes, we are trying to actively interact with the developer community in order to hear their pain points and find the best solutions. Our main strategy is to keep promoting freedom of choice in cloud hosting – from the technology toolkit to hosting locations. That’s why we’ll continue partnering with leading hosting service providers worldwide, to offer users an even wider choice of data centers with a high level of local support. Also, Jelastic will continue implementing new features and improvements connected with containerization and DevOps process automation.


“As IoT and AI solutions are growing rapidly and security challenges grow exponentially, without a doubt, the cloud world is about to change for the better. Again.” – Emil Sayegh, Hostway.

“Worldwide spending on public cloud services and infrastructure is forecast to reach $266 billion in 2021.” – IDC Worldwide Semiannual Public Cloud Services Spending Guide.

IT leaders across major organizations deal with mounting pressure to do more with less – to react more quickly to business needs and to look for agile, scalable technologies to accommodate their growing business. Here, the cloud proves to be the right technology for them in terms of cost efficiency, scalability, agility and security. It has therefore become the main ingredient of an effective IT strategy.

However, most organizations face issues managing the cloud due to a lack of requisite skills. Here, managed service providers are a great help, as they have the ability to deliver cost-effective, secure and highly reliable hosting and cloud solutions with complete support and management. Thus, the growth of the cloud has also triggered the growth of the managed services industry.

One such leading name in the managed services industry, Hostway Corporation, recently expanded its data center capacity by opening a new data center in Austin, along with new offices. Hostway has been in the industry for 19 years and provides HIPAA- and PCI-compliant cloud hosting solutions and managed hosting services to over 500,000 customers across the globe. We interviewed Hostway’s CEO Emil Sayegh to learn about the rising opportunity for MSPs and the future plans of the company.

Check out the interview below, where Emil discusses the company’s newly opened data center in Austin, the benefits of its HIPAA-compliant cloud hosting solutions, and his 20/20 vision of the cloud:

1. Let’s begin with your recently announced data center. Why did Hostway choose Austin as the site for the new data center?

We did an extensive regional study, and Austin serves the central corridor of the US very well. It is not prone to natural disasters. Connectivity and latency have vastly improved for Austin; the latency is now practically indistinguishable from the DFW metro. We have two other data centers in Austin, and this third data center enables us to leverage much of the infrastructure and team we have built there. Furthermore, Austin is a hub for emerging high-tech companies that want to leverage our proximity to them.

2. What are some key features of your new data center Austin III?

  • State-of-the-art, purpose-built data center
  • 24x7x365 on-site security officers
  • Two-factor authentication throughout, including biometrics
  • Diverse electrical and fiber power sources secured in underground vaults, with only underground access
  • Five barriers to physical entry to all devices
  • Multi-port interconnect architecture for 100 Gb/s routing
  • Redundant 10 Gb switch ports for every device
  • Dedicated channel for out-of-band management on network devices
  • Seven tier 1 network carriers for seamless connectivity across sites and to other hypercloud vendors
  • Dual power feeds throughout using redundant substations
  • N+1 backup power generators
  • 2N chillers with N+1 computer room air handler
  • Redundant A/B power to each rack

3. Hostway is continuously expanding its footprint worldwide with new offices in Chicago and San Antonio. What are your goals behind the expansion?

Great question. The purpose behind our DC growth is to serve our existing customers as they expand, as well as the new customers we are always adding. We are growing to deliver the best, most cutting-edge hosting environments for our thousands of enterprise and business customers across the globe. We continue to provide exceptional service along with innovative solutions that help software companies move their workloads to the cloud while meeting stringent security and compliance requirements.

The office expansion is really a long-term commitment to both employees and customers in those markets. We are also fully committed to being active, positive members of the communities we operate in, and we feel the office expansion is part of that commitment to provide stable opportunities for our team.

4. What are the key measures that data centers need to have in place to mitigate the adverse impacts of natural disasters?

• Location
• Redundancy in bandwidth and power providers
• N+2 Redundancy

5. Please tell us about Hostway’s HIPAA-compliant cloud hosting solutions. How do they benefit the healthcare industry?

Software companies, healthcare providers, and other covered entities can realize significant efficiency and operational improvements by leveraging the cloud rather than running infrastructure themselves. These software companies and healthcare providers need to focus on their organizations’ missions – not on running infrastructure. The challenge is properly architecting a HIPAA-compliant solution that protects confidential data such as Electronic Health Records (EHR) or Electronic Medical Records (EMR). This is where Hostway’s array of compliant hosting solutions comes in.

Hostway is a HIPAA-compliant cloud hosting provider, offering a range of BAA-backed and third-party-reviewed HIPAA solutions. Our cloud hosting solutions include comprehensive HIPAA-compliant platforms that offer a more complete healthcare feature set than competitive offerings, with an industry-leading 15-minute Security Incident Response Plan.

6. You wrote about 20/20 vision in a blog post last month. What is your 20/20 vision for the cloud?

As IoT and AI solutions are growing rapidly and security challenges grow exponentially, without a doubt, the cloud world is about to change for the better. Again.

I can say confidently that the future is “multi-cloud”. That will be the reality. Public clouds dominate IT conversations today, but the next phase of cloud evolution is “multi” hybrid cloud environments. The winners in the cloud services industry will be those organizations that understand how to leverage these technologies as complete service solutions for specific customer verticals.

Both business and IT departments throughout the enterprise will need to increase their engagement with multi-cloud deployments today while planning a multi-cloud technology strategy that will constitute a significant part of their IT budgets in the very near future.

Thanks Emil for chatting with us about Hostway.

Of course, anytime! 

What’s next for your company? Any plans to add services or take things to a new level?

Always! We are adding more hypercloud managed services leveraging our security and support platforms. We are also adding more specialized security services for IoT software applications, as we think IoT has the potential to make us all less secure and more vulnerable if we are not all very careful. We are also adding transformation services to help companies take their mission-critical software apps from on-premises to the cloud.

Categories
Interviews

“New hyperscale cloud offerings, eCommerce and WordPress solutions and more in the offing from newly acquired Plesk” – Lukas Hertig, Plesk.

With the cloud becoming the major enabler for delivering cloud-native applications, cloud developers are gaining importance among organizations undergoing digital transformation, and their market is growing at a fast pace. To stay competitive, these developers need to embrace forward-looking DevOps practices, with access to secure, fast and efficient infrastructure.

With its OS-agnostic WebOps platform, Plesk has been leading the cloud developer market in over 140 countries with 377,000 servers, automating 11M+ websites and 19M mailboxes. The Plesk cloud platform grants application developers a ready-to-code environment along with a simple, more secure web infrastructure managed by web pros and hosting companies.

Plesk’s platform has been progressively evolving to adapt to the maturing needs of today’s web professionals. The rich ecosystem of Plesk extensions provides relevant functionalities for specific audiences and allows service providers of any size to generate unique upsell opportunities as well.

In January 2016, Plesk became a separate business unit from the Parallels group of companies and has been driving innovation and taking new strides each day. In the wake of recent developments, with Plesk having a new investor, Oakley Capital, on board, we interacted with Lukas Hertig, CMO of Plesk, on how these developments will affect the existing customer base, his marketing strategies and the future of Plesk.

Read on as he discusses Plesk’s vision, recent changes, new product developments, target market, and future forecast.

Lukas Hertig, CMO, Plesk.

“We will continue to bring innovations to Plesk, along with offering simplified scaling of web applications (which is difficult to achieve!), as well as the ability to run Plesk everywhere, cloud agnostic – with capabilities to deploy anywhere – from a single UI with a beautiful UX. As part of this journey and with our partnership with Automattic (the company behind WordPress), we also anticipate becoming a market leader in WordPress management.” – Lukas Hertig, CMO, Plesk.

  1. Before we begin, can you tell us about your association with Plesk? How has Plesk evolved as a company over the years?

    I’ve been at Plesk since mid-2004 – quite a long time! Together with Nils Hueneke (now CEO of Plesk), I built up the business in Europe, the Middle East and Africa for Parallels/Odin. At the end of 2014, I realized that I had learnt a lot in managing international sales and multiple sales teams, and decided to move into marketing and run the global marketing strategy for the CMO at that time, so I could learn more again. After the Odin part was sold to Ingram Micro, I was asked to build up the new marketing and alliance team at Plesk, which is now already 13 people and still growing!

    We have seen a lot of changes during these years – from building up a cloud and hosting partner channel of over 2,500 partners to date, to acquiring many different competitors over time, to signing many large telecom and hosting brands as our valued partners – combined with a continuous stream of innovations from our highly skilled engineers that kept us competitive in an ever-changing market.

    Especially interesting is the change from hosting being a small niche to the transformation to cloud, in which all the mainstream IT players are taking part. Now we are in a transformation again, in which all players – traditional hosting companies and traditional IT companies like system integrators, or even digital agencies – are transforming to become so-called cloud MSPs.

    That means there will be just a few players left that can provide economies of scale in terms of providing global infrastructure – such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud or Alibaba. Everyone else will need to transform their business to focus on real business outcomes for their end customers and understand them very well – and make money by offering a set of managed services that nobody can compete with, by focusing on specific verticals and then moving to the next vertical.

  2. Please give a quick recap of the new developments at Plesk to our readers. What changes can we expect in the near future in Plesk’s vision and strategies?

    Generally, I would describe it in one sentence to start with: Evolution is written with “R”.

    Plesk has always been a channel business with many partners, mostly in the hosting and cloud space. Our end customers are what we call Web Professionals, e.g. developers, designers, IT admins of SMBs and digital agencies.

    Some of these web professionals continue to buy from our hosting partners. But they may also buy services on AWS or Azure, and they still need the managed services layer in many cases! And this is what Plesk is focusing on with its partner channel.

    From having been a tool for automating and simplifying the lives of system administrators for many years, we are transforming more and more into a platform that simplifies the lives of web professionals as well as cloud MSPs. We will continue to bring innovations in that direction – not only being the platform to build, secure and run websites and applications, but also offering simplified scaling of web applications (which is difficult to achieve!), as well as the ability to run Plesk everywhere, cloud agnostic – with capabilities to deploy anywhere – from a single UI with a beautiful UX.

    As part of this journey, and connected with our partnership with Automattic (the company behind WordPress), we also anticipate becoming a market leader in WordPress management. With the latest release of Plesk in March, we have already taken our next step, offering for the first time a fully automated environment for staging, cloning and importing existing WordPress sites – something nobody else can offer today, especially not as a software platform that runs in the cloud or on your own infrastructure.

  3. In the wake of the new developments at Plesk, what should the existing customers expect – are there any surprises in store for them?

    I think there will be no surprises, except positive ones! We will not stop innovating, and we will make sure we are adding value to all parts of the cloud and hosting value chain.

  4. Are there any new products and what will be your target market? How do you plan to differentiate your offerings from the competitors?

    As mentioned earlier, we are currently focusing full steam ahead on new offerings for hyperscale cloud providers, as well as on all features that support the WordPress community.

    What might be interesting to mention is that we are currently working on building standardized, pre-configured solutions with Plesk. Such solutions may include, but are not limited to, a Business Server offering for SMBs, pre-configured eCommerce solutions or WordPress solutions for agencies. These solutions will be made available on hyperscalers, updated in a monthly cycle, for web professionals as well as cloud MSPs – and for our hosting partners in the form of a scripted automation toolset to set up the offerings for their end customers.

  5. You have been the CMO for a long time now. We know that marketing has become more customer-centric rather than product-centric in the wake of digital transformation. So, can you please throw some light, for the marketers reading your interview, on how you plan your marketing strategies? What is your formula for success and getting ROI YoY? 🙂

    Coming from a long-term sales background, knowing our customers very well and having moved into marketing a few years ago, I think there are already many learnings that I can advise everyone to look at. I decided to provide them as a list of bullet points:

    1. Marketing is 80-90% digital and multi-channel. Traditional marketing items such as events or PR do not really work without social media and content marketing support. We have learned that the hard way – we did events and PR only, for over 11 years. Digital marketing only started a bit more than 2 years ago.

    2. Marketing is a matter of managing many different channels. Whatever channel works, push it. Whatever doesn’t, don’t do it again and drop it. There might be things working for Plesk, that don’t work for your business and vice versa.

    3. Marketing is not just generic broadcasting. You need to go very granular, have your personas defined and update them regularly, get to know your customer extremely well and deliver personalized, delightful and automated experiences. This is where my technical and sales background helps me a lot in managing our marketing team. We have only been using marketing automation at Plesk since we became independent, but it’s a key part of our marketing, and it includes more than just email newsletters or content marketing.

    4. Marketing is very technical – learn your tools and know how to use them! But tools are not first – your customer requirements, needs and your value add matter the most.

    5. Use analytics, regularly and constantly. Learn how to interpret them – also this is very technical in many cases.

    6. Content marketing, SEO, paid advertising, social media – all of them are equally important. But in addition, strategic partnerships with bigger, relevant players and building an ecosystem are very important. There is no growth hacking by just focusing on one of them. You need to do them all.

    7. Be fast, adapt to changes and never stop learning. This is much more than just a generic statement. I constantly read new books about digital marketing, listen to interesting podcasts from our industry, and talk to people who have more experience or different experiences. Whoever wants to win needs to be the fastest learner, adapt fastest and transform the learnings directly into execution. Then learn again, then adapt and execute again.

    8. To use Jeff Bezos of Amazon’s statement: be stubborn on vision, be flexible on details.

    9. And if you need more motivation for your teams – read Daniel Pink’s book “Drive” about how to achieve intrinsic motivation, not just extrinsic motivation.

  6. Wrapping up, tell us: what is the one thing you love about Plesk?

    We are a small (200 people currently) but very global and very distributed company. We have many different cultures in our teams – that is absolutely great! We also provide a lot of freedom to our teams as long as our general vision is followed, so there is good motivation across the whole company. At the same time, we have a very entrepreneurial way of working, which is sometimes a bit challenging in terms of work-life balance, but generally it’s a very positive environment and we are very excited about Plesk’s future!

Categories
Cloud Hosted Cloud Apps Hosting Innovation Interviews New Products News Start-Ups Technology

“HybridCluster Allows Hosters to Differentiate, Sell New Services & Drive up Margins”- Luke Marsden, HybridCluster

There is something great about products and services that are developed by industry veterans to relieve their own pain points, to figure out a way around the very problems they face day in and day out, and in the process build something that is a valuable contribution to the industry as a whole. This is where necessity, ingenuity, shortage and perspicacity hold hands in order to give birth to something that has substantial impact on the work cycle of the entire industry ecosystem.

In May this year, when HybridCluster completed $1 million in fundraising and launched HybridCluster 2.0, I was supposed to prepare an interview questionnaire for Luke Marsden, CEO, HybridCluster. I knew little about the product at that time, but somewhere during my research, I decided that HybridCluster is not just a very interesting product – it is a success story.

Why? I’ll let the interview do the talking. But I’ll leave you with this interesting excerpt on the company blog, where Luke talks about the genesis of HybridCluster:

Running our own hosting company since 2001 exposed all the problems. We were continuously battling hardware, software and network issues. After a few too many late-night trips to the data centre, I thought to myself: there has to be a better way. Studying theoretical computer science at Oxford University helped me crystallize my vision for an ambitious new project — one which uses ZFS, local storage, graph theory, and a perfect combination of open source components to create a platform uniquely aligned to solving the problems faced by hosters and cloud service providers.

The HybridCluster software allows applications to get auto-scaled. It can detect a spike in traffic, and rather than throttling the spike, it can burst that application to a full dedicated server by moving other busy things on that server onto quieter servers.

– Luke Marsden, CEO, HybridCluster.

Luke Marsden, CEO, HybridCluster.

Q: Let’s begin with a brief introduction of yourself and a broad overview of HybridCluster.

A: Hi. 🙂 Thanks for inviting me to be interviewed! It’s really great to be on DailyHostNews.

My background is a combination of Computer Science (I was lucky enough to study at Oxford University, where I graduated with a first class degree in 2008) and a bunch of real world experience running a hosting company.

HybridCluster is really a radical new approach to solving some of the tricky problems every hosting company has while trying to manage their infrastructure: it’s an ambitious project to replace storage, hypervisor and control panel with something fundamentally better and more resilient.

In fact I have a bigger vision than that: I see HybridCluster as a new and better approach to cloud infrastructure – but one which is backwardly compatible with shared hosting. Finally, and most importantly – HybridCluster allows hosters to differentiate in the market, sell new services, drive up margins – whilst also reducing the stress and cost of operating a web hosting business. We help sysadmins sleep at night!

Q: Did the idea for a solution like HybridCluster stem from issues you faced first-hand during your decade-long experience in the web hosting industry?

A: Yes, absolutely. Without the real-world pain of having to rush off to the data center in the middle of the night, I wouldn’t have focused my efforts on solving the three real world problems we had:

The first problem is that hardware, software and networks fail resulting in website downtime. This is a pain that every hoster will know well. There’s nothing like the horrible surge of adrenaline you get when you hear the Pingdom or Nagios alert in the middle of the night – or just as you get to the pub on a Friday night – you just know it’s going to ruin the next several hours or your weekend. I found that I had become – like Pavlov’s dog – hard-wired to fear the sound of my phone going off. This was the primary motivation to invent a hosting platform which is automatically more resilient.

Other problems we had in the hosting company included websites getting spikes in traffic – so we knew we needed to invent a hosting platform which could auto-scale an application up to dedicated capacity – and users making mistakes and getting hacked – so we knew we needed to invent something which exposes granular snapshots to the end user, so they can log in and roll back time themselves if they get hacked or if they accidentally delete a file.

Q: Can you please throw some light on the modus operandi of HybridCluster? How exactly does it help web hosts with automatic detection and recovery in the event of outages?

A: Sure. I decided early on that a few key design decisions were essential:

Firstly, any system which was going to stop me having to get up in the middle of the night would have to have no single point of failure. This is easy to say but actually quite hard to implement! You need some distributed system smarts in order to be able to make a platform where the servers can make decisions as a co-operative group.

Secondly, I decided that storage belongs near the application, not off on a SAN somewhere. Not only is the SAN itself a single point of failure, but it also adds a lot of cost to the system and can often slow things down.

Thirdly, I decided that full hardware virtualization is too heavy-weight for web application hosting. I could already see the industry going down the route of giving each customer their own VM, but this is hugely wasteful! It means you’re running many copies of the operating system on each server, and that limits you to how many customers you can put on each box. OS level virtualization is a much better idea, which I’ll talk about more later.

Basically, I designed the platform to suit my own needs: as a young hoster, I was scared of outages, I couldn’t afford a SAN, and I knew I couldn’t get the density I needed to make money with virtualization. 🙂

Q: How does the OS virtualisation you use differ from the hypervisor-based virtualisation used by other virtualised solutions?

A: OS-level virtualization (or “containers”) is simply a better way of hosting web applications. Containers are higher density: because each container shares system memory with all other containers, the memory on the system is more effectively “pooled”. They are better performing: there’s no overhead of simulating the whole damn universe just to run an app. And they’re more scalable: each app can use the full resources of a server, especially when combined with the unique capability that HybridCluster brings to the table – the ability to live-migrate containers between servers in the cluster and between data centers.

Live migration is useful because it allows things to get seamlessly moved around. This has several benefits: administrators can easily cycle servers out of production in order to perform maintenance on them simply by moving the applications off onto other servers, but also, perhaps more excitingly, it allows applications to get auto-scaled – the HybridCluster software can detect a spike in traffic, and rather than throttling the spike (like CloudLinux), it can burst that application to a full dedicated server by moving other busy things on that server onto quieter servers. This is also a unique feature.

Q: How does HybridCluster enable an end user to self-recover lost files and data from even less than a minute ago? This feature, if I’m not wrong, isn’t available with any other solution out there.

A: It’s quite simple really. Every time that website, database or email data changes, down to 30 second resolution or less, we take a new ZFS snapshot and also replicate the history to other nodes in the cluster. ZFS is a core enabling technology for HybridCluster, and we’ve built a smart partition-tolerant distributed filesystem on top of it! Each website, database or mailbox gets its own independently replicated and snapshotted filesystem.

Anyway, these replicas act both as a user-facing backup and a hot spare. It’s a simple idea, but this is actually a revolution in backup technology. Rather than having a backup separate from your RAID or other replication system (where the problem with a replication system like RAID is that it will happily replicate a failure, and the problem with backups is that they take ages to restore), our “hybrid” approach to replicated snapshots kills two birds with one stone, bringing backup restore times down to seconds and letting users fetch files/emails/database records out of snapshots taken with very fine-grained accuracy.

Indeed, HybridCluster is the first hosting platform to expose this feature to the end user and we have seen a number of clients adopt our technology for this benefit alone!
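For readers curious what such a scheme might look like in practice, here is a minimal sketch (in Python, calling the standard zfs and ssh command-line tools) of a per-site snapshot-and-replicate loop. The dataset and peer names are made up, and this is only an illustration of the general idea Luke describes, not HybridCluster’s actual code:

    import subprocess
    import time

    def snapshot_and_replicate(dataset, peer):
        """Snapshot one site's dataset and stream it to a peer node."""
        snap = f"{dataset}@{int(time.time())}"
        subprocess.run(["zfs", "snapshot", snap], check=True)
        # A real system would send incrementally against the previous
        # snapshot; a full send is shown here for brevity.
        send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
        subprocess.run(["ssh", peer, "zfs", "receive", "-F", dataset],
                       stdin=send.stdout, check=True)
        send.wait()
        return snap

    # Roughly every 30 seconds, matching the granularity described above.
    while True:
        snapshot_and_replicate("tank/sites/example.com", "node2.example.net")
        time.sleep(30)

Each replicated snapshot then doubles as both a restore point and a hot spare on the receiving node, which is the “hybrid” idea in a nutshell.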

Q: Is the low-cost storage system able to deliver the efficiency of high-end SANs? Also, what additional value does ZFS data replication bring into the picture?

A: I’m glad you mentioned ZFS again 🙂 One of the really nice things about being backed onto ZFS is that hosters using HybridCluster can choose how powerful they want to make their hosting boxes. Remember, with HybridCluster, the idea is that every server has a local storage and uses that to keep the data close and fast. But because ZFS is the same awesome technology which powers big expensive SANs from Oracle, you can also chuck loads of disks in your hosting boxes and suddenly every one of your servers is as powerful as a SAN in terms of IOPS. In fact, one of our recent hires, a fantastic chap by the name of Andrew Holway, did some hardcore benchmarking of ZFS versus LVM and found that ZFS completely floored the Linux Volume Management system when you throw lots of spindles at it.

I won’t go into too much detail about how ZFS achieves awesome performance, but if you’re interested, try Googling “ARC”, “L2ARC” and “ZIL”. 🙂

The other killer feature in ZFS is that it checksums all the data that passes through it – this means the end to bit-rot. Combined with our live backup system across nodes, that makes for a radically more resilient data storage system than you’ll get with Ext4 on a bunch of web servers, or even a SAN solution.

There’s lots more – call us on +44 (0)20 3384 6649 and ask for Andrew who would love to tell you more about how ZFS + HybridCluster makes for awesome storage.

Q: How does HybridCluster achieve fault-tolerant DNS?

A: Something I haven’t mentioned yet is that HybridCluster supports running a cluster across multiple data centers, so you can even have a whole data center fail and your sites can stay online!

So quite simply the cluster allocates nameservers across its data centers, so if you have DC A and B, with nodes A1, A2, B1, B2, the ns1 and ns2 records will be A1 and B1 respectively. That gives you resilience at the data center level (because DNS resolvers support failover between nameservers). Then, if a node fails, or even if a data center fails, the cluster has self-reorganising DNS as a built-in feature.

We publish records with a low TTL, and we publish multiple A records for each site: our AwesomeProxy layer turns HybridCluster into a true distributed system – you can send any request for anything (website, database, mailbox, or even an FTP or SSH session) to any node and it’ll get revproxied correctly to the right backend node (which might dynamically change, e.g. during a failover or an auto-scaling event). So basically, under all failure modes (server, network, data center) we maximize the chances that the user will quickly – if not immediately – get a valid A record which points to a server that is capable of satisfying that request.

In other words HybridCluster makes the servers look after themselves so that you can get a good night’s sleep.
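To make the record-publishing idea concrete, here is a tiny illustrative sketch in Python, with made-up node addresses and a placeholder health check – not HybridCluster’s implementation – of answering a DNS query with low-TTL A records for every node currently believed healthy:

    TTL = 60  # low TTL so resolvers re-query soon after a failover

    # Multiple A records per site: any healthy node can reverse-proxy
    # the request to whichever node really holds the data right now.
    ZONE = {
        "example.com.": ["203.0.113.11", "198.51.100.21"],  # node A1 (DC A), node B1 (DC B)
    }

    def healthy(addr):
        """Placeholder health check; the cluster would probe the node."""
        return True

    def answer(name):
        """Return A records only for nodes that look healthy."""
        return [ip for ip in ZONE.get(name, []) if healthy(ip)]

    print(answer("example.com."), "TTL =", TTL)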

Q: How do you see the future of data center industry?

A: That’s an interesting question 🙂 I’ll answer it for web applications (and databases + email), specifically.

Personally I see cloud infrastructure as a broken promise. Ask the man (or woman) on the street what they think cloud means, and they’ll probably tell you about increased reliability, better quality of service, etc. But really, all that cloud does today is provide less powerful, unreliable infrastructure on top of which software engineers are expected to build reliable software. That’s a big ask!

My vision is for a fundamentally more reliable way of provisioning web applications – one where the underlying platform takes responsibility for implementing resilience well, once, at the platform level. Developers are then free to deploy applications knowing that they’ll scale well under load, and get failed over to another server if the physical server fails, or even if the whole data center goes pop.

I think that’s the promise of PaaS, and my vision is for a world where deploying web applications gets these benefits by default, without millions of sysadmins in hosting companies all over the world having to get paged in the middle of the night to go fix stuff manually. Computers can be smarter than that, it’s just up to us to teach them how. 🙂

Q: Tell our readers a bit about the team HybridCluster?

A: Since we got funded in December 2012 we’ve been lucky enough to be able to grow the team to 9 people, and I’m really proud of the team we’ve pulled together.

We’re a typical software company, and so unfortunately our Dave to female ratio is 2:0. That is, we have two Daves and no females (but we’re working on that!). Anyway, some highlights in the team are Jean-Paul Calderone, who’s the smartest person I’ve ever met, and the founder of the Twisted project. Twisted is an awesome networking framework and without Twisted – and JP’s brain – we wouldn’t be where we are today. Also on the technical side, we’ve got Rob Haswell, our CTO, who’s a legend, and doing a great job of managing the development of the project as we make it even more awesome. We’ve also just hired one of JP’s side-kicks on Twisted, Itamar Turner-Trauring, who once built a distributed airline reservation system and sold it to Google.

We’ve also got Andriy Gapon, FreeBSD kernel hacker extraordinaire, without whom we wouldn’t have a stable ZFS/VFS layer to play with. Dave Gebler is our Control Panel guru and we’re getting him working on our new REST API soon, so he’ll become a Twisted guru soon 😉 And our latest hire on support, Marcus Stewart Hughes, is a younger version of me – a hosting geek – he bought his first WHMCS license when he was 15, so I knew we had to hire him.

On the bizdev side, we’ve got Dave Benton, a legend in his own right, who’s come out of an enterprise sales background with IBM, Accenture and Star Internet, he’s extremely disciplined and great at bringing process into our young company. Andrew Holway is our technical pre-sales guy, previously building thousand-node clusters for the University of Frankfurt, and he loves chatting about ZFS and distributed systems. He’s also great at accents and can do some pretty awesome card tricks.

Q: To wrap up, with proper funding in place for development of the products, what’s in the bag for Q3 and Q4 of 2013?

A: We’re working on a few cool features for the 2.5 release later this year: we’re going to have first class Ruby on Rails and Python/Django support, mod_security to keep application exploits out of the containers, Memcache and Varnish support. We’re also working on properly supporting IP-based failover so we don’t have to rely on DNS, and there are some massive improvements to our Control Panel on their way.

It’s an exciting time to be in hosting 😉 and an even more exciting time to be a HybridCluster customer!

Thanks for the interview and the great questions.

Categories
Cloud Cloud News Hosted Cloud Apps Hosting Interviews News Start-Ups Technology Web Hosting

“We Provide Constant Visibility Into a Company’s Cloud Spending”- Mat Ellis, Cloudability

Self-service, one of the greatest features of cloud computing, makes the lives of enterprises jumping on the cloud bandwagon easier in more than one way. It makes it possible for them to have software, security, infrastructure and many other full-blown enterprise-capable applications up and running in minutes. All their website data could sit in the cloud on infrastructure they don’t even own or operate.

But such ease of use and flexibility also brings with it less visibility of resources, less control over computing, unintended expenses and ballooned bills. A big majority of companies are unaware of deployed services and resources that they don’t even need or aren’t properly utilizing.

A large number of workloads, lax monitoring and a lack of usage alert thresholds lead to bill shocks for such companies, who find keeping track of what they’re spending too herculean a task. Add to that unwieldy spreadsheets with heavy amounts of billing data and a decentralized financial view, and they’re in for a nightmare.

We spoke to Mat Ellis, co-founder and CEO of Cloudability, on the importance of avoiding unexpected or unnecessary cloud computing expenses and how Cloudability helps companies do so.

Cloudability is designed to be used by anyone in an organization, from engineers and IT/Ops pros to finance, management and C-suite team members. This means that cloud cost and usage data is accessible by everyone who needs it without having to mess around with spreadsheets and powerpoint presentations.

– Mat Ellis, CEO, Cloudability.

Mat Ellis, CEO, Cloudability.

Q: Let’s begin with a broad overview of Cloudability.

A: The cloud has spurred a revolutionary increase in growth. Pinterest, Instagram, Netflix are all growing at unheard of rates. But managing cloud resources during that kind of growth presents a huge problem.

Cloudability helps companies overcome the growing pains so that they can continue to take full advantage of the cloud revolution and grow at unheard-of speeds.

We provide comprehensive tools in an easy-to-use SaaS format that measure cloud infrastructure costs and usage throughout your organization, allowing our users to:

  • Find cloud resources that they’re paying for but not using.
  • Track their cloud spending and usage trends over time, and plan for the future.
  • View their spending in the context of important business actions, e.g. “we spend $2 per user per month on the cloud”.
  • Predict and track ROI on large cloud purchases like AWS Reserved Instances.

Q: Can you please throw some light on the modus operandi of Cloudability? How exactly does it help organizations in mitigating their cloud costs?

A: Most people assume that Cloudability’s primary benefit is in mitigating cloud costs. The reality is that we have a lot of customers who would like nothing more than to increase their cloud usage.

We provide constant visibility into a company’s cloud spending so that Operations, IT, Finance and Management are always confident that any dollar spent on the cloud is a dollar well-spent. It’s critical to develop that level of assurance when you’re spending so much money on a variable resource like cloud computing.

Q: Is there any difference in how Cloudability works with respect to the service provider? I mean, does your monitoring process differ between a client using GitHub and another one using Amazon?

A: Ideally, we’d love to provide the same level of visibility for all of the cloud services our customers use. But we’re sometimes restricted by the amount of data that the service provider provides.

For instance, AWS provides hourly billing and usage data with a lot of granularity. That’s allowed us to build out a very deep analytics interface to track and analyze a user’s AWS costs and usage. For other providers, though, we only have access to daily or even monthly billing data.

Regardless of the level of integration, though, our users love having one report at the end of the month that contains all of their cloud spending.

Q: Founded in 2011, Cloudability passed $250M in cloud spending in a very short span of time. When you look back, what is one factor that you would say contributed most to your growth?

A: Our growth is a product of two factors:

First, we were the first company to recognize that the cloud was going to radically change the way companies managed their IT spending. That gave us a big head start and helped us reach a lot of cloud users right when they started feeling the pain.

Second, the cloud market itself is growing and our customers are growing with it. There are companies that we started working with two years back, who had one team of a few people working on AWS. Now their entire company is moving to the cloud … and creating their own Cloudability accounts.

Q: Do you see any major shift in the market’s perception of ‘Cloud’ during these 3 years?

A: Absolutely. At a fundamental level, the cloud has gone from “It’s coming. Are you ready?” to “It’s here. Are you on board?”.

While there’s still some discussion to be had about things like maximizing security in some applications or uptime in others, the conversation is now less about whether or not you should use the cloud and more about how you should be using the cloud.

That’s why we’re seeing revenue predictions of $20B/year by 2020 for cloud services like AWS. It’s also why we’re seeing traditional IT companies like VMware and Oracle coming out with their own public cloud services.

Q: You recently launched your new product – AWS Cost Analytics. It has a lot of features which I think can be of real value to heavy AWS users. So can you talk in detail about how each one of them helps in enhanced monitoring and analysis of cloud costs?

AWS tag mapping:

AWS tag mapping is, hands-down, our most popular feature. Ever since AWS started allowing their users to tag resources, those users have been desperate for an easy way to apply those tags to spending and usage data.

Cloudability’s AWS tag mapping lets finance and management teams break down their AWS costs from one or more accounts by cost centers like department, project or client. Meanwhile, operations and engineering teams can see usage and optimization data broken out along functional lines like environment, team or role.

AWS Product-level spending reports:

Seeing your AWS spending by product (EC2, S3, RDS, etc.) is hard enough with one account. If you have more than one account, it becomes a huge monthly chore involving hours with a spreadsheet. Cloudability automates the entire process by pulling in billing data from all of your AWS accounts and giving you an easy way to see that spending broken down by product, time frame or anything else you can think of.
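As a rough, present-day illustration of the same idea – grouping one account’s spend by product – the snippet below uses the AWS Cost Explorer API via boto3. This is purely an assumed approximation (Cost Explorer did not exist at the time of this interview) and is not Cloudability’s pipeline:

    import boto3

    ce = boto3.client("ce")  # AWS Cost Explorer
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": "2024-06-01", "End": "2024-07-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    for group in resp["ResultsByTime"][0]["Groups"]:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{service}: ${amount:,.2f}")

Cloudability’s value on top of raw data like this, as described above, is pulling it in from all of a company’s accounts and letting anyone slice it without spreadsheets.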

Segmented reports for multiple AWS accounts:

In larger companies using AWS, it’s pretty common to find multiple AWS accounts that have been set up for different teams, departments, projects, etc. This is the old school way of breaking down the company’s costs.

Inevitably, though, you need to look at the costs across all of the company’s accounts broken down by another dimension. Suppose you have three different AWS accounts for three different products. Within each product’s account, you have a dev environment, a testing environment and a production environment. So how do you show what your company is spending on testing across all three of your products?

Now it’s simple. You can tag your resources in all three accounts as environment=dev, environment=testing, or environment=production. Then, with all three accounts added to Cloudability, you can view your aggregate spending broken down by the tag “environment.” Now your finance and management teams can make better, more informed decisions with a better understanding of their costs.
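To see how the roll-up works once everything is tagged, here is a minimal sketch in Python with made-up line items (not Cloudability’s data format or API), aggregating spend from three accounts by the “environment” tag:

    from collections import defaultdict

    # Hypothetical billing line items pulled from three AWS accounts.
    line_items = [
        {"account": "product-a", "cost": 120.0, "tags": {"environment": "testing"}},
        {"account": "product-b", "cost": 340.0, "tags": {"environment": "testing"}},
        {"account": "product-c", "cost": 275.0, "tags": {"environment": "testing"}},
        {"account": "product-a", "cost": 980.0, "tags": {"environment": "production"}},
    ]

    def spend_by_tag(items, tag_key):
        """Sum cost across all accounts, grouped by one tag's values."""
        totals = defaultdict(float)
        for item in items:
            totals[item["tags"].get(tag_key, "untagged")] += item["cost"]
        return dict(totals)

    print(spend_by_tag(line_items, "environment"))
    # {'testing': 735.0, 'production': 980.0}

The question “what is the company spending on testing across all three products?” then reduces to reading one number from that report.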

Customized Metric Reports:

Let’s face it. There are a lot of AWS dashboards out there; plenty of static reports that can show you a simple analysis of your company’s AWS cost and usage data. But what happens when you need to see the data in a new way? Broken down by a new dimension?

Cloudability’s AWS Cost Analytics tool was built on a foundational principle that the best person to design a report for your organization is you. You know which questions need to be answered better than anyone else. So, instead of just creating another AWS dashboard, we’ve created a platform that allows IT, DevOps and Engineering pros to create any report they need.

Cloudability AWS Cost Analytics

Q: Since you have over 6,000 clients, can you tell us three things most organizations overspend on when it comes to cloud services?

A: First, they often don’t accurately know their own spending. Finding out who is actually using the cloud can be challenging, even in smaller organizations. And even when you think you know, finding out exactly what is being spent is very time consuming to keep up to date and accurate. We often see spend drop 20% when new users sign up, simply because they know their spending for the first time. You can’t control what you aren’t measuring.

Secondly, engineers are notorious for over-provisioning. They will readily turn on new services but sometimes aren’t so diligent in turning them off when they are no longer needed. And when you ask what all these services are being used for, or what will happen when you turn them off, you can get a mouthful of technical details that can be hard to parse. So make sure you know how to gauge what’s actually being used. (Fortunately, tools like ours can point out services that appear under-utilized.)

Finally, the biggest and most spectacular overages are often caused by human error and/or malice. Scripts that turn servers on but not off and security compromises are the leading causes. In these cases we’ve seen overages in the hundreds of thousands of dollars, but they’re easy to detect if you’re watching your costs on a daily basis.

Q: With the complexity and size of data increasing, many organizations have started taking cloud expenditure seriously. We’ve thus seen a sudden boom in analysis tools like yours in the last couple of years. With such bracing competition out there, how do you plan to stand out?

A: At Cloudability, we’ve always differentiated ourselves along three lines:

  • Fully customizable reporting: With our Cost and Usage analytics tools, users can mix, match, slice and dice their cost and usage data any way that they need to see it. This gives a much greater level of flexibility than traditional dashboard tools with pre-defined reports.
  • Organization-wide ease-of-use: Cloudability is designed to be used by anyone in an organization, from engineers and IT/Ops pros to finance, management and C-suite team members. This means that cloud cost and usage data is accessible by everyone who needs it without having to mess around with spreadsheets and powerpoint presentations.
  • Cloud-agnostic cost management: Cloudability has always worked to support as many IaaS, PaaS and SaaS vendors as possible. This means that organizations can track, manage and report their entire cloud spending from one tool.

Q: What do you have to say to non-technology companies who probably aren’t very conscious when it comes to budget allocation for the cloud?

A: Cloud cost management is no different than any other area of budgeting. It takes three steps:

  • You have to monitor your cloud spending daily so that you can react to changes before they get out of hand.
  • You have to be able to segment your cloud spending based on cost and profit centers so that you know what impact it’s having on your bottom line.
  • And you have to be able to communicate your cloud spending quickly and effectively to anyone in your organization who needs it.

Q: Tell our readers a bit about team Cloudability?

A: A picture is worth a thousand words. Here’s a team photo from our last off-site, which was held at Sunriver Resort in July:

Cloudability Team Group Photo

Q: To wrap up, what changes can we expect in the cloud computing market in 2013 and your footprint in it?

A: 2013 is the year of the enterprise. The world’s largest organizations are embracing the cloud, and their usage and spending is only increasing. It’s not uncommon anymore to talk to companies that are spending $1M-$2M per month on their cloud infrastructure. With that much money at stake, it’s more critical than ever to be able to quickly and effectively track, manage and communicate a company’s cloud spending throughout the month.

Cloudability is stepping up to that challenge with a whole new suite of enterprise-ready features, like multi-user support and account grouping and views, that are designed to make it easier and easier to manage cloud spending in large organizations.

Categories
Datacenter Hosting Interviews News Technology

“When it Comes to DR and BCP, Preventive Maintenance is Absolutely Critical”- Brad Ratushny, INetU

As Hurricane Sandy ravaged the East Coast last year, causing more than $68 billion in damage, it also brought significant attention to the disaster readiness of data centers. Various data center facilities struggled with power problems amid widespread flooding and utility outages, immediately impacting the businesses that rely on those resources.

While several data centers that downplayed adverse geological contingencies were caught completely off-guard, various state-of-the-art facilities with meticulous DR planning also found it difficult to stay up and running in the face of the unprecedented scale of Sandy.

The storm exposed gaping holes in the scope of existing disaster plans and underscored the need for better monitoring measures, preemptive testing, backup power and several other improvements.

In the wake of the upcoming hurricane season, I spoke to Brad Ratushny, Director of Infrastructure at INetU, on how the Allentown, PA-based company stayed online and ensured zero downtime for its clients during Sandy. Brad, an industry veteran with over 15 years of experience, is in charge of all data centers in the INetU global footprint.

In this interview, he talks about several proactive steps data centers can take to mitigate the effects of natural disasters, including testing all backup systems, reviewing emergency procedures, performing final generator checks and having backup fuel vendors on standby.

Q: Let’s begin with a brief introduction of yourself and a broad overview of INetU’s services.

A: My name is Brad Ratushny and I’m the Director of Infrastructure at INetU. I have been with the company for 15 years, and in my current role specifically for about 5 years. We at INetU have been providing dedicated business hosting and cloud services for more than 15 years. We pride ourselves on being experts in engineering complex hosting solutions and having first-hand experience with compliance-based projects in the US and throughout our global footprint.

Q: Please tell us about INetU’s Data Center facilities and the Infrastructure and Technical specs you have in place from the Disaster Recovery and the Business Continuity POV.

A: We have a total of 10 data centers – in Seattle, Pennsylvania, Ashburn, Virginia, Amsterdam and Allentown, where we are headquartered. We’re a very risk-averse company and always try to ramp up whatever we do, because we like to be a little bit safer. For example, while the typical run-time for generators in the industry is 24 hours, our fuel tanks have capacity for 48 hours of run-time.

In addition to N+1 UPSs and generators, we also have additional portable units to make sure that we’re always safe in case of power outages.

Also, even though we’re not in a hot zone for lightning strikes, we have lightning rods on the roofs of our facilities to deflect thunderstorm outages, and the lightning surge suppression in our data centers is UL listed.

In addition to having the proper data center infrastructure in place, we also take care of proper maintenance of the building envelope, roof etc. to keep everything up and running.

Q: What would you say are some of the key measures that data centers need to have in place to mitigate the adverse impacts of the natural disasters? Also, can you share with us some examples to show how you approach data center disaster planning?

A: The biggest thing in my mind, from the Director of Infrastructure perspective, is testing, testing and testing. When we’re talking about DR and BCP, preventive maintenance is absolutely critical. DR and BCP plans aren’t something that just sit on someone’s bookshelf. They’re living, breathing documents that are often the blueprint for how people adapt to emergencies. I actually rely on emergency preparedness plans quite a bit.

Largely, systems are absolutely critical, but the people that operate those systems are even more important. So training your team for specific situations is very important. What I mean is, when you train for Hurricane Sandy, you look at possible power disruption, cooling disruption and disruption to your various other infrastructure components; the same training applies to other potential natural disasters as well, but you need to look at what disaster you could be faced with in the near term and accordingly adjust, train and be prepared for that.

Let’s look at what happened on the East Coast when Hurricane Sandy hit last year. A lot of data centers on the coast had their disaster plans ready. They had up to 5 days of fuel on site to run their generators. Now, I know of a few examples where the generators didn’t run at all because the fuel wasn’t maintained properly. They had the fuel, but it wasn’t rotated and maintained in a timely manner, so it started clogging up the generators, causing them to fail.

Also, what most people didn’t expect was that the fuel trucks and fuel services couldn’t get the fuel they needed on the coast, because fuel delivery up and down the coastline became a challenge in itself. So instead of getting their fuel along the coast, which is the usual practice, they started coming inland to areas like ours, where we were concerned about a fuel shortage ourselves. When we came to know about this possibility, we went out and started setting up contracts with people in the Midwest and Western Pennsylvania to make sure we wouldn’t be impacted.

Fortunately, it never got to that point, but it’s a good example of how you can’t just live by your plan and need to think everything through level by level to respond to a disaster effectively. And that’s why I said that your DR and BCP plans are living, breathing documents. You need to train on them properly and make sure that you’re adaptable to emergencies as you go through.

Q: How do INetU’s Disaster Recovery capabilities ensure continuity in the event of a site-level failure?

A: Our primary focus is keeping our mission critical websites up and running, but plenty of our clients do actually use us for disaster recovery for their primary site. Again, I’ll use Hurricane Sandy as an example. During Sandy, we were just on-boarding a DR client and working with them to get the configuration setup. Their main configuration was somewhere along the coast and unfortunately, they were very heavily impacted at their primary facility. Even though they weren’t fully live here yet, they physically brought us their equipment. Now colocation is not a focus for us, but when we have an enterprise client who we are working with, we are flexible and we do whatever we need to do to help them.

So the client walks into our lobby with mud on their shoes and a server in hand, and we help them get their business back up and running. Ever since then, they have actually been using us for their primary site, and they use theirs as a backup site. So we are proud that we go the extra mile to help our clients, and that’s what we are here for.

Q: How does INetU ensure that their data centers remain energy efficient?

A: We are constantly striving to increase efficiency in each area of our operation. In addition to aiding our clients’ move to the cloud, where it makes sense for them, we also monitor and implement efficiencies in our data center operations as well as in the building envelope as a whole. These efficiencies can include replacing aged, less efficient infrastructure with newer, more efficient hardware, decommissioning underutilized equipment, or increasing insulation to improve a facility’s R-value.

Q: Wrapping up, since you’ve been in the industry for a long time, what according to you are some of the questions organizations should ask while choosing an enterprise data center from the security POV?

A: First, you need to make sure that the data center has all the relevant industry certifications, like PCI DSS compliance, SOC 3, SOC 2/SSAE 16 Type II and ISAE 3402. Then you need to go a level deeper than that and check the physical security of the data center, security equipment, processes, etc. You also need to check if they have proper procedures to control, monitor and record access to the facility. For example, some legacy data centers are relatively unmanaged and don’t have 24×7 security, which is fine for certain applications, but definitely not for enterprise environments.

So you need to look into all these factors, weigh them and think further how they apply to your business specifically.
