Categories
Innovation News Technology

Microsoft shifts its focus from the mobile-first, cloud-first world towards the intelligent cloud

Microsoft, at its Build 2017 conference, revealed its vision of a future workplace powered by artificial intelligence, IoT and robotics.

Microsoft’s keynotes focused on serverless and edge computing models that will bring new capabilities to almost every industry, from healthcare to manufacturing.

CEO Satya Nadella spearheaded several announcements aimed at the developer community, and advised developers to make good use of technology to help society advance.

He said, “We should empower people with technology – inclusive design can be an instrument for inclusive growth.” Warning developers to be more responsible, he added, “It is up to us to take responsibility for the algorithms we create.”

He considers IoT the main driver of data, which needs to be analyzed to extract maximum benefit.

“The platform shift is all about data. When you have an autonomous car generating 100GB of data at the edge, the AI will need to be more distributed. People will do training in the cloud and deploy on the edge – you need a new set of abstractions to span both the edge and the cloud,” he said.
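Nadella’s “train in the cloud, deploy on the edge” line describes a concrete development pattern. As a minimal sketch of that pattern only (all names below are invented for illustration, and this is not a Microsoft API), the “cloud” side fits a tiny model and exports it as a small artifact, while the “edge” side loads the artifact and scores readings locally:

```python
# Minimal sketch of the "train in the cloud, deploy on the edge" pattern.
# All names are hypothetical illustrations, not Microsoft APIs: the "cloud"
# side fits a tiny model, the "edge" side loads the exported weights and
# scores readings locally without a round trip to the cloud.
import json
import numpy as np

def train_in_cloud(readings: np.ndarray, labels: np.ndarray) -> dict:
    """Fit a least-squares linear model on aggregated fleet data."""
    X = np.c_[readings, np.ones(len(readings))]      # add bias column
    w, *_ = np.linalg.lstsq(X, labels, rcond=None)
    return {"weights": w[:-1].tolist(), "bias": float(w[-1])}

def export_model(model: dict, path: str = "model.json") -> None:
    """Package the model as a small artifact an edge device can download."""
    with open(path, "w") as f:
        json.dump(model, f)

def score_on_edge(sample: np.ndarray, path: str = "model.json") -> float:
    """Run inference locally on the device."""
    with open(path) as f:
        model = json.load(f)
    return float(np.dot(sample, model["weights"]) + model["bias"])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))                     # simulated telemetry
    y = X @ np.array([0.5, -1.2, 2.0]) + 0.3
    export_model(train_in_cloud(X, y))
    print(score_on_edge(np.array([0.1, 0.2, 0.3])))
```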

Microsoft has always emphasized a cloud-first, mobile-first world, but with this event it shifts its focus towards cognitive solutions and AI. Developers will get access to four new cognitive services in addition to the 25 existing ones; three of the new services will be user-customizable.

“We are moving from mobile first, cloud first to a world made from an intelligent cloud and an intelligent edge,” said Nadella.

Thus, the move towards the intelligent cloud will be the new mantra at Microsoft.

The session also covered edge computing, with Microsoft unveiling Azure IoT Edge, a cross-platform runtime for Windows and Linux that can run on devices smaller than a Raspberry Pi.

Nadella talked about how AI could be used to identify objects and people and bring more automation to the future workplace. A demo involving a heart patient walking around with sensors attached showed the level of AI application to come: the sensors could notify a nurse if the patient’s readings became abnormal at any point.
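The sensor demo boils down to a simple edge-side rule: watch a stream of readings and page a nurse when they leave a safe range. A minimal sketch of that idea follows; the thresholds and the notification hook are assumptions made for illustration, not details Microsoft disclosed:

```python
# A minimal sketch of the kind of edge-side alerting the demo implies.
# The thresholds and the notify_nurse() hook are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Reading:
    patient_id: str
    heart_rate_bpm: int

LOW_BPM, HIGH_BPM = 50, 120   # assumed safe range for the sketch

def notify_nurse(reading: Reading) -> None:
    # Stand-in for whatever paging/notification channel a hospital would use.
    print(f"ALERT: patient {reading.patient_id} at {reading.heart_rate_bpm} bpm")

def check(reading: Reading) -> None:
    if not LOW_BPM <= reading.heart_rate_bpm <= HIGH_BPM:
        notify_nurse(reading)

for r in [Reading("p-42", 82), Reading("p-42", 139)]:
    check(r)   # only the second reading triggers an alert
```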

Though the keynotes seem promising, their application will certainly place a lot of responsibility on developers.

Categories
Cloud Innovation News Technology

Increased IoT adoption opens new doors of opportunity for providers

Rising internet penetration, a growing number of interconnected devices, the BYOD trend in organizations and the emergence of new technologies are driving growth in the Internet of Things (IoT). The term IoT refers to a system of machines and computing devices interconnected via the internet, allowing easy exchange of data.

As per data released by Microsoft at the Hosting and Cloud Summit 2017, IoT is impacting businesses of all sizes, from manufacturing, infrastructure, logistics and transportation to government, and is helping organizations with insightful data and analytics. It also enables asset management and remote monitoring. The coming years could see several innovations around IoT, like smart lighting to control energy consumption, smart parking and more.

Overall, IoT has a wide range of use cases across a plethora of industries that are using big data to better manage their operations, reduce costs and increase efficiency. As per a study by 451 Research, technologies that are going to change the business landscape in the next five years include the Internet of Things, artificial intelligence, robotics, wearables, nanotechnology and 3D printing.

Although there are no dominant players in the IoT landscape at present, a few noteworthy names include Microsoft, IBM, Cisco, AT&T, Oracle and others. There are ample opportunities for cloud service providers to capitalize on IoT by providing low-cost solutions and eliminating customer bottlenecks. However, security concerns will continue to surround IoT applications and platforms, and organizations will spend considerable time choosing the right IoT provider.

Categories
Articles Cloud Innovation News Technology

Microsoft brings co-authoring in Excel to accelerate team collaboration

Microsoft introduced a live co-authoring feature for Excel on Windows through a beta update launched on Tuesday. The collaborative editing tool lets testers co-author the same file from inside the app.

The support for real-time collaboration is Microsoft’s step to keep up with Google’s G Suite offering. The latter has offered real-time collaboration for years and has been working to attract enterprise customers who already use Microsoft Office.

By allowing users to edit documents in real time, it brings a change to client applications, which were earlier restricted to single-user editing.

Co-authoring has been a major focus of Office 2016, which Microsoft launched nearly a year and a half ago: it first enabled co-authoring in the desktop version of Word, and later expanded it to PowerPoint as well.

To start using the co-authoring feature, the user must have the file saved in one of Microsoft’s online file storage systems (OneDrive, OneDrive for Business or SharePoint Online). The profiles of active users are displayed at the top; clicking one takes the user to where the other person is currently editing.

Microsoft also introduced a new beta auto-save feature on Windows, which automatically saves any Word, Excel or PowerPoint document stored in the company’s cloud storage services while users edit it. Other productivity giants like Google and Apple have already implemented such features in their tools.

For now, Microsoft has rolled out the new feature only to people who are part of the Office Insider Program’s Fast ring, and expects to extend it to more users soon. People who don’t have the beta version of the desktop app can use Excel Online, along with Excel for Windows Phone, Android and iOS (which is still on the company’s roadmap).

Categories
Cloud Event Innovation News

Microsoft to Aid MSPs and Cloud Providers to Gain in New Era of Digital Transformation

The 2017 Microsoft Cloud and Hosting Summit, being held in Bellevue, Washington, is attended by nearly 500 partners from different industry verticals who are exploring opportunities in the digital transformation era.

Aziz Benmalek, Microsoft VP – Worldwide Hosting & Managed Service Providers, who gave a presentation at the summit, shared findings from a Microsoft-commissioned study conducted by 451 Research in a blog post.

The study reveals that huge opportunities exist for Microsoft cloud partners to aid customers with managed services and implementations in the hybrid cloud in the era of digital transformation.

More than ever before, customers are looking to a single trusted advisor to provide transformation-oriented managed services and hybrid implementation. Customers are looking to service providers to not only transform IT but also transform their entire business – to rewire the building and support new requirements, all while keeping the lights on.
– Melanie Posey, Vice President, 451 Research.

Service providers have a very important role to play in hybrid solution implementation and management as digital transformation progresses. Approximately 90 percent of customers surveyed are willing to pay a hefty premium to service providers for implementation and management of their hybrid cloud environment.

Also, the survey found that for the third consecutive year, Microsoft Azure is users’ top choice of IaaS platform for hybrid cloud. Benmalek says, “At Microsoft, we are working with tens of thousands of partners to jointly deliver not only hybrid offerings but a full portfolio of cloud services. We have seen double-digit growth in Hosting & Managed Services over the past five years straight, with no signs of a slowdown.”

Microsoft’s 2016 study had revealed that companies rely less on physical infrastructure and more on digital infrastructure. This trend is visible this year too: the ‘beyond infrastructure’ shift is continuing, with services expected to account for 74% of hosting/cloud spend in 2017, up from the 71% reported in 2016. This demonstrates growing opportunities for service providers.

As per the study, increased hybrid cloud adoption among North American respondents is driven by factors like choice and flexibility, the ability to extend the capacity of on-premises infrastructure, and increased ROI on existing on-premises IT investments, with the ability to use public cloud for new workloads.

But the new hybrid cloud environment is more complex, and therefore customers are relying on service providers to manage these services.

“Managed services are becoming king. Customers are looking for service providers to run the whole stack for them,” Benmalek said.

As per the research, the service providers set to benefit include hosting providers, security service providers, IaaS providers, system integrators and other IT MSPs. Per 451 Research and Microsoft, 57 percent of respondents said they would rely on a managed hosting provider or an MSP, 54 percent on a public cloud IaaS provider, 53 percent on a security service provider, and 51 percent on an IT outsourcing, consulting or system integration firm for help in their multi-cloud or hybrid cloud journey.

Regarding hybrid cloud vendors, 39 percent of respondents said they would like to buy hybrid cloud by obtaining different services from multiple vendors, while 36 percent preferred a single service provider offering an integrated solution built from multiple vendors.

In 2016, the most sought-after managed services included archiving; backup and recovery; CDN and managed networking; disaster recovery; and monitoring and alerting. This year the list remains the same, but with the addition of round-the-clock support from service providers.

Also, enterprises will be seeking professional services to help them realize digital transformation goals, including modernization of applications and integration of traditional systems and business processes with SaaS capabilities.

Read more about Microsoft Cloud and Hosting Summit here.

Categories
Cloud Cloud News Conf/Webinar Event Innovation News Start-Ups Technology

Final Speakers Line-up and Schedule for OpenNebula Conference 2013 Announced

The OpenNebula Project today announced the complete line-up of speakers and the final schedule for the first OpenNebula Conference, taking place from September 24 to 26 in Berlin.

The three-day conference will feature 6 keynotes, 2 tracks with 16 regular talks, 1 hands-on tutorial and 3 community sessions.

Keynote speakers include Ignacio M. Llorente, OpenNebula; Daniel Concepción, Produban – Bank Santander; Steven Timm, Fermilab; André von Deetzen, Deutsche E-Post; Jordi Farrés, European Space Agency; Karanbir Singh, CentOS Project; and Ruben S. Montero, C12G Labs.

The talks are organized in three tracks covering user experiences and case studies; integration with other cloud tools; and interoperability and HPC clouds. They include speakers from organizations like CloudWeavers, Terradue, NETWAYS, INRIA, BBC, inovex, AGS Group, Hedera, NetOpenServices, CESNET and CESCA.

The Hands-on Tutorial will show users how to build, configure and operate their own OpenNebula cloud.

The Lightning Talks, Hacking and Open Space Sessions will provide an opportunity to present and discuss burning ideas, and meet face to face to discuss development.

Here is more information on the OpenNebula Conference.

Earlier this year, OpenNebula released OpenNebula 4.0.

Categories
Domain Ecommerce Hosting Innovation New Products News Start-Ups Technology

“POP Helps Users to Set up & Manage Their Online Presence Easily”- Juan Diego Calle, .CO Internet

Yesterday saw the launch of POP.CO, a new platform that helps businesses get online instantly. The company behind POP.CO, .CO Internet S.A.S., is well known for coming up with fresh ideas to help entrepreneurs and startups establish a successful online presence.

POP.CO, the company’s latest endeavor, is a bundle of online services that comes with a .CO domain name, a custom POP Page and an email address powered by Google Apps.

Entrepreneurs, startups and small businesses that aren’t especially tech-savvy don’t need to hire a web developer to build a professional web presence, or spend extra time finding a web address, hosting provider, email account, etc., as a central POP account handles all that. It also enables users to add and install proprietary or third-party apps on their domains directly through the platform.

Currently in Beta, POP is free for 15 days. Once the free trial ends, users can subscribe for $5 per user per month for the whole POP package.

We had a quick Q & A with Juan Diego Calle, CEO, .CO Internet, on additional features of POP.CO and the future roadmap. If you have any more questions or feedback about POP, you can get in touch with the team directly at deepthoughts@pop.co.

We are continuing to enhance POP.co in a variety of ways, including the addition of many more integrated apps, as well as new tools and services to help our users to build and grow their online presence with ease.
– Juan Diego Calle, CEO, .CO Internet.

Juan Diego Calle, CEO, .CO Internet.

Q: How much flexibility does POP provide in terms of choosing the hosting provider and third party apps?

A: While the basic POP bundle includes the domain name, a Google Apps account, and the POP page, the user has the ability to use any hosting provider they choose.

POP includes a simple DNS editor that allows the user, with a click of a button, to set the appropriate DNS records to point their domain to a number of popular hosting applications. If the user’s hosting provider isn’t included in the simple editor, POP has an advanced DNS editor where the user can create and manage any DNS record.

Additionally, for those advanced users who wish to use another party’s DNS we provide the user with the means to change their name servers completely. The goal of POP is to provide users with the tools to easily set up and manage their online presence while at the same time offering them the flexibility to use any hosting provider they choose.
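To picture what the simple and advanced editors described above do behind the scenes, here is a hedged sketch. The provider presets, record values and function names are illustrative assumptions, not .CO Internet’s actual API:

```python
# A hedged sketch of what a "simple DNS editor" conceptually does: translate
# a one-click choice of hosting provider into the DNS records that point the
# domain at it. Provider names and record values are illustrative only.
PRESETS = {
    "tumblr": [("@", "A", "192.0.2.10"), ("www", "CNAME", "domains.tumblr.com.")],
    "heroku": [("www", "CNAME", "example-app.herokuapp.com.")],
}

def fqdn(name: str, domain: str) -> str:
    """'@' means the bare domain (apex); anything else is a subdomain."""
    return f"{domain}." if name == "@" else f"{name}.{domain}."

def simple_editor(domain: str, provider: str) -> list[str]:
    """One click: emit zone-file style records for a known provider."""
    return [f"{fqdn(name, domain)} 3600 IN {rtype} {value}"
            for name, rtype, value in PRESETS[provider]]

def advanced_editor(domain: str, name: str, rtype: str, value: str) -> str:
    """Fallback: let the user create and manage any DNS record."""
    return f"{fqdn(name, domain)} 3600 IN {rtype} {value}"

print("\n".join(simple_editor("example.co", "tumblr")))
print(advanced_editor("example.co", "blog", "CNAME", "myhost.example-hosting.net."))
```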

Q: Although the official blog post makes only a passing mention of it, how exactly can the POP platform be customized for new gTLDs?

A: The POP platform has been designed to work with any TLD. Some customization of the business rules may be necessary for some TLDs, but this is to be expected and something we are prepared to handle. We believe POP is an excellent fit for many of the new TLDs which will be competing for distribution channels. POP provides them with a simple, fully managed solution for controlling their own distribution.

Q: Once the beta period ends, and the service is available for $5/mo, what additional value will POP provide other than being a centralized app-marketplace?

A: In coming months we will continue to enhance POP in a number of different ways. We’ll be adding additional integrated apps that will provide users with more choices, as well as some new tools and services. The focus will remain on apps that improve the user’s online presence, such as website builders, communication platforms, and marketing tools.

Q: Will a .CO domain (which is relatively expensive) be also available in the $5/mo POP package once the beta period ends? Also, can users choose to retain their .CO and discontinue with POP?

A: Yes. The .CO domain will be available in the $5/month POP package once the beta period ends.

POP.co offers a bundled package that includes the .CO domain name, a Google Apps account, and the basic POP web page. After the free trial, a user can choose to subscribe for only $5/mo. There are no additional costs for the domain name, as the cost is bundled into the monthly fee. This is an excellent value as the retail price of a simple Google Apps account is $5 per month/per user, without any of the extra benefits that come with a POP.co account, including the custom .CO domain name, the POP web page, and integrated access to additional third party apps.

While POP does not support users who wish to simply have the domain name without the other bundled services, users do have the option to transfer their domain name to the registrar of their choice – or to cancel their account at any time. There are no annual commitments.

Q: What criteria will POP use to judge third-party apps submitted by developers for integration into the platform? Also, what additional value can developers expect from submitting their apps to POP rather than to other self-serve marketplaces?

A: We have not yet established a firm set of criteria to be used when reviewing third party apps. In general, we’re looking for awesome apps that directly address the needs of our target market, and can be fully integrated via an API. While the details are still evolving, you can learn more here.

Q: What extra enhancements and features can users expect to be integrated in the POP platform with time?

A: As noted above, we are continuing to enhance POP.co in a variety of ways, including the addition of many more integrated apps, as well as new tools and services to help our users to build and grow their online presence with ease. We will also be focusing on supporting the unique needs of our new TLD clients, as they roll out instances of the POP platform in their own namespaces.

Categories
Conf/Webinar Event Innovation News Start-Ups

.ME to Hold Startup Competition Together With Spark.me; Winner Will Exhibit in TechCrunch’s Startup Alley

The .ME Registry today announced that it is holding the Spark.me Startup Competition together with Spark.me, a regional conference to be held at the Hotel Splendid in Budva, Montenegro from September 26-27, 2013.

The winner will get to exhibit in the TechCrunch Disrupt Startup Alley, which will be held in April/May 2014 in New York City.

We have a long history of involvement with TechCrunch and it is a great pleasure for us to help startups from our region get exposure in New York.
– Natasa Djukanovic, Director of Marketing, The .ME Registry.

The Spark.me Startup competition will be judged based on innovation, diversity, the power of the idea, the startup’s growth potential and the implementation strategy.

The preselection process will run until September 12, when the 10 best startup teams will be invited to present at the Spark.me conference.

On September 26, 2013, all selected teams will give five-minute presentations of their projects to a panel of judges. Finalists will present for an additional 5 minutes on September 27, 2013 and the winner will be announced on the final day of the conference.

The winner will receive two round-trip tickets to NYC, hotel accommodation for three days (two people), a Startup Alley booth at TechCrunch Disrupt 2014 and two tickets to the three-day TechCrunch Disrupt conference.

All startups founded after April 2012 with less than $2.5 million in funding are eligible to apply, regardless of where they are registered.

Applications need to be submitted by September 10, 2013 at 23:59:59 UTC. Only existing functional projects will be taken into consideration (in alpha or beta phase).

All startup founders participating in the competition will also have access to advisers, mentors, workshops and a network of investors.


“We are very interested in the startup community and are so excited about all of the development and innovation in the region. We have a long history of involvement with TechCrunch and it is a great pleasure for us to help startups from our region get exposure in New York,” said Natasa Djukanovic, Director of Marketing, The .ME Registry.

Earlier this year, .ME released five exclusive and premium domain names, named Around.ME, Hire.ME, Fund.ME, Find.ME and For.ME, available for acquisition by applicants with clever ideas for online business.

Categories
Cloud Hosted Cloud Apps Hosting Innovation Interviews New Products News Start-Ups Technology

“HybridCluster Allows Hosters to Differentiate, Sell New Services & Drive up Margins”- Luke Marsden, HybridCluster

There is something great about products and services that are developed by industry veterans to relieve their own pain points, to figure a way around the very problems they face day in and day out, and in the process build something that is a valuable contribution to the industry as a whole. This is where necessity, ingenuity, shortage and perspicacity hold hands in order to give birth to something that has substantial impact on the work cycle of the entire industry ecosystem.

In May this year, when HybridCluster completed $1 million in fundraising and launched HybridCluster 2.0, I was asked to prepare an interview questionnaire for Luke Marsden, CEO, HybridCluster. I knew little about the product at the time, but somewhere during my research I decided that HybridCluster is not just a very interesting product, it is a success story.

Why? I’ll let the interview do the talking. But I’ll leave you with this interesting excerpt from the company blog, where Luke talks about the genesis of HybridCluster:

Running our own hosting company since 2001 exposed all the problems. We were continuously battling hardware, software and network issues. After a few too many late-night trips to the data centre, I thought to myself: there has to be a better way. Studying theoretical computer science at Oxford University helped me crystallize my vision for an ambitious new project — one which uses ZFS, local storage, graph theory, and a perfect combination of open source components to create a platform uniquely aligned to solving the problems faced by hosters and cloud service providers.

The HybridCluster software allows applications to get auto-scaled. It can detect a spike in traffic, and rather than throttling the spike, it can burst that application to a full dedicated server by moving other busy things on that server onto quieter servers.

– Luke Marsden, CEO, HybridCluster.

Luke Marsden, CEO, HybridCluster.

Q: Let’s begin with a brief introduction of yourself and a broad overview of HybridCluster.

A: Hi. 🙂 Thanks for inviting me to be interviewed! It’s really great to be on DailyHostNews.

My background is a combination of Computer Science (I was lucky enough to study at Oxford University, where I graduated with a first class degree in 2008) and a bunch of real world experience running a hosting company.

HybridCluster is really a radical new approach to solving some of the tricky problems every hosting company has while trying to manage their infrastructure: it’s an ambitious project to replace storage, hypervisor and control panel with something fundamentally better and more resilient.

In fact I have a bigger vision than that: I see HybridCluster as a new and better approach to cloud infrastructure – but one which is backwardly compatible with shared hosting. Finally, and most importantly – HybridCluster allows hosters to differentiate in the market, sell new services, drive up margins – whilst also reducing the stress and cost of operating a web hosting business. We help sysadmins sleep at night!

Q: Did the idea for a solution like HybridCluster stem from issues you faced first-hand during your decade-long experience in the web hosting industry?

A: Yes, absolutely. Without the real-world pain of having to rush off to the data center in the middle of the night, I wouldn’t have focused my efforts on solving the three real world problems we had:

The first problem is that hardware, software and networks fail resulting in website downtime. This is a pain that every hoster will know well. There’s nothing like the horrible surge of adrenaline you get when you hear the Pingdom or Nagios alert in the middle of the night – or just as you get to the pub on a Friday night – you just know it’s going to ruin the next several hours or your weekend. I found that I had become – like Pavlov’s dog – hard-wired to fear the sound of my phone going off. This was the primary motivation to invent a hosting platform which is automatically more resilient.

Other problems we had in the hosting company included websites getting spikes in traffic – so we knew we needed to invent a hosting platform which could auto-scale an application up to dedicated capacity – and users making mistakes and getting hacked – so we knew we needed to invent something which exposes granular snapshots to the end user so they can log in and roll back time themselves if they get hacked – or if they accidentally delete a file.

Q: Can you please throw some light on the modus operandi of HybridCluster? How exactly does it help web hosts with automatic detection and recovery in the event of outages?

A: Sure. I decided early on that a few key design decisions were essential:

Firstly, any system which was going to stop me having to get up in the middle of the night would have to have no single point of failure. This is easy to say but actually quite hard to implement! You need some distributed system smarts in order to be able to make a platform where the servers can make decisions as a co-operative group.

Secondly, I decided that storage belongs near the application, not off on a SAN somewhere. Not only is the SAN itself a single point of failure, but it also adds a lot of cost to the system and can often slow things down.

Thirdly, I decided that full hardware virtualization is too heavy-weight for web application hosting. I could already see the industry going down the route of giving each customer their own VM, but this is hugely wasteful! It means you’re running many copies of the operating system on each server, and that limits you to how many customers you can put on each box. OS level virtualization is a much better idea, which I’ll talk about more later.

Basically, I designed the platform to suit my own needs: as a young hoster, I was scared of outages, I couldn’t afford a SAN, and I knew I couldn’t get the density I needed to make money with virtualization. 🙂

Q: How does the OS virtualisation you use differ from the hypervisor-based virtualisation used by other virtualised solutions?

A: OS level virtualization (or “containers”) is simply a better way of hosting web applications. Containers are higher density: because each container shares system memory with all other containers, the memory on the system is more effectively “pooled”. They are better performing: there’s no overhead of simulating the whole damn universe just to run an app. And they’re more scalable: each app can use the whole resource of a server, especially when combined with the unique capability that HybridCluster brings to the table – the ability to live-migrate containers around between servers in the cluster and between data centers.

Live migration is useful because it allows things to get seamlessly moved around. This has several benefits: administrators can easily cycle servers out of production in order to perform maintenance on them simply by moving the applications off onto other servers, but also, perhaps more excitingly, it allows applications to get auto-scaled – the HybridCluster software can detect a spike in traffic, and rather than throttling the spike (like CloudLinux), it can burst that application to a full dedicated server by moving other busy things on that server onto quieter servers. This is also a unique feature.
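To make the burst-to-dedicated idea concrete, here is a minimal scheduling sketch. It is an illustrative heuristic with invented server and container names, not HybridCluster’s actual placement logic:

```python
# A minimal sketch of "burst to dedicated": when one container spikes,
# evacuate its neighbours to quieter servers so the busy app can take the
# whole box. Illustrative heuristic only.
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    containers: dict = field(default_factory=dict)   # container -> load (0..1)

    @property
    def load(self) -> float:
        return sum(self.containers.values())

def burst_to_dedicated(servers: list[Server], hot: str) -> None:
    """Live-migrate every other container off the server hosting `hot`."""
    src = next(s for s in servers if hot in s.containers)
    for name in list(src.containers):
        if name == hot:
            continue
        dst = min((s for s in servers if s is not src), key=lambda s: s.load)
        dst.containers[name] = src.containers.pop(name)   # the "migration"
        print(f"migrated {name} from {src.name} to {dst.name}")

servers = [
    Server("node-a", {"blog": 0.9, "shop": 0.2, "mail": 0.1}),
    Server("node-b", {"wiki": 0.1}),
]
burst_to_dedicated(servers, hot="blog")   # node-a now dedicated to "blog"
```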

Q: How does HybridCluster enable an end user to self-recover lost files and data from even less than a minute ago? This feature, if I’m not wrong, isn’t available with any other solution out there.

A: It’s quite simple really. Every time that website, database or email data changes, down to 30 second resolution or less, we take a new ZFS snapshot and also replicate the history to other nodes in the cluster. ZFS is a core enabling technology for HybridCluster, and we’ve built a smart partition-tolerant distributed filesystem on top of it! Each website, database or mailbox gets its own independently replicated and snapshotted filesystem.

Anyway, these replicas act both as a user-facing backup and a hot spare. It’s a simple idea, but this is actually a revolution in backup technology – rather than having a backup separate from your RAID or other replication system (where the problem with a replication system like RAID is that it will happily replicate a failure, and the problem with backups is that they take ages to restore), our “hybrid” approach to replicated snapshots kills two birds with one stone, bringing backup restore times down to seconds, and also letting users fetch files/emails/database records out of snapshots which are taken with very fine-grained accuracy.

Indeed, HybridCluster is the first hosting platform to expose this feature to the end user and we have seen a number of clients adopt our technology for this benefit alone!
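For readers who want to see the moving parts, the snapshot-and-replicate loop Luke describes can be approximated with the standard zfs command-line tools. The dataset name, replica host and 30-second interval below are assumptions for the sketch, not HybridCluster’s own code:

```python
# A minimal sketch of a ZFS snapshot-and-replicate loop using the stock
# zfs CLI via subprocess. Names and interval are assumptions.
import subprocess
import time

DATASET = "tank/sites/example.com"   # assumed per-site ZFS filesystem
REMOTE = "root@standby-node"         # assumed replica target

def snapshot(dataset: str) -> str:
    """Take a named ZFS snapshot and return its full name."""
    name = f"{dataset}@auto-{int(time.time())}"
    subprocess.run(["zfs", "snapshot", name], check=True)
    return name

def replicate(prev: str, curr: str) -> None:
    """Send only the delta between two snapshots to the standby node."""
    send = subprocess.Popen(["zfs", "send", "-i", prev, curr],
                            stdout=subprocess.PIPE)
    subprocess.run(["ssh", REMOTE, "zfs", "receive", "-F", DATASET],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    send.wait()

if __name__ == "__main__":
    prev = snapshot(DATASET)
    while True:
        time.sleep(30)               # the fine-grained interval mentioned above
        curr = snapshot(DATASET)
        replicate(prev, curr)
        prev = curr
```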

Q: Is the low-cost storage system able to deliver the efficiency of high-end SANs? Also, what additional value does ZFS data replication bring into the picture?

A: I’m glad you mentioned ZFS again 🙂 One of the really nice things about being backed onto ZFS is that hosters using HybridCluster can choose how powerful they want to make their hosting boxes. Remember, with HybridCluster, the idea is that every server has a local storage and uses that to keep the data close and fast. But because ZFS is the same awesome technology which powers big expensive SANs from Oracle, you can also chuck loads of disks in your hosting boxes and suddenly every one of your servers is as powerful as a SAN in terms of IOPS. In fact, one of our recent hires, a fantastic chap by the name of Andrew Holway, did some hardcore benchmarking of ZFS versus LVM and found that ZFS completely floored the Linux Volume Management system when you throw lots of spindles at it.

I won’t go into too much detail about how ZFS achieves awesome performance, but if you’re interested, try Googling “ARC”, “L2ARC” and “ZIL”. 🙂

The other killer feature in ZFS is that it checksums all the data that passes through it – this means the end to bit-rot. Combined with our live backup system across nodes, that makes for a radically more resilient data storage system than you’ll get with Ext4 on a bunch of web servers, or even a SAN solution.
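The checksumming claim is easy to observe on any ZFS system with the stock zpool tools; assuming a pool named tank (an assumption for this sketch), a scrub walks every block and verifies it against its checksum:

```python
# A quick illustration of ZFS end-to-end checksumming: scrub a pool and read
# back its error counters with the standard zpool CLI. Pool name is assumed.
import subprocess

POOL = "tank"

# Ask ZFS to walk every block and verify it against its stored checksum.
subprocess.run(["zpool", "scrub", POOL], check=True)

# zpool status reports any read/write/checksum errors the scrub found.
status = subprocess.run(["zpool", "status", "-v", POOL],
                        capture_output=True, text=True, check=True)
print(status.stdout)
```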

There’s lots more – call us on +44 (0)20 3384 6649 and ask for Andrew who would love to tell you more about how ZFS + HybridCluster makes for awesome storage.

Q: How does HybridCluster achieve fault-tolerant DNS?

A: Something I haven’t mentioned yet is that HybridCluster supports running a cluster across multiple data centers, so you can even have a whole data center fail and your sites can stay online!

So quite simply the cluster allocates nameservers across its data centers, so if you have DC A and B, with nodes A1, A2, B1, B2, the ns1 and ns2 records will be A1 and B1 respectively. That gives you resilience at the data center level (because DNS resolvers support failover between nameservers). Then, if a node fails, or even if a data center fails, the cluster has self-reorganising DNS as a built-in feature.

We publish records with a low TTL, and we publish multiple A records for each site: our AwesomeProxy layer turns HybridCluster into a true distributed system – you can send any request for anything (website, database, mailbox, or even FTP or SSH session) to any node and it’ll get revproxied correctly to the right backend node (which might dynamically change, e.g. during a failover or an auto-scaling event). So basically under all failure modes (server, network, data center) we maximize the chances that the user will quickly – if not immediately – get a valid A record which points to a server which is capable of satisfying that request.

In other words HybridCluster makes the servers look after themselves so that you can get a good night’s sleep.
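A rough sketch of the DNS side of this: publish low-TTL A records only for nodes that pass a health check, spread across data centres. The node addresses and the health check below are invented for illustration; this is not HybridCluster’s implementation:

```python
# A minimal sketch of self-reorganising DNS: publish low-TTL A records only
# for nodes that are currently healthy, across data centres. Illustrative only.
import socket

TTL = 60   # low TTL so resolvers re-query quickly after a failover

NODES = {
    "dc-a": ["192.0.2.11", "192.0.2.12"],
    "dc-b": ["198.51.100.21", "198.51.100.22"],
}

def healthy(ip: str, port: int = 80, timeout: float = 1.0) -> bool:
    """Crude health check: can we open a TCP connection to the node?"""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def a_records(site: str) -> list[tuple[str, int, str]]:
    """One A record per live node, so any surviving DC keeps the site up."""
    records = []
    for dc, ips in NODES.items():
        records += [(site, TTL, ip) for ip in ips if healthy(ip)]
    return records

if __name__ == "__main__":
    for name, ttl, ip in a_records("www.example.com"):
        print(f"{name}. {ttl} IN A {ip}")
```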

Q: How do you see the future of data center industry?

A: That’s an interesting question 🙂 I’ll answer it for web applications (and databases + email), specifically.

Personally I see cloud infrastructure as a broken promise. Ask the man (or woman) on the street what they think cloud means, and they’ll probably tell you about increased reliability, better quality of service, etc. But really all that cloud does today is provide less powerful, unreliable infrastructure on top of which software engineers are expected to build reliable software. That’s a big ask!

My vision is for a fundamentally more reliable way of provisioning web applications – one where the underlying platform takes responsibility for implementing resilience well, once, at the platform level. Developers are then free to deploy applications knowing that they’ll scale well under load, and get failed over to another server if the physical server fails, or even if the whole data center goes pop.

I think that’s the promise of PaaS, and my vision is for a world where deploying web applications gets these benefits by default, without millions of sysadmins in hosting companies all over the world having to get paged in the middle of the night to go fix stuff manually. Computers can be smarter than that, it’s just up to us to teach them how. 🙂

Q: Tell our readers a bit about the HybridCluster team.

A: Since we got funded in December 2012 we’ve been lucky enough to be able to grow the team to 9 people, and I’m really proud of the team we’ve pulled together.

We’re a typical software company, and so unfortunately our Dave to female ratio is 2:0. That is, we have two Daves and no females (but we’re working on that!). Anyway, some highlights in the team are Jean-Paul Calderone, who’s the smartest person I’ve ever met, and the founder of the Twisted project. Twisted is an awesome networking framework and without Twisted – and JP’s brain – we wouldn’t be where we are today. Also on the technical side, we’ve got Rob Haswell, our CTO, who’s a legend, and doing a great job of managing the development of the project as we make it even more awesome. We’ve also just hired one of JP’s side-kicks on Twisted, Itamar Turner-Trauring, who once built a distributed airline reservation system and sold it to Google.

We’ve also got Andriy Gapon, FreeBSD kernel hacker extraordinaire, without whom we wouldn’t have a stable ZFS/VFS layer to play with. Dave Gebler is our Control Panel guru and we’re getting him working on our new REST API soon, so he’ll become a Twisted guru soon 😉 And our latest hire on support, Marcus Stewart Hughes, is a younger version of me – a hosting geek – he bought his first WHMCS license when he was 15, so I knew we had to hire him.

On the bizdev side, we’ve got Dave Benton, a legend in his own right, who’s come out of an enterprise sales background with IBM, Accenture and Star Internet, he’s extremely disciplined and great at bringing process into our young company. Andrew Holway is our technical pre-sales guy, previously building thousand-node clusters for the University of Frankfurt, and he loves chatting about ZFS and distributed systems. He’s also great at accents and can do some pretty awesome card tricks.

Q: To wrap up, with proper funding in place for development of the products, what’s in the bag for Q3 and Q4 of 2013?

A: We’re working on a few cool features for the 2.5 release later this year: we’re going to have first class Ruby on Rails and Python/Django support, mod_security to keep application exploits out of the containers, Memcache and Varnish support. We’re also working on properly supporting IP-based failover so we don’t have to rely on DNS, and there are some massive improvements to our Control Panel on their way.

It’s an exciting time to be in hosting 😉 and an even more exciting time to be a HybridCluster customer!

Thanks for the interview and the great questions.

Categories
Cloud Innovation New Products News Technology

Aquilent Launches New Cloud Resource Management Portal: Olympus Powered by Aquilent™

Aquilent today announced the launch of its Cloud Resource Management Portal, Olympus Powered by Aquilent.

The new solution has been developed solely for federal agencies and provides an application- and environment-centric model for managing the cloud, helping meet the increasingly complex requirements of the Federal Cloud First Policy.

Olympus enables federal customers to bridge the gap between native cloud tools and their individualized cloud infrastructures, and significantly reduces the need for cloud expertise, so that federal organizations can readily take full advantage of their cloud resources while remaining focused on their mission.

Olympus works within any FISMA or DIACAP compliant environment, integrating with the Cloud Service Provider (CSP) security model and within the agency’s environment. It empowers the federal user to take full control of all cloud assets in one easy-to-use portal from the ground up.

Customers choosing to utilize Aquilent’s cloud hosting environment can now take full advantage of Olympus and can:

  • Manage all cloud resources in consistent, unified and logical categories.
  • Reduce system administration costs with the intuitive, full-featured interface.
  • Easily view all billing and utilization activity down to the resource level.
  • Ensure the branded agency’s cloud portal is Section 508 compliant.
  • Deploy the portal securely within the agency’s cloud environment.
  • Easily manage access and fully audit all activities with a single sign-on.
  • Manage and reduce environmental costs through scheduled tasks.
  • Integrate business systems and processes through the flexible portal platform.

Olympus enables our customers to rapidly adopt the cloud while saving money from both hosting and the often hidden costs of the labor needed to support the cloud. With this fully secure, easy to use portal, our customers can further delegate cloud management tasks to the people who most need to perform them, ensuring an accurate audit trail along the way.
– Mark Pietrasanta, CTO, Aquilent.

“Our federal customers have one main focus, and that is to serve citizens effectively as set forth within their missions,” stated David Fout, CEO, Aquilent.

“The cloud should be a strategic enabler and with Olympus Powered by Aquilent, our customers will be able to leverage, monitor and manage their cloud assets in support of their missions in the most efficient and cost-effective manner,” he added.

“Olympus enables our customers to rapidly adopt the cloud while saving money from both hosting and the often hidden costs of the labor needed to support the cloud,” said Mark Pietrasanta, CTO, Aquilent.

“With this fully secure, easy to use portal, our customers can further delegate cloud management tasks to the people who most need to perform them, ensuring an accurate audit trail along the way,” he added.

Here is more information on Olympus Powered by Aquilent.

Categories
Cloud Cloud News Hosted Cloud Apps Hosting Innovation New Products News Technology

Progress Software Announces General Availability of Progress Pacific Application Platform-as-a-Service

Progress Software Corporation today announced general availability of its application platform-as-a-service (aPaaS) offering, Progress Pacific.

The Progress Pacific application platform-as-a-service (aPaaS) has new data connectivity and application capabilities, and provides developers with flexibility that helps them avoid vendor lock-in and develop and deploy key business applications anywhere they want, including OpenStack, on-premises and hybrid environments.

The key data connectivity and application capabilities include:

Progress DataDirect Cloud service: The new Progress DataDirect Cloud service enables applications to easily integrate data from popular SaaS, relational database, Big Data, social, CRM and ERP systems. It has a standards-based SQL interface that works with any ODBC or JDBC compatible application.


It’s clear that the move to aPaaS is gaining momentum as time-to-market and data capabilities in new apps are key requirements for developers and end users. Pacific addresses these needs with rich data, visual design and open deployment capabilities in a single platform.
– John Goodson, CPO, Progress Software.

John Goodson, Chief Product Officer, Progress Software.

The connection management service uses a single standards-based interface to execute SQL queries against a wide range of cloud data sources, thereby allowing applications to be built in less time.
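As a hedged illustration of what a single standards-based SQL interface means from an application’s point of view, the snippet below uses the generic pyodbc library against an assumed ODBC DSN. The DSN name, credentials and table are placeholders, not Progress documentation; consult the DataDirect Cloud docs for real connection details:

```python
# A hedged sketch: querying a cloud data source through a generic ODBC DSN.
# DSN name, credentials and table are placeholders for illustration.
import pyodbc

# A DSN assumed to be configured in the ODBC driver manager beforehand,
# pointing at the cloud connectivity service.
conn = pyodbc.connect("DSN=DataDirectCloud;UID=myuser;PWD=mypassword")

cursor = conn.cursor()
# The same SQL works regardless of whether the underlying source is a SaaS
# app, a relational database, or another supported cloud data source.
cursor.execute("SELECT account_name, amount FROM opportunities WHERE amount > ?",
               10000)
for row in cursor.fetchall():
    print(row.account_name, row.amount)

conn.close()
```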

Progress Rollbase: Progress Rollbase infrastructure has been improved and now offers an enhanced user experience for standards-based business applications created with drag-and-drop tools using any web browser.

It also includes support for OpenEdge 11.3, fine-grained locking for published applications and integration of Progress DataDirect JDBC drivers.

Progress OpenEdge 11.3: The new integrated business process management (BPM) and business rules management system (BRMS) capabilities simplify application customization using Progress OpenEdge 11.3 development software.

Flexible processes, rules and workflows can be easily configured to meet business requirements while greatly accelerating productivity.

“Customer and partner response to the Progress Pacific launch in June has been extremely positive and we are now delivering on our vision,” said John Goodson, Chief Product Officer, Progress Software.

“It’s clear that the move to aPaaS is gaining momentum as time-to-market and data capabilities in new apps are key requirements for developers and end users.”

“Pacific addresses these needs with rich data, visual design and open deployment capabilities in a single platform,” he added.

Here is more information on Progress DataDirect Cloud service and Progress Rollbase.
