
Anturis to Provide Comprehensive IT Infrastructure Monitoring For ReadySpace’s Business Customers

Anturis Inc. today announced that its cloud-based monitoring and troubleshooting solution has been selected by ReadySpace, a cloud and managed hosting services provider, to deliver comprehensive IT infrastructure monitoring for its business customers. ReadySpace will now offer Anturis' solutions as an integrated addition to its Managed Service packages.

Anturis came to market in beta early this year and recently emerged from beta with the commercial launch of its product.

In an interview with DailyHostNews today, Sergey Nevstruev, CEO, Anturis, said:

ReadySpace selected Anturis for its IT infrastructure monitoring and troubleshooting needs for two key reasons:

  • infrastructure model representation (i.e. users work with real-life entities like a database or a web server, rather than with separate metrics), which makes Anturis well suited to the needs of server monitoring;
  • integration with the Parallels Automation platform via APS, which let Anturis achieve deep integration (including billing and the customer portal) very quickly.
Anturis is the perfect addition to our suite, and will be utilized as the primary tool set of our technical team for monitoring and troubleshooting our customers’ various IT services.
– David Loke, CEO, ReadySpace.


ReadySpace will deploy Anturis to support its more than 5,000 business customers, primarily in the Asia Pacific region, especially in Singapore and Hong Kong.

Offered in ReadySpace’s Managed Service platform, the new Anturis IT monitoring solution delivers:

  • Website Monitoring: Monitors the uptime and performance of websites, checking for DNS, SSL, HTTP, network and application-level problems.
  • Server Monitoring: Keeps an eye on servers’ resources utilization and software performance (CPU, memory, swap, disk, OS processes, log files and more).
  • Web App Monitoring: Uses synthetic transactions to ensure visitors can successfully sign up, search, check out, log in and otherwise interact with your website.
  • MySQL Monitoring: Watches over key database performance metrics, such as slow query rate, connection usage, InnoDB buffer pool usage, and more.
  • Network Monitoring: Keeps watch over LAN and WAN connectivity and network devices using ICMP ping, SNMP and TCP checks and other network protocols.
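The website checks above boil down to probing an endpoint and classifying the response. Here is a minimal shell sketch of that classification step (the labels and thresholds are ours for illustration, not Anturis' actual checks):

```shell
# Illustrative uptime-check classification. A probe such as
#   curl -s -o /dev/null -w '%{http_code}' https://example.com
# yields an HTTP status code (curl prints 000 when no response arrives);
# the monitor then maps the code to a health state.
classify_status() {
  case $1 in
    2*|3*) echo "UP" ;;                        # success and redirects are healthy
    4*)    echo "WARNING (client error $1)" ;;
    5*)    echo "DOWN (server error $1)" ;;
    000)   echo "DOWN (no HTTP response)" ;;
    *)     echo "UNKNOWN ($1)" ;;
  esac
}

classify_status 200   # prints: UP
classify_status 503   # prints: DOWN (server error 503)
```

A real monitor would also record response time and retry before alerting; this sketch shows only the status-code decision.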

The commercially launched version of Anturis comes with several new features and enhancements, including numerous GUI and usability enhancements, such as improved wizards. It also includes extended diagnostic data for faster troubleshooting, such as presenting the list of top five CPU-consuming processes at the time of CPU overload.
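That top-five-processes diagnostic can be approximated with standard tools; a one-liner in the spirit of the feature (using GNU `ps` on Linux, not Anturis' own collector):

```shell
# Snapshot the five processes consuming the most CPU, the kind of context
# a monitor can attach to a CPU-overload alert.
ps -eo pid,comm,%cpu --sort=-%cpu | head -n 6   # header line + top 5
```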

“As an international leader of cloud and managed hosting services, we are always looking for ways to improve and enhance our Managed Services,” said David Loke, CEO, ReadySpace.

“Anturis is the perfect addition to our suite, and will be utilized as the primary tool set of our technical team for monitoring and troubleshooting our customers’ various IT services.”

DailyHostNews used Anturis earlier this year to monitor its own IT infrastructure and found it an extremely feature-rich, affordable, comprehensive and promising product.


Softaculous 4.2.7 Released with Automated Backups Feature and a Few Bug Fixes

Softaculous today announced the release of Softaculous 4.2.7. The news comes three months after the release of Softaculous 4.2.2, which introduced an auto-upgrade feature, a password strength checker and other improvements.

Softaculous 4.2.7 comes with Automated Backups of installations and a few bug fixes. Some of the key changes are:

  • Automated Backups: Users can now choose, from the install form or the edit-installation page, to take automated backups (daily, weekly or monthly).
  • Admins can now choose to prepend a prefix to the Admin Username field for their end-users on the script install form.
  • All scripts' requirements can be checked from the Softaculous Admin panel to determine which scripts might not work on the server.
  • Added settings in the Softaculous Admin panel to individually disable the New Installation, Remove Installation, Edit Installation and Backup Installation emails sent to end-users.
  • Added a setting to disable Backup Installation emails from the Softaculous Enduser panel -> Email Settings.
  • Admins can define the Perl version, MySQL version, and lists of enabled extensions and functions in the pre-install hook to override the requirements check done during script installation.
Softaculous 4.2.7 adds support for automated backups for an installation for existing and new installations.
– Pulkit Gupta, Founder and CEO, Softaculous.

Bug Fixes in Softaculous 4.2.7:

  • The cPanel user's email was stored by default in the user details file in Softaculous. Now it will be stored only if the user has specifically changed the email address in the Softaculous Enduser panel -> Email Settings.
  • Fixed the Auto Install API and SDK to work with DirectAdmin.
  • Script installation was not allowed if the user wanted the DB prefix to be empty. This is now fixed.
  • On the Task List page, a hyperlink was displayed even for plain paths. This is now fixed.
  • Softaculous used to send update notifications for installations that were manually updated, i.e. when the Softaculous records were outdated. Softaculous will now determine the actual version of each installation, update its records if they do not match, and send notifications only for installations that are still outdated.

The Softaculous Auto Installer apps library has 300 scripts and 1,115 PHP classes.

“Softaculous 4.2.7 adds support for automated backups for existing and new installations,” said Pulkit Gupta, Founder and CEO, Softaculous.

“The automated backups feature added in Softaculous 4.2.7 will let users choose to take regular backups, i.e. daily, weekly or monthly,” said Brijesh Kothari, Sales Head, Softaculous.

“So in case the user wants to revert any changes they will have a recent backup to restore from,” he added.

Softaculous Auto Installer integrates into popular hosting control panels like cPanel, Plesk, DirectAdmin, H-Sphere and Interworx and allows the user to install any application by the click of a button and also upgrade when a new version is available. The categories of scripts include blogs, forums, micro blogs, wikis, social networking, image galleries, ERP, Project Management, Educational, etc.

The applications available in the Softaculous apps library include WordPress, b2evolution, StatusNet, Drupal, Joomla, Concrete5, phpBB, MyBB, SMF, bbPress, Coppermine, Gallery, Mediawiki, DokuWiki, TikiWiki, Elgg, Dolphin, OpenX, SquirrelMail, LimeSurvey, Piwik, Mantis, SugarCRM, WHMCS, PrestaShop, Magento, CraftySyntax, osTicket, Moodle, Claroline and more.

Softaculous is available in two versions, Free and Premium, with 54 and 309 apps respectively.

Here is our Interview with Pulkit Gupta, Founder and CEO, Softaculous from earlier this year.


Beyond Hosting Announces Partnership With Softaculous; Offers Softaculous Auto Installer

VPS and dedicated server provider Beyond Hosting today announced a partnership with Softaculous.

Implementation of Softaculous to our servers will let our valued customers access a variety of regularly updated apps. Softaculous offers a better, faster and more elegant interface and has more features as compared to other products which we reviewed. – Tyler Bishop, CEO, Beyond Hosting.

With this development, Beyond Hosting will now offer the Softaculous Auto Installer to clients on its servers, giving them an easy way to install any of the 300+ scripts in the Softaculous scripts library.

The Softaculous one-click installer allows users to install, upgrade and manage their applications easily, without requiring any technical knowledge.

Beyond Hosting clients will now have access to the Softaculous script library which covers various categories like blogs, forums, micro blogs, wikis, social networking, image galleries, ERP, Project Management, Educational, etc.

“Implementation of Softaculous to our servers will let our valued customers access a variety of regularly updated apps,” said Tyler Bishop, CEO, Beyond Hosting.

“Softaculous offers a better, faster and more elegant interface and has more features as compared to other products which we reviewed. Softaculous also allows users to access ratings, reviews, and software demos which helps them to choose a suitable application based on their requirements,” he added.

Earlier this year, Softaculous announced the launch of two new products – Softaculous Remote and Softaculous Enterprise.


Enterhost Launches New Cloud-based IT Product Line; Streamlined Website and Brand

Web hosting provider Enterhost today launched a new cloud-based IT product line that is aimed at providing businesses with phone (Lync 2013 Enterprise Voice) and email systems (Exchange 2013), virtualization, server and application hosting, backup, and disaster recovery.

“Our expertise in cloud technology provides a platform for hosting phone and email systems, servers and applications in the cloud, as well as offering scalable backup, disaster recovery and virtualization solutions. Together, these products enable our customers to create as virtual an office as they desire.” – Kevin Valadez, co-president, Enterhost.

Enterhost has also launched a revamped brand and website, which is being unveiled online this week. As part of the repositioning, the Enterhost team has removed language and knowledge barriers that might impede customers from making the right decisions for their companies.

The company has also created a mascot to support companies that need expert IT solutions but lack a resident technical expert.

“Some business owners benefit from having a human element they can relate to as they navigate the often-murky waters of making decisions around IT for their companies. We designed Tech Tom to be inclusive and straight-talking, guiding our customers to the right technology for their businesses,” said Ben Tiblets, co-president, Enterhost.

“Our goal is to provide any business, whether it has 20 desktops or 1,000, with practical, effective ways to leverage technology to compete in today’s marketplace,” he added.

With the new product line-up, applications and servers that were formerly required to run in the office can now run in the cloud.

EnterHost Cloud Services

“We are always looking at ways we can improve our offerings, and as our customers have recovered from the recent economic downturn, we realized they needed different solutions to improve their operations,” said Kevin Valadez, co-president, Enterhost.

“From small- to medium-sized businesses to our enterprise-level clientele, our customers were asking for functional products that heighten collaboration, protect assets and streamline costs,” he added.

Founded in 2000, Enterhost provides a wide range of cloud-based IT business solutions, and specializes in Windows applications for office phone and business email, as well as cloud storage, backup, disaster recovery, virtualization and colocation.

For more information, click here.


Anturis IT Infrastructure Monitoring Software Delivers Great Features at Affordable Prices

Small and medium-sized businesses (SMBs) are growing, and with this growth, they're changing the way they use technology to run their businesses. However, with their purse strings drawn tight and no clear road map for IT implementation, SMBs generally invest in IT in a phased manner. The problem is compounded because the present market only offers solutions that are either expensive and bloated, or open-source and in great need of fine-tuning and customization. With limited knowledge of technology and tight budgets, neither option seems feasible for SMBs.

This is where a product like Anturis comes in. Promising features on par with enterprise-level IT monitoring software, without the exorbitant prices that generally accompany it, Anturis sounds like a solid service that can play a strategic role in businesses of all sizes, helping companies do more with less to realize cost savings and profitability.


Configuring Anturis is an easy task and requires little effort. Once you create your account, you can choose which components of your infrastructure you want to monitor, including servers, desktops, databases, firewalls, printers, applications, etc. You can do so easily on the main page of the Anturis control panel.

We chose to monitor one of our servers based in Mumbai, India. Once we added it as a component, it was listed on the left side of the control panel and appeared as an icon in the main area; clicking the icon showed the results of the monitoring session (more on that later).

Anturis Infrastructure Monitoring Software review

In order to monitor servers, a Private Agent needs to be installed on your system. The Private Agent sends collected data to the Anturis cloud data center via a secure HTTP connection. The agent software first needs to be downloaded from the Locations & Agents tab in the Anturis Console. Agent installation (as well as all subsequent maintenance) can be done easily through its own desktop GUI. Your user account credentials are required to connect the agent to the Anturis service. Once we supplied the correct information, the agent connected and became visible in the list of ‘Available agents’ in the Anturis control panel.

Anturis Infrastructure Monitoring Software review 1

The modus operandi of the Anturis Private Agent is shown in the figure below:

Anturis Private Agent Installation

After installing the Private Agent, you need to create a person who will be notified in case of problems. Notifications can be delivered via email, SMS or phone call, and the notification method depends on problem severity. Once you assign a newly created person as responsible for an application, you can also configure various other factors to customize the monitoring to your needs, such as the error threshold and the monitoring period (the time interval between two subsequent checks of a monitored object).
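A hedged sketch of how severity-based routing like this might look, with illustrative thresholds and channel names (not Anturis' actual configuration):

```shell
# Map a measured value (e.g. CPU utilization %) to a notification channel:
# routine readings go by email, warnings by SMS, and the most severe
# problems trigger a phone call. Thresholds are hypothetical.
notify_channel() {
  value=$1; warn_at=$2; error_at=$3
  if [ "$value" -ge "$error_at" ]; then
    echo "phone-call"    # highest severity: call the responsible person
  elif [ "$value" -ge "$warn_at" ]; then
    echo "sms"           # warning level: text message
  else
    echo "email"         # routine: daily status email
  fi
}

notify_channel 95 80 90   # prints: phone-call
notify_channel 85 80 90   # prints: sms
```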

Over the next few days, we received regular daily status emails, including a couple of warnings about problems with the server. We were also able to access the daily log report from the Anturis control panel.


As mentioned earlier, configuring Anturis is straightforward and doesn't require technical skills. It monitored our web services effectively, from both user and infrastructure perspectives, and provided us with truly valuable information and immediate feedback about errors that could have cost us significant time, resources and money had they not been addressed in a timely manner.

Delivering features on par with enterprise-level IT monitoring software without the exorbitant prices that generally accompany it, Anturis comes across as an affordable, comprehensive and very promising product, especially for SMBs that cannot afford to spend large amounts of money on monitoring IT services.

Anturis was in beta at the time we used it to monitor our infrastructure and is now available commercially with various new features and enhancements. There is also a free plan available for those who want to use limited services. For more details, click here.


INetU Adds INetU Application Traffic Firewall to its Managed Security Suite

Managed hosting and cloud provider INetU has added INetU Application Traffic Firewall to its Managed Security Suite.

The INetU Application Traffic Firewall goes beyond classic Web Application Firewalls and provides an additional layer of critical protection for businesses by thwarting attacks targeting application vulnerabilities, says the official press release.

The world needs to access your applications, but your business needs to ensure that users and data are safe. The INetU Application Traffic Firewall is the lock at your front door protecting this information against would-be attackers looking to exploit your applications as a way in. – Andrew Hodes, Director of Technology, INetU.

Once troublesome threat profiles are detected, the INetU Application Traffic Firewall can be configured to handle the situation based on the specific needs of the customer.

Some of the key features of the INetU Application Traffic Firewall are:

  1. It collects updates covering known attackers and new threat patterns, and adjusts to provide up-to-date protection.
  2. It assists clients in meeting the requirements of SOX regulatory compliance, healthcare HIPAA/HITECH and payment processing PCI DSS standards.
  3. It learns and adapts to the way applications work normally, and then blocks abnormal behavior.
  4. It monitors and logs all activities and provides businesses with a clear understanding of suspicious activity through the INetU Client Center.
  5. It addresses SQL injection attacks, cross-site scripting and data theft.
  6. It is managed by INetU along with its other Managed Security Suite offerings and is priced cost-effectively.

“The world needs to access your applications, but your business needs to ensure that users and data are safe,” said Andrew Hodes, Director of Technology, INetU.

“The INetU Application Traffic Firewall is the lock at your front door protecting this information against would-be attackers looking to exploit your applications as a way in,” he added.

The complete INetU Managed Security Suite offering now includes Managed Firewalls with VPN, Security Incident Event Management, Intrusion Protection and Detection Systems, Dual Factor Authentication, Vulnerability Scanning, File Integrity Monitoring and the INetU Application Traffic Firewall.

For more information, click here.


Mimecast Launches Cloud-based Email Archiving, Continuity and Security Services for Office 365

Cloud-based email management provider Mimecast has launched a new range of cloud services and service bundles that’ll enhance Microsoft Office 365’s archiving, continuity and security capabilities.

Mimecast is the ultimate cloud companion if you are using Office 365 or looking to migrate. We bring a broad set of additional capabilities that are good for end-users who want email to be there 100% of the time and need to access their archive anywhere at any time. – Grant Hodgkinson, Product Director, Mimecast.

The new services are ideal for IT administrators who are currently on Office 365 or considering migrating to it. The new service bundles will help these administrators address extended compliance, governance, security and email continuity requirements.

The services include Mimecast Gateway Services for Office 365, Mimecast Continuity Services for Office 365, Mimecast Archive Services for Office 365, and Microsoft UEM Express and Enterprise for Office 365.

  • Mimecast Gateway Services for Office 365 are a set of email gateway tools that give email administrators various email management capabilities, including Data Leak Prevention, email encryption, document services, Enhanced Message Routing (to facilitate migrations and support hybrid deployment scenarios), and inbound and outbound spam and malware protection for email and attachments. With Mimecast Gateway tools, end-users get security features embedded directly in Outlook.
  • Mimecast Continuity Services for Office 365 assure 100% availability for email, backed by Mimecast’s Service Level Agreement in case of a service outage.
  • Mimecast Archive Services for Office 365 add a compliant, read-only archive to all Office 365 plans that delivers advanced e-discovery, legal hold and scalable ‘bottomless’ email retention tools. End-users can access their full personal email archive, including documents, through Microsoft Outlook, smartphones, tablets and browsers.
  • Mimecast UEM Express for Office 365 combines Mimecast Gateway Services for Office 365 and Continuity Services for Office 365 in a unified email management platform delivered from a single management interface.
  • Mimecast UEM Enterprise for Office 365 is Mimecast’s full suite of cloud-based email management tools, including email security, email archiving, email continuity, mobile services and file archiving.

Some of the key use cases of the new services are:

  • End-users can have anytime anywhere secure access to their full email and file archive from smartphones, tablets or the web.
  • Mimecast offers the added assurance of email availability in case of any service issues with Office 365.
  • Users can leverage Mimecast’s Large File Send capability to send and receive files up to 2GB without leaving Outlook.
  • IT administrators can support safe and efficient migration of their mail environment from on-premises Microsoft Exchange and other email services to Office 365.
  • Mimecast facilitates the management of hybrid on-premises and cloud environments, where customers may wish to have some mailboxes managed on-site and others in the cloud.

“Office 365 is proving to be a popular consideration for many enterprises looking to reduce complexity and cost by using the cloud. If you want to use the cloud for productivity, messaging and collaboration, Office 365 is good, but with the addition of these Mimecast services, it is now great,” said Grant Hodgkinson, Product Director, Mimecast.

“Mimecast is the ultimate cloud companion if you are using Office 365 or looking to migrate. We bring a broad set of additional capabilities that are good for end-users who want email to be there 100% of the time and need to access their archive anywhere at any time. Plus we address, in a simple way, the serious compliance, governance, security and continuity concerns that have made some IT administrators resistant to the cloud and cautious about Office 365,” he added.

For more details, click here.


Developers Can Now Fork Their Own Applications With The Newly Introduced Heroku Fork

Heroku today introduced Heroku Fork, which lets developers fork their own applications and the applications they're collaborating on. Basic information about the new feature is available in this write-up, and a detailed version is available in this article by the Heroku team.

The news comes a month after Heroku made Heroku Platform API available in Beta.

Heroku Fork lets developers create unique, running instances of existing applications from the command line. These instances are live and available on Heroku immediately so developers can change, scale and share them however they want.

Developers need to have the Heroku Toolbelt installed to use this feature. They can fork an existing application by running the following command:

$ heroku fork -a sourceapp targetapp

The command copies the source app's precompiled slug, copies its config vars (excluding add-on-specific environment variables), re-provisions the source app's add-ons with the same plans, copies the source app's Heroku Postgres data, and scales the new app's web process to one dyno, regardless of the number of web processes running on the source app.

We want to empower teams to work faster and smarter: test new features, carry out experiments, and evolve rapidly. We think heroku fork provides the foundation for these things and more.
– Heroku.

Developers using paid add-ons will be charged for their usage in the new app.

Some of the use cases of Heroku Fork are:

Demonstrable Pull Requests: Using heroku fork, pull requests can be accompanied by the URL of a live fork of the app that demonstrates a real, interactive version of the new feature.

Quick Setup of Multiple Environments: heroku fork can be used to spin up new, homogeneous application environments for other stages of development. It also provides a simple way to spin up more ephemeral environments to play with, modify or dispose of, if developers want additional environments outside of the standard development/staging/production workflow.

Migration to EU region: Heroku recently launched Heroku Europe and heroku fork can be used to migrate applications to the Europe region:

$ heroku fork -a sourceapp targetapp --region eu

After verifying add-on provisioning and config vars in the new application, developers can migrate any production data not stored in Heroku Postgres and adjust DNS settings.

“We want to empower teams to work faster and smarter: test new features, carry out experiments, and evolve rapidly. We think heroku fork provides the foundation for these things and more,” said Heroku in an official blog post.

For more information, click here.


“DNSSEC Will Become a Standard Part of Any Offering Over Time,” Dr. Burt Kaliski, Verisign

Change has been constant in almost every facet of life throughout the past, and the technology industry is no exception.

There is no uniformity in technology; it keeps developing, improving and re-inventing itself, and in the process it also changes the way it diffuses across society.

It’s always a pleasant sight to have people and organizations that support, facilitate and ensure adoption of these changes. If not for them, we’d still be stuck in the world of dial-up internet, huge computers, and rotary dial telephones. Heck, we wouldn’t even have been able to reach that world itself.

And Verisign is one such organization. Through its efforts to ensure operational deployment of DANE, DNSSEC, IPv6 and many more protocols/products that seek to replace the traditional systems in place today, Verisign strives to build a better and stronger Internet.

We recently had an opportunity to interact with Dr. Burt Kaliski Jr, Senior Vice President and Chief Technology Officer, Verisign, at WHD.india 2013, where he talked at great length about some of these initiatives. Some highlights of our session with him are below, and a print version of the whole interaction follows.

IPv6 is a complete breakthrough, because it has 4 times as many bits, and that's an enormous exponential increase in the number of possible addresses. There is no foreseeable period in which IPv6 addresses would run out. In fact, IPv6 makes it possible to give out unique addresses for everything at every point in time.
– Dr. Burt Kaliski Jr, Senior Vice President and Chief Technology Officer, Verisign.

Dr. Burt Kaliski Jr, SVP and CTO, Verisign

Q: Before we begin, please tell our readers about your journey from RSA Laboratories to Verisign.

A: RSA Laboratories was the place where I started my career in security after getting my PhD. While I was there, back in the startup days, Verisign spun out of RSA to offer certification services.

I stayed with RSA well into my career and eventually moved to EMC Corporation after it acquired RSA. But then, two years ago, I took an opportunity to move to Verisign, which I had been following all along. In a way, it was like returning to where I started.

Q: What according to you are some of the major flaws in the modus-operandi of X.509 – CA model currently in place that seriously jeopardize the Internet users’ security?

A: The X.509 certificate authority model has been around since the 1980s and it's the basis for electronic commerce sites; we have been using it for a number of years. It's a good model in many respects but, as in a number of systems, there can be too much of a good thing. In the case of the X.509 certificate authority model, there are too many certificate authorities, all of which, in many settings, are treated the same. That means a compromise of any one certificate authority could lead to an attack on the system. What we've looked at as a security industry are ways to mitigate that compromise, so that you can get all the benefits of the X.509 CA model but with some checks and balances in place that can prevent attacks from occurring.

Q: What is DNS-based Authentication of Named Entities, and how does DANE protocol successfully deploy DNSSEC to thwart MitM cases that are rife in the CA model?

A: Let’s start with DNSSEC. The security extensions for DNS were developed to provide additional assurance above and beyond the relationship that the parties might have when they are exchanging the DNS information, and that additional assurance comes in the form of a digital signature. This means that the DNS, in addition to returning the IP address associated with a given domain name, will also return a digital signature on that information, so that a relying party can confirm that the correct information was presented, even if that relying party wasn’t directly interacting with DNS.

DANE, the DNS-based Authentication of Named Entities protocol, takes this a step further and says: if we can get this additional assurance for IP addresses, why not get additional assurance for other information associated with a domain name? In particular, you can have this assurance provided as a check and balance for information that is otherwise prepared by certificate authorities.

So as I mentioned, there can be potential attacks because there are too many certificate authorities. A countermeasure to those attacks is for the owner of a domain name to say exactly which certificate authority, the very one CA, it intends to work with; then, if any of the others were compromised, those compromises would not be able to undermine the security of the domain name owner.
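The pinning described here is what a DANE TLSA record expresses. As a sketch of how a domain owner could compute one with openssl (the domain `example.test` and the self-signed certificate are stand-ins; per RFC 6698, "3 0 1" means match the SHA-256 digest of the server certificate itself):

```shell
cd "$(mktemp -d)"   # work in a scratch directory

# Generate a throwaway self-signed certificate purely for illustration.
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out server.crt \
  -days 1 -nodes -subj "/CN=example.test" 2>/dev/null

# The TLSA "certificate association data" for usage 3 (DANE-EE),
# selector 0 (full certificate), matching type 1 (SHA-256) is the
# SHA-256 digest of the certificate in DER form.
digest=$(openssl x509 -in server.crt -outform DER | sha256sum | cut -d' ' -f1)
echo "_443._tcp.example.test. IN TLSA 3 0 1 $digest"
```

A relying party that validates this record via DNSSEC will accept only the pinned certificate, so a mis-issued certificate from any other CA is rejected.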

Q: Since DANE needs DNS records to be signed with DNSSEC, isn’t DNSSEC validation a major issue that heavily limits DANE’s use?

A: Applications and services often will evolve in tandem. DNSSEC capabilities are built into nameservers starting at the root, moving through top level domains like .com and .net operated by Verisign, and then moving into the next levels. Some records are already signed and so they can be validated if a relying party requests it. But you don’t need to validate everything or sign everything in order to add security for a particular set of records. If there is some application that needs the extra assurances provided by DANE (establishing a secure connection with a web server for banking transactions or enabling secure email), that application can stand by itself. So you don’t need everyone to accept DNSSEC in order to have a greater security and the use of DANE within your own application.

Q: How do you see the future of DNSSEC in the Internet security space?

A: I think we will continue to rely on DNSSEC as a building block. It will become a standard part of any offering. As new generations of nameservers, recursive nameservers, applications, relying parties and so on are developed, they'll build on a better foundation because the technique is available. So DNSSEC will gradually become commonplace.

There will be certain applications that will drive its demand faster than others, and I think those are the ones that will get the additional value from what it will effectively become – a global distributed directory of signed information.

Q: How can Web Hosting providers, ISPs, Hardware vendors and Software developers each play their part in supporting DNSSEC?

A: If you are a hosting provider, you want to differentiate your services by offering DNSSEC for some or perhaps all of your customers. That means as a hosting provider, you want a nameserver that has DNSSEC capabilities, or you outsource to someone else who has those capabilities for you.

If you are an application developer preparing a browser, an operating system or a mobile client, then you want validation of the DNS information that comes back to be an option in your implementation, whether you perform it locally or rely on a recursive nameserver that validates for you and confirms that the check is complete.

So each party has the option of putting these services in place. But the real key is to put them in place where they make a difference. If there is a particular application that benefits from this distributed global directory of signed information, that’s the place to put most of the emphasis at first; then you can pull the other parts along.

Q: Moving on, the recently published technical report by Verisign, titled “New gTLD Security and Stability Considerations,” warns that the addition of hundreds of new gTLDs over the next year could destabilize the global operation of the DNS, among other significant consequences. Can you highlight the main areas of focus in the report and some potential problems that you think need to be resolved in a timely manner?

A: Earlier in 2013, Verisign published a research report outlining some of the concerns that we have about security, stability and reliability as new generic top-level domains are introduced.

Now, we have observed the operation and gradual pace of growth of generic top-level domains and country-code top-level domains, but the addition of so many new gTLDs at once is unprecedented. It is a huge multiplier of the use of the root servers, with different kinds of usage patterns that may not have been anticipated previously.

We do commend ICANN for its commitment to ensuring the security, stability and reliability of the root servers and the internet in general as the new gTLDs are introduced, and it is in that spirit that we have raised these concerns.

Some of the high points of these concerns: one is that the rapid pace of change for the root servers, which effectively adds an order of magnitude to the number of objects and perhaps even more to the amount of traffic, needs to be measured carefully. There is no one root server; there are in fact 13 different root servers by design, with multiple independent operators. So to have a full picture of the impact, it’s important to have the right measurements in place. These measurements matter because the root servers are not always used in the way you might expect. In fact, we have seen that 10% of the traffic to the root servers is for generic top-level domains that don’t actually exist. These requests come from throughout the internet to resolve names like .corp or .local, which are built into applications but are not delegated gTLDs.

So it’s important to understand the impact of this set of requests, which represents applications throughout the internet that assume these gTLDs are reserved for their own local use.

And that’s where the stability, security and reliability issues come in: if these applications assume that some generic top-level domains have not been delegated, what happens when they are? How would we measure and see the impact? Could that compromise security? Could that cause systems to fail? That’s the area we’d like to have more study on.
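The kind of measurement described in this answer, estimating what share of root-server queries name a TLD that does not exist, can be sketched roughly as follows. The delegated set and the query names here are illustrative stand-ins, not real root-zone data:

```python
# Rough sketch of classifying root-server queries by whether their TLD
# is actually delegated. The real root zone holds far more TLDs.
DELEGATED = {"com", "net", "org"}  # illustrative stand-in set

def undelegated_fraction(query_names):
    """Fraction of queries whose TLD is not in the root zone, e.g. names
    ending in .corp or .local leaking out of private networks."""
    misses = sum(
        1 for name in query_names
        if name.rstrip(".").rsplit(".", 1)[-1].lower() not in DELEGATED
    )
    return misses / len(query_names)

sample = ["www.example.com", "printer.local", "fileserver.corp", "mail.example.net"]
print(undelegated_fraction(sample))  # half of this sample is undelegated
```

Run continuously against real query logs, a classifier like this is what would reveal the 10% figure cited above, and how it shifts as new gTLDs are delegated.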

Q: Do you personally think that new gTLDs will have as significant an impact on the domain industry as is touted? After all, past gTLD launches like .biz, .info, .travel, .mobi, etc. failed to marginalize .COM’s dominant position.

A: The gTLD program, in which Verisign participates in a number of ways, is another way to give more choice to users and to the owners of resources who are looking for better ways to connect to each other: different ways of describing their presence on the internet, different languages, different character sets and so on. All of these will make the internet easier to use and more accessible.

The objectives of the new gTLDs are very significant. I can’t predict what will happen as these gTLDs progress, or comment on any specific gTLD in particular, because in any area of innovation the industry learns over a period of time. But we do expect that the established domain names, .net and .com in particular, will continue to be relied on for a long time to come.

Q: This one is regarding another one of Verisign’s initiatives. How serious is the IPv4 address shortage problem? Also, can you tell us how IPv6 resolves the problems associated with IPv4?

A: IPv4 is a great example of unexpected success. When the internet first started, everything was so small that it was thought that 32 bits’ worth of addresses would certainly be enough for the stage of the experiment they were working on at that time. And it was enough to take us until just recently, when the last block of IPv4 addresses was allocated.

Now, over the years, the internet community has found ways of using that same set of IPv4 addresses as effectively as possible, with all kinds of sharing, reuse, mappings, translations and so on. And that can continue, depending on what application you are trying to build, maybe for a few years or maybe even longer. But eventually it becomes too difficult to keep putting all this patchwork in place on a set of addresses that has run out. You can imagine the same thing happening in other domains: if you run out of mobile phone numbers, you need to add new area codes.

So, IPv6 is a complete breakthrough, because it has four times as many bits, and that’s an enormous exponential increase in the number of possible addresses. There is no foreseeable period in which IPv6 addresses would run out; in fact, IPv6 makes it possible to give out unique addresses for everything at every point in time. And the protocols and the parallel stacks of implementations are already being rolled out. Last year, there was an IPv6 day, when everyone who was participating enabled IPv6 so that you could reach their websites using the IPv6 protocol.

I think we will see co-existence for a period of time, because the existing IPv4 systems are already working. But in new applications, especially in the mobile internet, we will drive the use of IPv6 and then pull all the rest along.
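The size comparison in the answer above can be checked with Python’s standard `ipaddress` module: four times as many bits does not multiply the address count by four, it raises it to the fourth power.

```python
# Compare the IPv4 and IPv6 address spaces using the standard library.
import ipaddress

v4_total = ipaddress.ip_network("0.0.0.0/0").num_addresses  # 32-bit space
v6_total = ipaddress.ip_network("::/0").num_addresses       # 128-bit space

assert v4_total == 2 ** 32
# 4x the bits means the count is raised to the 4th power, not multiplied by 4.
assert v6_total == 2 ** 128 == v4_total ** 4
```

That exponential jump is why there is no foreseeable exhaustion of IPv6, even with unique addresses handed out per device.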

Q: To wrap up, what developments can we expect from Verisign Labs in Q3 and Q4 of 2013?

A: Well, at Verisign Labs, we are looking at the next generation of protocols and architectures for DNS and the way that it’s used. We have been active in promoting DANE and DNSSEC for some time, and I think people can expect to see more of that.

We have also been looking closely at the security, stability and reliability impact of new gTLDs, and we will likely have more to say on that too. In fact, Danny McPherson, the company’s Chief Security Officer, has started a blog series that outlines many of the points of concern from our perspective and others’ as well.

We are also in the process of incubating some interesting new ideas that could be quite transformative, so perhaps some of those could come out of the lab in Q3 and Q4 of this year as well.


FastWebHost Launches New Basekit Sitebuilder tool

Web hosting provider FastWebHost today announced that it now offers the Basekit Sitebuilder tool, which helps users create and launch websites in three easy steps from within their browser.

The aforementioned three steps to create a website with FastWebHost’s Basekit Sitebuilder tool are:

  • Choose a website template: A template selector allows users to choose from more than 200 customizable templates. A free two-page website is available with all FastWebHost accounts.
  • Add features: Users can enhance the functionality of their websites with additional features like Google Maps, eCommerce features and social media widgets. They can also add text, images and videos onto their chosen template with a drag-and-drop tool.
  • Customization: All templates can be personalized by changing color schemes, font styles, creative layouts and more, without any coding.

Users don’t need to download or upload any files during the whole process; the entire website can be created from within their browser.

Every website comes with cPanel web hosting. For more details, click here.
