Are you moving your files to the cloud? Shifting from an in-house solution to an external cloud environment might feel daunting, but it’s an important move for most businesses as they grow and as their technological needs change and expand.
Let’s take a look at some common problems involving cloud migration – and how you can solve them.
Problem #1: Rushing the migration without taking enough time to plan
Moving to a cloud-based file server isn’t something you want to rush. Unfortunately, many organizations make the mistake of hurrying their migration without taking the time to plan and create a proper strategy.
The last thing you want is to end up with unexpected downtime, or worse, the loss of important files or data.
Solution: Analyze your current infrastructure … and plan accordingly
As Daniel Hein points out in an article for SolutionsReview, migrating to the cloud can potentially take several months, depending on the size of your organization and the amount of data you need to move.
You’ll need to pay particular attention to assets that may need adjusting or even rebuilding completely once you migrate to the cloud.
Be realistic about the costs and timescales that your organization will face, too. It’s much better to be clear about these upfront than to find them spiraling out of control partway through the process.
Problem #2: Not training your employees adequately
When you deal with IT a lot, it’s all too easy to assume that others will be just as quick as you at picking up new technologies.
Some employees, though, may not find your new cloud-based systems at all intuitive to use. If you don’t provide enough training, you’re likely to face a drop in productivity (and even in job satisfaction). Plus, your IT team may be overwhelmed by support requests.
Solution: Allow time and resources for employee training
Make sure that your cloud migration strategic plan also includes the time and resources to train employees on the new systems. That could involve a mix of:
Hands-on live training where employees are shown what to do and have a chance to try it out in real-time, so they can easily ask questions if they’re confused.
Pre-recorded video demos or written documentation on how to use the new cloud-based system.
One-to-one training, in a small company or for specific employees who will be using the new system a lot.
Problem #3: Not accounting for ongoing costs
When moving to a cloud-based solution, it’s not just about the upfront cost of servers or even the ongoing costs of bandwidth and your IT team’s time. You also want to take into account the other ongoing costs that you’re likely to face. As Sulakshana Iyer explains:
“Cloud server management includes ongoing operations such as industry compliance, security certificates, monitoring application performance, up-scaling servers, and more.”
Solution: Be clear about ongoing, not just upfront, costs of cloud migration
While your cloud-based systems may well be cheaper than your previous ones, you still need to be clear about the ongoing costs you’ll face – both in direct money paid out and in employee time.
Make sure the company you’re using for your cloud-based server clearly lays out the costs for you and be sure to factor in the indirect costs as well. You may want to err on the side of overestimating how much staff time the migration will take: that way, you’re covered even if something doesn’t go as smoothly as you hoped.
Cloud-based servers are a great option for both big and small businesses. By ensuring that you consider and face up to potential problems upfront, you’ll pave the way for a smooth and easy migration.
Are you handling your clients’ data, or recording transaction information? What about dealing with other systems or generating new data? Or storing data from computers, phones, or IoT devices?
When it comes to using data and storing it in a structured way, databases have become practically essential. Of the various ways in which companies can store data, a relational database running on a cloud computing platform is among the most straightforward.
Relational databases offer a declarative method for defining data, which is stored in tables as rows. They are built on SQL (Structured Query Language), which is used to extract and manipulate data from the related tables in a database.

Relational databases are useful for handling highly structured data and support ACID (Atomicity, Consistency, Isolation, and Durability) transactions. They also help avoid data duplication.
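These ideas – tables and rows, declarative SQL queries across related tables, and atomic transactions – can be sketched in a few lines using Python’s built-in sqlite3 module (the table and column names here are purely illustrative; any relational engine such as MySQL or PostgreSQL behaves the same way):

```python
import sqlite3

# In-memory database; MySQL, PostgreSQL, etc. work the same way at this level.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, "
             "customer_id INTEGER REFERENCES customers(id), total REAL)")

conn.execute("INSERT INTO customers VALUES (1, 'Acme Corp')")
conn.execute("INSERT INTO orders VALUES (100, 1, 250.0)")
conn.commit()

# Declarative query: a JOIN pulls related rows from both tables.
row = conn.execute(
    "SELECT c.name, o.total FROM customers c JOIN orders o ON o.customer_id = c.id"
).fetchone()
print(row)  # ('Acme Corp', 250.0)

# Atomicity: if any statement in the transaction fails, everything rolls back.
try:
    with conn:  # the connection context manager commits or rolls back as a unit
        conn.execute("INSERT INTO orders VALUES (101, 1, 99.0)")
        conn.execute("INSERT INTO orders VALUES (101, 1, 42.0)")  # duplicate id -> error
except sqlite3.IntegrityError:
    pass

# Neither insert survived: the failed transaction left the table untouched.
count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 1
```

Because the orders table stores only a customer_id rather than a copy of the customer’s details, the data about Acme Corp exists in exactly one place – which is how the relational model avoids duplication.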
Current market trends of relational databases
Over the years, relational databases have become better, stronger, faster, and easier to work with. They have become the most widely accepted model for databases. Stack Overflow’s developer survey found that 58.7% of respondents prefer SQL or relational databases over other databases.
Relational databases will also account for over 80% of the total operational database market by 2022, according to IDC.
This confirms that more and more organizations will be using relational models in the coming years.
While companies can still build and run their own database servers in the cloud, many prefer to simply buy a relational database as a service, such as ApsaraDB for RDS from Alibaba Cloud. What could be easier?
In this post, we are focusing on the best relational databases and comparing them to help you select the most suitable one for your next project.
Best relational databases – at a glance
Alibaba Cloud ApsaraDB for RDS
Amazon Relational Database Service
Google Cloud Spanner
IBM Db2 on Cloud
Microsoft Azure SQL Database
Alibaba ApsaraDB for RDS
Alibaba Group entered the relational database market in 2016. In the overall database market, Alibaba generated the third largest cloud-only database management system (DBMS) revenue among global players in 2018.
Alibaba Cloud ApsaraDB for RDS is an online database service that lets you focus on your core business by handling the administrative tasks involved in managing a database. It supports RDBMS engines such as MySQL, SQL Server, PostgreSQL, Postgres Plus Advanced Server (PPAS), and MariaDB.
Below is a diagram of ApsaraDB for RDS:
ApsaraDB for RDS handles the complete range of database tasks, such as monitoring, migration, backup, recovery, and other database functions. It can also protect your databases against network attacks and other security threats.
Some of the main features of ApsaraDB for RDS include:
Databases are built in a high availability infrastructure for continuous database services.
Automated replication between primary and secondary instances, data backup, and log backup to assure data reliability.
It protects your data from being stolen and addresses security vulnerabilities on time.
It guarantees the normal performance of database instances through regular maintenance and management.
The user can expand storage and memory capacity at any time.
The service is billed based on your actual resource usage via two methods: Subscription and Pay-As-You-Go. See the pricing section for more details.
Google Cloud Spanner

Spanner is Google’s fully managed relational database service, designed to provide SQL queries, schemas, horizontal scalability, and automatic, synchronous replication for high availability.
This relational database will look familiar to anyone who has worked with SQL, but with less downtime than a traditional database. Google Cloud Spanner provides features including automatic multi-site replication and failover, consistent reads, and global transactions.
Spanner can provide strong consistency and high-performance transactions globally with availability SLAs of 99.999%, even in multi-region clusters. It transforms the administration and management of the database and makes the app development more systematic.
Some of the main features of Google Cloud Spanner include:
It supports the SQL interface to read and write data.
It provides highly consistent reads with no interruption.
It lets you perform stale reads using the bounded or exact staleness types.
Replication is synchronous and highly consistent.
It secures your applications and data from fraudulent activity and spam.
It generates real-time insights into massive volumes of data in a more organized, secure, and cost-effective way.
IBM Db2 on Cloud

IBM was the first company to build a relational database, System R. It remains a leader in this market with a wide range of solutions, led by its longstanding Db2 platform.
IBM Db2 on Cloud is a fully managed cloud database that provides strong performance and a high-availability option with a 99.99 percent uptime SLA. Db2 on Cloud can scale storage and compute independently and receives rolling security updates.
See the diagram of IBM Db2 on Cloud:
Db2 on Cloud supports various data connectivity methods and offers Oracle PL/SQL compatibility. It is deployable on both IBM’s own cloud and Amazon Web Services (AWS), and you pay for what you use.
Some of the main features of IBM Db2 on Cloud include:
It provides multi-cloud deployment capabilities.
There is a 99.99 percent uptime service level agreement. This high availability option enables users to update and scale operations with zero downtime deployment of applications running on Db2 Cloud.
It delivers deployment capabilities on an isolated network, accessible via a secure VPN.
It simplifies AI-based application development using integrations with IBM AI and Machine Learning tools.
It supports independent scaling of compute and storage. With compute, businesses can scale up when demand rises, and scale down when demand falls. With Storage, businesses can expand their storage when needs grow.
It offers disaster recovery (DR) capabilities.
It supports SSL connections and rolling security updates.
Microsoft Azure SQL Database

Azure SQL is a fully managed, scalable, and intelligent relational database service that offers the widest SQL Server engine compatibility.
Azure SQL is built on the SQL Server engine with 99.99 percent availability. It includes a built-in intelligence feature that learns app patterns and adapts to boost performance, reliability, and data protection. Azure SQL can be a great choice for cloud applications, as it lets you process both relational and non-relational data structures.
Below is a diagram of the Azure SQL database:
Azure SQL provides high performance with multiple resource types, service tiers, and compute sizes. It delivers dynamic scalability without downtime, global availability, advanced security, and intelligent optimization.
Some of the main features of Azure SQL Database include:
It continuously learns app patterns and adapts through a built-in intelligence feature to boost performance, reliability, and data protection.
You can scale as you need, with zero application downtime.
It has built-in monitoring and alerting capabilities to find performance insights on a real-time basis.
Its business continuity and scalability features provide automatic backup, disaster recovery, load balancing, high availability, and automatic recovery from datacenter-scale failure, without data loss.
It protects your data with encryption, limiting user access to appropriate data, authentication, and continuous monitoring and auditing.
Azure SQL Database charges you per database, depending on its pricing tier.
This relational database comparison aims to provide you with information about different database options. We hope you found what you were looking for. In case you are still unsure or have questions, please feel free to contact us and our experts will talk to you.
Brief comparison: Alibaba vs Amazon vs Google vs IBM vs Microsoft
It is a complex and challenging task for businesses to handle the integrity and functionality of their domain name system (DNS). DNS management without the right services or tools can impact the protocols, migration, performance, costs, and productivity.
In this digital transformation era where cloud has become the norm, you need to use highly available, global cloud DNS management services. The right service will provide you control over your DNS, ability to manage domains/subdomains, import/export domains, whenever you want. Before diving into finding the best DNS management service, let’s first learn exactly what DNS management is.
What is DNS management?
DNS is a key component of all communication on the internet: it works as the phonebook of domain names, translating them into numeric IP addresses.

A DNS setup can include numerous record types, large numbers of IP addresses, and plenty of other data. The process of keeping all of this organized and up to date is called DNS management.
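The idea of DNS records resolving names to addresses can be sketched with a toy in-memory resolver (purely illustrative: the zone data is hypothetical, and a real DNS service handles many more record types at global scale):

```python
# A toy zone: record type -> name -> value. Real DNS servers manage the same
# kinds of records (A, AAAA, CNAME, MX, TXT, ...), just at far larger scale.
ZONE = {
    "A":     {"example.com": "93.184.216.34"},   # name -> IPv4 address
    "CNAME": {"www.example.com": "example.com"}, # alias -> canonical name
}

def resolve(name: str, max_hops: int = 8) -> str:
    """Follow CNAME aliases until an A record yields an IP address."""
    for _ in range(max_hops):
        if name in ZONE["A"]:
            return ZONE["A"][name]
        if name in ZONE["CNAME"]:
            name = ZONE["CNAME"][name]  # hop to the canonical name
        else:
            raise LookupError(f"no record for {name}")
    raise LookupError("CNAME chain too long")

print(resolve("www.example.com"))  # 93.184.216.34
```

A DNS management service maintains exactly this kind of mapping for your domains, with tooling for editing records, propagating changes, and monitoring resolution.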
To manage DNS properly and keep things easy to make sense of, enterprises need DNS management services. In this article, we list and compare the best cloud-based DNS management services.
Best cloud DNS management providers: A quick comparison
Here is the detailed comparison of top cloud DNS providers:
1. Alibaba Cloud DNS
Alibaba Cloud offers a stable, secure, and fast cloud-based DNS management solution. It runs on a BGP (Border Gateway Protocol) anycast network with 19 nodes globally.
Whenever updates are made or new DNS records are added, Alibaba Cloud DNS takes up to 10 seconds to propagate the changes to its DNS servers. On the security side, Alibaba Cloud DNS provides several features, including DNS security protection, load balancing, rate control, malicious traffic scrubbing, and auto-failover. These features help maintain high availability and stability.
Key features of Alibaba Cloud DNS management service:
2. Amazon Route 53

Amazon Route 53 allows developers and enterprises to route end users to applications by converting domain names into IP addresses. Known for reliability, high availability, and scalability, Amazon Route 53 is fully compliant with IPv6.
It can be used to link user requests to infrastructure hosted on AWS services, as well as to infrastructure running outside AWS. To monitor the health of applications and their endpoints, the service can run DNS health checks.
DNS management in AWS is made easier by a simple visual editor, which lets businesses manage global traffic through several routing types, such as Geo DNS, geoproximity, latency-based routing, and weighted round robin.
Key features of Amazon Route 53 DNS management service:
Resolver: To form conditional forwarding rules and DNS endpoints, and resolve custom names.
Latency-based routing: For routing the users to the region offering the lowest latency.
Traffic flow: For routing users to the best possible endpoint on the basis of latency, health, and geo-proximity.
Private DNS for Amazon VPC: Control the exposure of DNS data, while managing custom domain names for the internal AWS resources.
DNS failover: Avoid website outages by automatically routing end users to an alternate location.
Health checks: Check and monitor the health of the application, web servers, and resources.
Domain name registration: AWS DNS management service allows registration of domain names and transferring of existing ones to Route 53.
Integration with Amazon ELB: Route 53 provides integration with Elastic Load Balancing (ELB).
AWS Management Console: Route 53 is compatible with the AWS Management Console.
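Weighted round robin, one of the routing types listed above, is simple to illustrate: each record gets a weight, and traffic is split in proportion to the weights. Here is a minimal sketch (the endpoint names and weights are hypothetical, not taken from any real Route 53 configuration):

```python
import random

# Hypothetical endpoints with routing weights: traffic is split 3:1.
endpoints = {"us-east-1.example.com": 3, "eu-west-1.example.com": 1}

def pick_endpoint(rng: random.Random) -> str:
    """Weighted choice: each endpoint is selected weight/total of the time."""
    names = list(endpoints)
    weights = [endpoints[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)  # seeded so the simulation is reproducible
picks = [pick_endpoint(rng) for _ in range(10_000)]
share = picks.count("us-east-1.example.com") / len(picks)
print(round(share, 2))  # ≈ 0.75, i.e. 3 out of every 4 requests
```

A managed DNS service performs this selection at resolution time, so shifting traffic between endpoints is just a matter of editing the weights.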
3. Azure DNS

Microsoft Azure is one of the best cloud DNS providers, allowing you to manage DNS records with the same account you use for other Azure services. Billing and support contracts also remain the same. You can host Azure DNS alongside other Azure applications.
Microsoft uses anycast networking to route DNS queries to the nearest name servers for better performance. When new DNS records are added, it takes only a few seconds for the Azure DNS management service to update the name servers.
Furthermore, Azure DNS resolves names in a VNet (Virtual Network) without the need to create and manage custom DNS services.
Azure DNS doesn’t let customers purchase domain names; domains must be bought from third-party registrars and can then be hosted in Azure DNS. DNS management in Azure is based on Azure Resource Manager, which provides role-based access control, activity logs, and resource locking.
Role-based access control lets owners govern which individuals in the organization can perform specific actions. Activity logs are useful for monitoring what users in the organization have done, while resource locking prevents users from making changes to sensitive resources.
4. Google Cloud DNS

Google Cloud DNS runs on the same infrastructure as Google itself. Known for scalability and low latency, Google’s DNS management service is an economical way to convert domain names into IP addresses. It allows businesses to create and manage DNS zones with an easy-to-use UI and command-line interface.
Google uses a network of anycast name servers to deliver low-latency access and high availability. As for scalability, Cloud DNS scales automatically to handle large numbers of zones and records. You can also keep DNS servers on-premises or with third-party DNS services.
You can also create managed zones, which make it easy to add, modify, and delete DNS records. Permissions can be handled, and modifications monitored, with little effort.
Key features of Google Cloud DNS management service:
5. IBM Cloud DNS

IBM Cloud provides a domain name registration and management service with an easy-to-use interface and the ability to track activities. DNS in IBM Cloud can be managed easily using the IBM Cloud console, which allows you to view, edit, or delete the DNS zones associated with your account.
Using the IBM Cloud console, every aspect of DNS can be managed. IBM Cloud DNS supports cross-platform functionality and allows you to extend DNS zones that are not on the IBM Cloud network. These DNS zones can be added one domain at a time or several at once. Supported DNS record types in IBM Cloud DNS include A, AAAA, CNAME, MX, SRV, and TXT.
Key features of IBM Cloud DNS:
Domain name registration
Check account activity
Registration renewal reminders
Domain dispute resolution policy
Best cloud DNS providers comparison: Alibaba Cloud v/s AWS v/s Azure v/s Google Cloud v/s IBM Cloud
Following is a quick yet comprehensive tabular comparison of top cloud DNS providers:
Management of DNS:
• Alibaba Cloud DNS – Alibaba Cloud console
• Amazon Route 53 – AWS Management Console
• Google Cloud DNS – Google Cloud Platform Console, gcloud command-line tool, REST API
When talking about the best cloud DNS service providers, the names of Alibaba Cloud, AWS, Azure, Google Cloud, and IBM Cloud emerge among the top. Now, analyze your business requirements and have a deep look at the comparison above to find which DNS provider is the best one for you.
So, which cloud DNS service provider do you think will fit your business best?
Disclaimer: The information contained in this article is for general information purpose only. Price, product and feature information are subject to change. This information has been sourced from the websites and relevant resources available in the public domain of the named vendors on 6 February 2020. Daily Host News makes best endeavors to ensure that the information is accurate and up to date, however, it does not warrant or guarantee that anything written here is 100% accurate, timely, or relevant to the website visitors.
Wondering which public cloud storage providers are the best? We have got the answer.
Mobile devices, the internet, social media platforms, communication channels, and online shopping produce colossal amounts of data each day. The volume of data generated and transferred is staggering, and it is accelerating at an unprecedented rate.
Gartner predicts that by 2020, more than half of major new business processes and systems will incorporate some element of the internet of things (IoT). With that, the amount of data generated, transferred for analysis by big data applications, and stored is bound to grow at a mind-boggling rate.
Due to this upsurge in big data and increased cloud adoption across organizations, demand is growing for scalable storage solutions capable of handling and archiving ever more digital content. Hence, the cloud storage market is on an upward trend.
Cloud storage market growth
The global cloud storage market is expected to grow at a CAGR of 21.9% and reach $207.05 billion by 2026, according to recent research by Research and Markets.
More and more businesses are moving their critical IT infrastructure and data to the cloud, as storing data on the cloud allows them to store files online and access them from any location via the internet. Businesses get three main cloud storage solutions to select from:
Public cloud storage service – suitable for unstructured data
Private cloud storage service – for more control over data
Hybrid cloud storage service – for increased flexibility
Public cloud storage is the most widely adopted storage solution among enterprises, as both the hardware and the network are maintained by the cloud storage service provider – no maintenance worries for the enterprise.
There are hundreds of cloud storage providers in the industry offering different solutions, but choosing the best platform to store your data depends on your business needs.
Read on to take an informed decision.
How to choose the best cloud storage service for my business?
Determine how much space you need and how frequently you will store data, e.g. daily, weekly, or monthly.
Determine characteristics you need in your cloud storage – scalability, control, bandwidth, multi-tenancy, latency, data availability and more.
Determine the type of storage architecture your applications/instances require i.e. block storage or file storage or object storage.
There are many leading players in the global cloud storage service market, but the four leading ones are Amazon Web Services (AWS), Microsoft, Google and IBM. All of them have a variety of storage options. Each service has its own pros and cons depending on the purpose of using it and on the specific use case.
Here is the detailed and quick cloud storage providers comparison for businesses of all sizes:
Object-based storage, as it is called, is a hierarchy-free method of storing data as discrete objects, where each object includes a variable amount of metadata, the data itself, and a unique identifying name that an application uses to retrieve the data.
Object storage is ideal for unstructured data. It offers metadata characteristics and is massively scalable.
Cloud storage providers offer three tiers of object storage, classified by how frequently data is accessed: hot for instantaneously accessible data, cool for infrequently accessed data and cold for archival and rarely accessed data.
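The object model described above – data plus metadata plus a unique ID, with a tier chosen by access frequency – can be sketched in a few lines. This is a conceptual illustration, not any provider’s API; the thresholds in choose_tier are assumptions chosen to match the hot/cool/cold definitions above:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class StoredObject:
    data: bytes
    metadata: dict = field(default_factory=dict)  # variable amount of metadata

class ObjectStore:
    def __init__(self):
        self._objects = {}  # flat namespace: no directory hierarchy

    def put(self, data: bytes, **metadata) -> str:
        key = str(uuid.uuid4())  # the unique identifying name
        self._objects[key] = StoredObject(data, metadata)
        return key

    def get(self, key: str) -> StoredObject:
        return self._objects[key]

def choose_tier(accesses_per_month: float) -> str:
    """Pick a storage class from expected access frequency (illustrative cutoffs)."""
    if accesses_per_month >= 1:
        return "hot"        # instantaneously accessible data
    if accesses_per_month >= 1 / 12:
        return "cool"       # infrequently accessed data
    return "cold"           # archival, rarely accessed data

store = ObjectStore()
key = store.put(b"quarterly-report.pdf contents", content_type="application/pdf")
print(store.get(key).metadata["content_type"])  # application/pdf
print(choose_tier(30), choose_tier(0.5), choose_tier(0.01))  # hot cool cold
```

Note that retrieval is always by key, never by path: the flat namespace is what lets object stores scale out so massively.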
1. Amazon S3

AWS offers a range of storage classes depending on the use case. S3 is the primary object storage platform of AWS. It offers S3 Standard-Infrequent Access for cool storage and Glacier for cold storage.
Amazon S3 Standard – The storage option for frequently accessed data, ideal for use cases such as cloud applications, dynamic websites, content distribution, gaming, and data analytics. It delivers low latency and high throughput.

Amazon S3 Standard – Infrequent Access (S3 Standard-IA) – The storage option for less frequently accessed data, like long-term backups and disaster recovery.

Amazon Glacier – A highly durable storage system optimized for infrequently accessed data, or “cold” data such as end-of-lifecycle, compliance, or regulatory backups. Data is stored in archives for long-term retention and is encrypted and immutable.
2. Azure Object Storage
Azure Blobs is Microsoft’s object storage solution for the cloud. Blob storage is optimized for storing any type of unstructured (text or binary) data – images, videos, audio, documents, and more. Azure storage offers mutability, flexibility, and strong data integrity.
Blob storage is ideal for storing files for distributed access; serving images or documents directly to a browser; streaming video and audio; writing to log files; and storing data for backup and restore, disaster recovery, archiving, and analysis by an on-premises or Azure-hosted service. It is one of the most reliable cloud storage options for enterprises.
Azure offers different storage tiers:
Hot access tier: For data that is in active use or expected to be accessed frequently, and for data staged for processing and eventual migration to the cool tier.
Cool access tier: For data that will remain in the cool tier for at least 30 days – short-term backup and disaster-recovery datasets, older media content that must still be available immediately when accessed, and large data sets.
Archive access tier: For data that will remain in the archive tier for at least 180 days and can tolerate several hours of retrieval latency.
Note: The archive tier is available only at the blob level, not at the storage-account level. Azure also offers a premium tier, ideal for workloads that require fast and consistent response times.
3. Google Cloud Storage
Google Cloud Storage (GCS) offers unified object storage for any workload. It offers four classes spanning high-performance object storage and backup and archival storage. All four classes offer low latency and high durability.
High-performance storage (Hot): GCS offers multi-regional and regional storage for high-frequency access data.
Multi-Regional storage enables storing of data that is frequently accessed around the world, such as serving website content, streaming videos, or gaming and mobile applications.
Regional storage enables frequent access to data in the same region as your Google Cloud Dataproc or Google Compute Engine instances – for data analytics, for example.
Archival Cloud storage (Cool & Cold): It offers Nearline and Coldline storage for low and lowest frequency access data.
Nearline storage: Cool storage, i.e. for data that a business expects to access less than once a month but multiple times throughout the year. Ideal for backup and for serving long-tail multimedia content.

Coldline storage: Cold storage, i.e. for data that a business expects to access less than once a year. Best for disaster recovery or archived data.
4. IBM Cloud Object Storage
IBM Cloud offers flexible and scalable cloud storage with policy-based archiving for unstructured data. This resilient cloud storage service is designed for data archiving (long-term retention of infrequently accessed data), analytics and backup, and web and mobile applications.
IBM offers four storage-class tiers that come integrated with the Aspera high-speed data transfer option, enabling easy data transfer to and from Cloud Object Storage, and with query-in-place functionality, so analytics can run directly against stored data.
IBM Cloud Object Storage class tiers:
Standard Storage for active workloads requiring high performance and low latency, and data that needs multiple and frequent access within a month. Usage scenario includes active content repositories, web content and mobile streaming, DevOps, analytics, and collaboration.
Vault Storage for less active workloads requiring immediate, real-time access but infrequently, once a month or even less. Backup and digital asset retention are the common use cases for Vault Storage.
Cold Vault for cold workloads, where data requires immediate, real-time access on demand but is primarily archived i.e. accessed a few times a year. Long-term backup, large data set preservation like scientific data, or older media content are the common use cases.
The Flex storage class tier is used for dynamic workloads (a mix of hot and cold data) whose access patterns vary. Common use cases include cloud-native analytics, cognitive workloads, and user-generated apps.
File-based storage stores data in a hierarchical structure, saving data in files and folders. Many workloads and applications rely on shared file systems, i.e. they need a file system and access to shared files.

Cloud file storage is a service that stores data in the cloud using an architecture based on common file-level protocols like NFS (Network File System) and SMB (Server Message Block). It provides shared data access to servers and applications through shared file systems.

Cloud file storage is considered ideal for unstructured and semi-structured data like spreadsheets, presentations, and other file-based data, and for workloads like large content repositories, media stores, development environments, and user home directories.
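The hierarchical model is easy to demonstrate: applications navigate directories and open files by path, and the same code works whether the file system is local or a cloud file share mounted over NFS/SMB. A small sketch using a temporary directory (the file and directory names are illustrative):

```python
import tempfile
from pathlib import Path

# Unlike object storage's flat namespace, file storage organizes data in a
# hierarchy of directories. Paths work the same way whether the file system
# is local or mounted from a cloud file service over NFS/SMB.
with tempfile.TemporaryDirectory() as root:
    home = Path(root) / "home" / "alice" / "reports"
    home.mkdir(parents=True)                 # build the hierarchy
    (home / "q1.txt").write_text("Q1 numbers")
    (home / "q2.txt").write_text("Q2 numbers")

    # Shared-file-system access pattern: traverse the tree, open by path.
    found = sorted(p.name for p in home.glob("*.txt"))
    print(found)  # ['q1.txt', 'q2.txt']
    print((home / "q1.txt").read_text())  # Q1 numbers
```

This path-based access is exactly why "lift and shift" migrations favor file storage: existing applications keep working without being rewritten for an object API.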
File Storage – Overview

Capacity:
• Amazon EFS – petabytes of data
• Azure File Storage – 2 PB for US and Europe, 500 TB per storage account for all other regions including the UK; max file share size 5 TiB
• Google Cloud Filestore – 63.9 TB per share
• IBM File Storage – file shares up to 12 TB

Throughput:
• Amazon EFS – 50 MB/s per TB of storage (can burst to 100 MB/s)
• Azure File Storage – 60 MB/s per file share
• Google Cloud Filestore – 100 MB/s to 700 MB/s per TB

IOPS:
• IBM File Storage – up to 48k IOPS

Availability and redundancy:
• Amazon EFS – data stored across multiple AZs in the region
• Azure File Storage – LRS, GRS, or ZRS

Encryption and backup:
• Data-at-rest encryption
• Amazon EFS – EFS-to-EFS backup solution
• Google Cloud Filestore – backups, snapshots, and instance failover aren’t available
1. Amazon Elastic File System (EFS)

Elastic File System (EFS) is the scalable file storage solution for the AWS cloud. Amazon EFS is fully managed, easy to set up, and has elastic storage capacity – your applications get on-demand storage whenever they need it. It is built to provide the low latency, throughput, and IOPS needed for a broad range of workloads.
It provides file locking and strong data consistency, and it supports the Network File System version 4 protocol (NFSv4.1 and NFSv4.0), so it integrates well with today’s applications and tools.
With Amazon EFS, you get scalable file storage for use with Amazon EC2 and redundant data storage capacity across multiple availability zones. EFS also provides encryption support for file systems both in transit and at rest.
Elastic File Storage usage scenarios
Its general-purpose performance mode is ideal for latency-sensitive use cases, like web serving environments, content management systems, home directories, and general file serving.
Amazon EFS also supports developer tools, enterprise applications, container storage, media and entertainment processing workflows, database backups, and big data analytics workloads.
2. Azure File Storage
Azure Files is Microsoft’s file storage service, which offers fully managed file shares in the cloud. These shares are accessible from anywhere via the industry-standard Server Message Block 3.0 (SMB) protocol.
Azure File Storage is cross-platform and application compatible. It allows applications to mount file shares from anywhere in the world, both from on-premises deployments (Windows, Linux, and macOS) and from the cloud.
Azure File Storage also enables modern application development via a REST API. You get hybrid flexibility with Azure File Sync, which allows caching and synchronization of Azure file shares on Windows Servers for regional access.
Azure File Storage usage scenarios
Azure Files is designed to support a variety of use cases: replacing or supplementing on-premises file servers, “lift and shift” of applications to the cloud, and simplifying new cloud development projects such as log writing by cloud applications, shared access to application settings, and more.
3. Google File Storage
Cloud Filestore is Google’s managed file storage service for applications that require a shared file system and a file system interface.
Google offers a fully managed network-attached storage (NAS) service in the cloud.
It offers low latency, high throughput, and high IOPS for file-based workloads. GCP Filestore gives users the freedom to tune the file system for a particular workload and to mount Filestore file shares on Compute Engine VMs.
Google Cloud Filestore is integrated with Google Kubernetes Engine and with the rest of the Google Cloud portfolio.
Cloud Filestore is available as a storage option in the Google Cloud Platform (GCP) console.
Google file storage usage scenarios
Common use cases include home directories, rendering workflows, application migrations, web content management, and media processing.
The service is in beta as of this writing.
Google Cloud Storage FUSE – an overview
Cloud Storage FUSE is an open-source tool from Google that mounts Cloud Storage buckets as file systems on Linux or macOS systems. With it, you can upload and download Cloud Storage objects via standard file system semantics.
Despite presenting a file system interface, Cloud Storage FUSE is not an NFS or CIFS file system on the backend.
4. IBM File Storage
IBM offers fast, durable, flash-backed, NFS-based file storage. IBM Cloud File Storage provides Endurance tiers for general-purpose workloads and flexible IOPS customization: users can scale storage capacity as workload demands change and control the total IOPS per storage volume.
File shares can be created in granular increments (from 1,000 GB to 12,000 GB), with performance provisioning also available. IBM offers per-gigabyte pricing tiers up to 48k IOPS for its file storage, with features including flash-backed storage, snapshots and replication, encryption for data at rest, volume duplication, expandable volumes, and adjustable IOPS.
(These features are limited to the US, EU, Australia, Canada, Latin America and Asia Pacific regions)
Block storage is the data storage used in storage-area network (SAN) environments, where data is stored in volumes called blocks. Each volume or block acts as an individual hard drive, configured by the server administrator.
Database storage, file systems, and RAID arrays are common use cases for block storage.
Each provider breaks its block storage offerings into two categories: traditional magnetic spinning hard-disk drives, and newer solid-state drives (SSDs), which are generally more expensive but offer better performance.
1. AWS Block Storage-EBS
Amazon Web Services EBS (Elastic Block Store) provides block-level storage for long-term, persistent, quickly accessible data, for use with EC2 instances. EBS volumes can be attached to any running instance in the same Availability Zone.
AWS EBS offers two types of volumes: SSD-backed storage (next-generation high-performance drives) for transactional workloads, and HDD-backed storage (traditional magnetic drives) for throughput-intensive workloads.
Throughput Optimized HDD is designed for frequently accessed, throughput-intensive workloads: big data (the Hadoop/HDFS ecosystem and Amazon EMR clusters), data warehousing applications (Vertica and Teradata), and log and stream processing (Kafka and Splunk).
Cold HDD is designed for less frequently accessed workloads: colder data requiring fewer scans per day.
General Purpose SSD suits a wide variety of transactional workloads: boot volumes, low-latency interactive apps, and dev & test.
Provisioned IOPS SSD is designed for latency-sensitive transactional workloads and I/O-intensive NoSQL and relational databases such as Oracle, Microsoft SQL Server, PostgreSQL, MySQL, Cassandra, and MongoDB.
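To make the four categories above concrete, here is a hypothetical volume-type picker. The decision rules simply paraphrase the descriptions above; they are illustrative, not an official AWS decision tree.

```python
# Hypothetical EBS volume-type picker, paraphrasing the descriptions above.
# The rules and the io1/gp2/st1/sc1 shorthand are illustrative only.
def pick_volume_type(workload: str, latency_sensitive: bool, frequent_access: bool) -> str:
    if workload == "transactional":
        # SSD-backed volumes serve transactional workloads
        return "io1 (Provisioned IOPS SSD)" if latency_sensitive else "gp2 (General Purpose SSD)"
    # throughput-oriented workloads use HDD-backed volumes
    return "st1 (Throughput Optimized HDD)" if frequent_access else "sc1 (Cold HDD)"

print(pick_volume_type("transactional", True, True))    # io1 (Provisioned IOPS SSD)
print(pick_volume_type("throughput", False, False))     # sc1 (Cold HDD)
```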
2. Azure Block Storage
Azure Managed Disks is Microsoft Azure's block storage offering. Azure creates and manages disks of your choice for Azure IaaS VMs and manages the storage accounts used for VM disks.
Azure Block storage offers four types of Managed Disks:
Ultra SSD Managed Disks for most demanding workloads and extremely scalable performance, for data-driven applications, such as SAP HANA, SQL Server, transaction-heavy workloads, complex analytical modelling, gaming, rendering and low queue-depth databases.
Premium SSD Managed Disks for production and performance-sensitive workloads, such as SQL Server, Oracle databases, Microsoft Dynamics, Microsoft Exchange Server, MySQL, Cassandra, MongoDB and SAP Business Suite.
Standard SSD Managed Disks for cost-effective and consistent performance. Used for web servers, low-IOPS application servers, lightly used enterprise applications and dev/test scenarios.
Standard HDD Managed Disks for VMs running latency-insensitive workloads, for backup and archiving applications, for noncritical workloads and when production-level performance is not required.
3. Google Block Storage
Google Persistent Disk is the block storage service for Google Cloud. It offers both SSD storage for latency-sensitive workloads and HDD storage for high-throughput workloads. Persistent disks can be attached to instances running in either Google Compute Engine or Google Kubernetes Engine.
Its offerings include:
Zonal block storage: zonal persistent disk and zonal SSD persistent disk.
Regional block storage: Regional persistent disk and regional SSD persistent disk.
Local SSD: High performance transient local block-storage.
Cloud storage buckets: Affordable object storage.
4. IBM Block Storage
IBM Cloud offers iSCSI-based persistent block storage. This flash-backed storage is deployable and customizable from 25 GB to 12,000 GB of capacity, at up to 48,000 IOPS. It can be provisioned and managed independently of compute instances.
IBM Block storage offers two IOPS provisioning options:
Endurance: Designed to support a variety of application needs, it offers pre-defined performance levels and features like replication and snapshots. There are four IOPS performance tiers for different application requirements:
0.25 IOPS per GB for workloads with low I/O intensity.
2 IOPS per GB for most general-purpose usage.
4 IOPS per GB for higher-intensity workloads.
10 IOPS per GB for the most demanding workloads.
Performance: This type of block storage is designed to support high-I/O applications. It offers various IOPS rates (100 to 48,000) that can be provisioned with storage sizes ranging from 20 GB to 12 TB.
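As a rough sketch of how Endurance-style provisioning works (assuming the tier multipliers listed above and the 48,000-IOPS ceiling), total IOPS scale with provisioned capacity:

```python
# Sketch of Endurance-style IOPS provisioning, using the tiers listed above:
# total IOPS = capacity (GB) x tier multiplier (IOPS per GB), capped at 48,000.
ENDURANCE_TIERS = {"0.25": 0.25, "2": 2, "4": 4, "10": 10}  # IOPS per GB

def provisioned_iops(capacity_gb: int, tier: str, cap: int = 48_000) -> int:
    """Return the total IOPS a volume of this size and tier would deliver."""
    if tier not in ENDURANCE_TIERS:
        raise ValueError(f"unknown tier: {tier}")
    return min(int(capacity_gb * ENDURANCE_TIERS[tier]), cap)

print(provisioned_iops(1000, "2"))     # 2000
print(provisioned_iops(12_000, "10"))  # 48000 (capped)
```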
Block/Disk Storage – Overview
AWS EBS volume types: Throughput Optimized HDD, General Purpose SSD (gp2), Provisioned IOPS SSD
IOPS/GB for SSD: GP2 SSD = 3 to 10,000; Provisioned = 50 to 32
Azure managed disks: Premium = 80,000 per second; Standard SSD = up to 500 to 2,000 per disk; Ultra SSD = 100 to 16,000 per second
IBM: Endurance tiers = 0.25 to 10 IOPS/GB, available up to 48k, with a 12-TB Endurance volume
Availability: 99.99% (based on the SLA of the underlying storage used and the virtual machine to which it is attached)
Disclaimer: The information contained in this article is for general information purpose only. Product features are subject to change. This information has been sourced from the websites and relevant resources available in the public domain of the named vendors as on 20th Feb, 2020. Daily Host News makes best endeavors to ensure that the information is accurate and up to date, however, it does not warrant or guarantee that anything written here is 100% accurate, timely, or relevant to the website visitors.
“With over 560 million internet users, India is the second-largest online market in the world. It is estimated that by 2023, there would be over 650 million internet users in the country.” Statista *
The Indian population is rapidly going digital. The enormous number of internet users consuming mobile data on connected devices is expected to grow exponentially in the coming years. Internet penetration is expected to deepen further as the Government of India supports public investment in large-scale digitalization through its 'Digital India' scheme, and as telecommunications providers (telcos) transition to high-speed 4G LTE and the soon-to-come 5G wireless technologies. All this creates huge demand for data center infrastructure.
Booming cloud adoption is further driving the growth in the Indian data center market.
“The overall India public cloud services market is likely to touch $7.4 billion by 2024 growing at a CAGR of 22.2% for 2020-24,” as per IDC.
The availability of needed real estate and competitive governmental policies are helping global players like Alibaba Cloud, Microsoft, Google and Amazon open data centers on Indian soil. Local players, who have the advantage of knowing the market conditions well, are also leaving no stone unturned, with new launches, innovative offerings and solutions built on advanced technologies. Yotta launched Asia's largest Uptime Institute-certified Tier IV data center in Navi Mumbai in July 2020; Web Werks launched its fourth data center in Pune in August 2020; NxtGen is providing transformational services with DevCloud, Machine Learning (ML) and Artificial Intelligence (AI); ESDS is providing smart city solutions and AI services; and there is a lot more.
A data center is a critical component for any enterprise, hosting its crucial business data and applications. Investing in a top-notch one can make all the difference between success and failure, and it is as important as a bank, if not more so, for the secure and successful working of an organization in today's borderless world. Hence, we have summarized a list of the top ten players in the data center space in India.
Largest data center solution providers in India
Please note that it is not a comprehensive list of either data center providers or their features and product offerings. Also, the data center service providers listed below have been arranged in alphabetical order and do not represent any ranking as such.
CtrlS is an ANSI/TIA-942 certified Rated-4 data center and managed services provider headquartered in India. CtrlS operates one million square feet of data center space across seven state-of-the-art facilities, including Hyderabad, Mumbai, Noida, and Bangalore. As per CtrlS, it serves 60 of the Fortune 500 global multinationals.
CtrlS's Mumbai data center facility is a LEED v4 O+M Platinum-certified data center, certified by the United States Green Building Council (USGBC). Its Mumbai DC2 is a Rated-4 facility covered by solar panels generating 1 MW of power. Its Noida facility is a 100% quake-proof and pollution-free data center.
CtrlS has started working on its plan to expand its footprint by 5 million square feet. It has acquired land for a 2-million-square-foot hyperscale data center park in Navi Mumbai and another 2-million-square-foot hyperscale data center park in Hyderabad, and plans are in the wings for a 1-million-square-foot facility in Chennai.
Founded in 2005, ESDS is a leading managed data center service and auto-scalable cloud solution provider. ESDS is working steadfastly towards establishing a huge customer base.
It provides managed data center services, managed cloud solutions, virtualization, and disaster recovery hosting, backed with technical support.
ESDS has its presence in the following industry verticals – Banking & Finance, Manufacturing, Education, Energy & Utilities, Healthcare, eCommerce, Agriculture, IT, Entertainment & Media, Telecom, Government, and Travel & Tourism.
GPX develops and operates private, carrier-neutral, state-of-the-art Tier 4 data centers in emerging but fast-growing commercial markets within MENA and South Asia. GPX's data centers are thriving carrier-neutral internet ecosystems, home to the largest carriers, cloud service providers, content providers and internet providers.
GPX offers secure and highly reliable carrier-neutral data centers to both domestic and international clients looking to colocate their crucial business infrastructure.
Its first data center opened in Cairo in 2007. It opened its second Egyptian data center in early 2016: a 3,000 m², Tier 4 facility located in New Cairo, Egypt.
GPX opened the first Tier 4 facility in South Asia in June 2012, a 3,000 m² data center in Mumbai. Due to rising demand for high-quality facilities and service levels in India, GPX has announced its second Mumbai data center, GPX Mumbai 2: a 6,000 m², Tier 4 facility with 16 MW of total power, available from Q2 2019.
GPX’s customers include Telcos, Cloud Service Providers, Internet Service Providers, CDNs, e-businesses and enterprise clients.
Netmagic, a wholly-owned subsidiary of NTT Communications, is a leading managed hosting and multi-cloud hybrid IT solution provider with 9 carrier-neutral, state-of-the-art hyperscale and high-density data centers. It serves more than 2,000 enterprises globally.
Netmagic, headquartered in Mumbai, also provides Remote Infrastructure Management (RIM) services to various enterprise customers globally including NTT Communication’s customers across Americas, Europe and Asia-Pacific region.
Netmagic was the first in India to launch services including cloud computing, managed security, Disaster Recovery-as-a-Service (DRaaS) and software-defined storage.
NxtGen enables its customers to build their digital business without investing in and managing complex IT infrastructure, by leveraging its hyper-converged infrastructure.
Headquartered in Singapore, NxtGen, is an emerging leader providing completely managed datacenter and cloud services across India and Singapore.
NxtGen deploys and offers IT infrastructure services from on-premises resources, its own facilities (the Infinite Datacenter™), or a combination of the two, empowering its customers to adopt the latest hybrid computing model.
Nxtra Data Limited was formed to run Bharti’s business of Data Center Managed Services.
Nxtra now manages 10 Tier III (and above), ISO 27001-certified data centres at Manesar, Noida, Chennai, Mumbai, Bangalore, Bhubaneswar and Pune. In total, these facilities provide approximately 200,000 sq. ft. of floor space.
Nxtra offers an integrated portfolio of data center managed services including co-location, managed hosting, managed services, managed security, managed back-up & storage, virtual compute and cloud along with both domestic and international network connectivity.
Sify is a leading integrated ICT Solutions and Services organization in India. It offers a wide range of solutions and products that are delivered over a common telecom data network infrastructure reaching over 1550 cities and towns in India.
Sify’s telecom network presently connects 45 data centers across India, including Sify’s 6 concurrently maintainable data centers across the cities of Delhi, Mumbai, Chennai and Bengaluru.
In 1998, Sify was the first Indian ISP that helped millions experience the internet for the first time on its network. It was the pioneer of Internet café, data and voice services for international call centers.
Today it has also expanded to the United States, with headquarters in California’s Silicon Valley. It has over 8500 enterprise customers.
Tata Communications Limited, with its subsidiaries (Tata Communications), is a leading global provider of A New World of Communications™. Tata Communications utilizes its advanced solutions capabilities and domain expertise across its global network for delivering managed solutions to multi-national companies and communications service providers.
Its global network includes one of the most advanced and largest submarine cable networks and a Tier-1 IP network, with connectivity to over 240 countries and territories across 400 PoPs, as well as approximately 1 million square feet of data centre and colocation space across the globe.
Tata Communications’ reach in the emerging markets includes leadership in Indian enterprise data services and in global international voice communications.
Web Werks Data Centers, located in 3 countries with over six geographically dispersed data centers, have been among the leaders in India for the past two decades. They offer reliable hosting services on dedicated servers, cloud, colocation, virtualization and disaster recovery, along with 24×7 support and a 99.995% uptime guarantee.
Web Werks Data Centers are carbon neutral, contributing towards global go-green initiatives. Web Werks is also a SAP-certified provider of infrastructure, hosting and cloud operations services.
Web Werks in India is the first Asian data center to hold OIX-2 certification and to host an OIX-1 IXP, Mumbai-IX. They also fulfill all the requirements for being a full OpenIX supporter, and they are cloud-empaneled by the Ministry of Electronics and Information Technology, Government of India (MeitY).
Their client list includes Microsoft, Google, Godrej, Canon, TATA, Netflix, Facebook, Akamai, and more. They also have Government sector customers like Mumbai Metro Rail Corporation Limited, Maharashtra Knowledge Corporation, Nabard, Maharashtra Pollution Control Board, SIDBI and more.
Powered by the Hiranandani Group, Yotta designs, builds and operates infinitely scalable data center parks. Yotta's 50+ acres of data center parks will offer 11 data center buildings, with options ranging from a single rack to an entire building, or even a customized DC, supported by a wide range of managed services.
Yotta has a highly experienced (150+ man-years) and certified data center design team. They claim to have some of the best minds in electrical, mechanical, HVAC, automation, firefighting and physical security working with them.
Below is a list of the most important products and features of data center service providers in India.
Disclaimer: The information contained in this article is for general information purpose only. Price and product information are subject to change. This information has been sourced from the websites and relevant resources available in the public domain of the named vendors as on 22 April, 2020. Daily Host News makes best endeavors to ensure that the information is accurate and up to date, however, it does not warrant or guarantee that anything written here is 100% accurate, timely, or relevant to the website visitors.
Last Update: The article has been updated on 4th January 2021.
As more organizations adopt the cloud to leverage advantages like better scalability, more efficiency, and faster deployments, cybersecurity pros remain concerned about the security of data, systems and services. Cybersecurity teams are looking for new strategies, as traditional security tools don't fit cloud environments.
According to the Cloud Security Spotlight report, 90% of cybersecurity professionals are concerned about cloud security, up 11% from a year before.
Alert Logic and Cybersecurity Insiders surveyed the 400,000-member Information Security Community on LinkedIn to explore how organizations are responding to cloud security challenges and what their biggest cloud security challenges are.
Top cloud security risks and challenges for businesses
Here are the biggest risks and challenges that concern the businesses when it comes to cloud security:
1. Cloud data loss and leakage
18% of the respondents reported at least one security incident in the last 12 months, representing a significant rise in one year.
Protecting cloud against data loss and leakage (67%) is the biggest concern for cybersecurity pros, followed by threats to data privacy (61%), and breaches of confidentiality (53%).
2. Cloud security challenges
As per the report, the top four cloud security challenges were visibility into infrastructure security (43%), compliance (38%), setting security policies (35%), and security not keeping up with the pace of change in applications (35%).
3. Misconfigured cloud
Misconfiguration of cloud platform (62%) is the biggest threat to cloud security, followed by unauthorized access using employee credentials (55%), insecure interfaces or APIs (50%), and hacking of accounts, services or traffic (47%).
4. Security risk: Cloud vs On-premises
Almost half of the respondents said that public clouds are at higher risk of cyberattacks than traditional on-premises environments.
On the other hand, only 17% indicated that public clouds are less prone to security breaches than on-premises environments.
5. Traditional security solutions don’t work for cloud
Organizations need to understand that their traditional network security tools are not going to work when they adopt the cloud and host applications there.
The majority of organizations (84%) believed that traditional security services and tools either have limited functionality or don't work at all in cloud environments.
On the other hand, only 16% respondents believed that traditional security tools can be used to manage cloud security.
6. People and processes
The report found that the biggest barriers to cloud-based security solutions are people and processes, rather than the technology.
Respondents cited staff expertise and training (56%), data privacy concerns (41%), and lack of integration with on-premises technology (37%) as the top roadblocks to cloud-based security adoption.
Data and network encryption: Most effective security technologies
Respondents said that data encryption (64%) and network encryption (54%) are the most effective security technologies, followed by security information and event management (52%).
More than half of the organizations believed that trained cloud security professionals can also help in securing the cloud.
Advantages of cloud-based security solutions
According to the report, faster time to deployment (47%) and cost savings (47%) are the biggest advantages of cloud-based security solutions.
The other advantages of these solutions included secure access to apps from anywhere, reduced efforts to patch and upgrade software, better compliance, and better insights into user activity.
The popularity of cloud computing has increased over the last few years due to the explosive use of the internet. From competitive tech companies to start-ups, everyone wants to get into this space. According to research, about 94% of enterprises use at least one cloud service today.
As enterprises shift their infrastructure to the cloud, the need arises for cloud load balancing solutions that can handle traffic spikes. These solutions can rapidly autoscale in response to the level of demand, and they play a crucial role in system performance by evenly distributing dynamic workloads over multiple servers.
Leading cloud providers like Alibaba Cloud, Amazon Web Services (AWS), Azure, Google Cloud and IBM lead the cloud load balancer market. If you are wondering which solution you should opt for, here we list the major cloud load balancers and their features.
Alibaba's Cloud Server Load Balancer (SLB) redirects incoming traffic among various instances to balance and improve the service capabilities of applications. It can process up to millions of concurrent requests and quickly meet requirements during large demand spikes, avoiding service outages.
SLB checks the service availability of ECS instances by performing health checks, and it automatically removes unhealthy instances to avoid a single point of failure. You can also reduce the frequency of health checks by increasing the check interval, or by changing a layer-7 health check to a layer-4 health check based on the service condition.
SLB also provides URL-based routing, which redirects incoming traffic to different backend servers based on the request URL. You can likewise configure SLB across different zones of a region, so that if communication to one zone is interrupted, SLB automatically directs traffic to another zone that is working normally.
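The URL-based routing rule can be sketched as a simple prefix match against backend server pools; the pool names below are hypothetical:

```python
# Sketch of URL-based (layer-7) routing: pick a backend server group by URL
# path prefix. Pool and instance names are assumed for illustration.
ROUTES = {
    "/images": ["img-ecs-1", "img-ecs-2"],
    "/api":    ["api-ecs-1", "api-ecs-2"],
}
DEFAULT_POOL = ["web-ecs-1"]

def pick_pool(path: str):
    for prefix, pool in ROUTES.items():
        if path.startswith(prefix):
            return pool
    return DEFAULT_POOL  # anything unmatched goes to the default servers

print(pick_pool("/api/v1/users"))  # ['api-ecs-1', 'api-ecs-2']
```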
Another important feature is cross-region disaster tolerance through Global Traffic Manager (GTM): you can configure SLB instances in different regions and add ECS instances in various zones of those regions, along with a DNS service. The DNS service resolves domain names, adds the IP addresses of different regions to different address pools of the Server Load Balancer, and performs health checks. If a region becomes unavailable, domain name resolution for that region stops automatically.
The price is calculated based on the rental duration of the load balancer and the network traffic it handles.
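The GTM behavior described above amounts to DNS answers that skip unhealthy regions. A minimal sketch, with assumed region names and IP addresses:

```python
# Sketch of GTM-style DNS failover: resolve a domain to the address pool of a
# healthy region, skipping regions whose health checks fail. Names and IPs
# are illustrative.
POOLS = {
    "cn-hangzhou": ["203.0.113.10"],
    "cn-beijing":  ["203.0.113.20"],
}

def resolve(domain: str, healthy: dict):
    for region, addresses in POOLS.items():
        if healthy.get(region):
            return addresses  # answer with the first healthy region's pool
    return []                 # no healthy region: stop resolving

print(resolve("app.example.com", {"cn-hangzhou": False, "cn-beijing": True}))
```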
Amazon's Elastic Load Balancing (ELB) distributes traffic across multiple EC2 instances. The service is elastic (i.e. changeable) and fully managed, which means it can automatically scale to meet demand.
There are three types of load balancers available in AWS.
Classic Load Balancer (CLB) operates on both the request and connection levels for Layer 4 (TCP/IP) and Layer 7 (HTTP) routing. It is best for EC2 Classic instances.
Application Load Balancer (ALB) works at the request level only. It is designed to support the workloads of modern applications such as containerized applications, HTTP/2 traffic, and web sockets.
Network Load Balancer (NLB) operates at the fourth layer of the Open Systems Interconnection (OSI) model. It is capable of handling millions of requests per second.
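Whichever type you choose, the core job is spreading requests over a set of healthy instances. A minimal round-robin sketch (instance names are assumed):

```python
# Minimal round-robin sketch of what a request-level load balancer does:
# cycle through registered instances so each receives roughly equal traffic.
import itertools

instances = ["ec2-a", "ec2-b", "ec2-c"]  # assumed instance names
rr = itertools.cycle(instances)

def route_request() -> str:
    """Return the instance that should serve the next request."""
    return next(rr)

print([route_request() for _ in range(4)])  # ['ec2-a', 'ec2-b', 'ec2-c', 'ec2-a']
```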
The pricing is based on the number of deployed load balancers and the data processed per hour.
AWS also provides third-party load balancing tools, like Kemp LoadMaster and Barracuda Load Balancer, in its Marketplace for more control. There is also DNS-based load balancing, offered through Route 53, which provides various routing and load balancing services. Route 53 is also useful for health checks, sending notifications through CloudWatch.
There are three types of load balancers in Azure: Azure Load Balancer, Internal Load Balancer (ILB), and Traffic Manager. The various load balancers ensure that the traffic is sent to healthy nodes.
Microsoft's Azure Load Balancer offers a higher level of scale, with layer-4 load balancing across multiple VMs (virtual machines).
Internal Load Balancer (ILB) has an internal-facing virtual IP, meaning users can apply internal load balancing to virtual machines (VMs) that are connected only to an internal Azure cloud service or a virtual network.
Traffic Manager is an internet-facing solution that balances traffic loads across various endpoints using a policy engine and a set of DNS queries. It can route traffic to a service in any region, and even to non-Azure endpoints.
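Traffic Manager's policy engine can be approximated by a priority list consulted on each DNS query. A hedged sketch, with hypothetical endpoint names (note the non-Azure endpoint in the list):

```python
# Sketch of Traffic Manager-style priority routing over DNS: each query is
# answered with the highest-priority endpoint that is currently healthy.
# Endpoint names and priorities are assumed for illustration.
from typing import Optional

ENDPOINTS = [  # (priority, endpoint)
    (1, "app-eastus.example.net"),
    (2, "app-westeurope.example.net"),
    (3, "app-onprem.example.com"),  # non-Azure endpoint
]

def answer_dns_query(healthy: set) -> Optional[str]:
    for _, endpoint in sorted(ENDPOINTS):
        if endpoint in healthy:
            return endpoint
    return None  # no healthy endpoint to hand out

print(answer_dns_query({"app-westeurope.example.net"}))
```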
The pricing depends on the number of DNS queries received.
Azure also provides health check features, achieved by periodically polling an HTTP endpoint. Third-party load balancing tools are available in the Azure Marketplace as well.
The Google Cloud Load Balancer (GCLB) provides server-side load balancing to distribute incoming traffic to multiple virtual machine instances. It allows users to serve applications from any region and scale compute with very little configuration. It can handle from 0 to 1 million requests per second with no pre-warming.
There are three deployment types of load balancing services in Google: Global, Network and Internal. Global Load Balancing supports HTTP(S) traffic for modern web-based applications. Traffic is distributed to the region that is closest to the calling user, provided the region has available capacity.
Network Load Balancing directs traffic across virtual machine (VM) instances in the same region in a VPC network. Any TCP and UDP traffic can be load balanced on the basis of source, destination port, and protocol so that the traffic from the same connection reaches the same server.
Internal Load Balancing is a regional load balancer that distributes the internal traffic across a set of back-end instances without requiring a public IP address.
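The same-connection-same-server behavior of Network Load Balancing comes from hashing the connection's 5-tuple. A simplified sketch (backend names are assumed):

```python
# Sketch of 5-tuple connection affinity: hash (source IP, source port,
# destination IP, destination port, protocol) so every packet of one
# connection lands on the same backend VM. Backend names are illustrative.
import hashlib

BACKENDS = ["vm-1", "vm-2", "vm-3"]

def pick_backend(src_ip, src_port, dst_ip, dst_port, proto) -> str:
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return BACKENDS[digest % len(BACKENDS)]

a = pick_backend("10.0.0.5", 40000, "203.0.113.9", 443, "TCP")
b = pick_backend("10.0.0.5", 40000, "203.0.113.9", 443, "TCP")
print(a == b)  # True: the same connection always maps to the same backend
```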
The health check feature is set up within the Compute Engine UI. You can receive health-related alerts and notifications for instance groups without involving the load balancer itself.
The pricing of GCLB is based on the amount of data processed.
The IBM Cloud Load Balancer (ICLB) distributes traffic across multiple application server instances to improve application uptime and scaling with minimum disruption.
In case of interruption to an entire zone, the applications work without any problem as the load balancer instances are divided across various zones of the same region. This enhances the security and availability of any application in the IBM Cloud.
IBM supports Multi-Zone Region (MZR) which means the load balancer nodes are instantiated in two different data centers. In case of an interruption to the data center communication, the load balancer will still continue to work, as the other node is a part of a different data center.
The load balancer performs periodic health checks on the back-end ports and distributes traffic accordingly. A port's health is continuously monitored, and it is considered healthy only after it passes two consecutive health check attempts: Layer-4 checks for TCP ports and Layer-7 checks for HTTP ports.
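The two-consecutive-passes rule can be sketched as a streak counter over check results:

```python
# Sketch of the rule above: a back-end port becomes healthy only after two
# successful health checks in a row.
def mark_healthy(check_results, required: int = 2) -> bool:
    streak = 0
    for ok in check_results:
        streak = streak + 1 if ok else 0  # a failure resets the streak
        if streak >= required:
            return True
    return False

print(mark_healthy([True, False, True, True]))   # True
print(mark_healthy([True, False, True, False]))  # False
```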
The pricing of ICLB is based on service usage hours, data processed and outbound public bandwidth depending on the geographic region.
If you have anything to add to the list, let us know in the comment section.
If you are looking to migrate your database to the cloud, it is crucial that you use the right database migration service. With numerous advanced database migration service providers available in the market today, you don't have to settle for traditional tools and outdated offerings.
What is database migration?
Database migration is simply the migration of your apps, data, and other workloads from one platform/location to another. In this article, we will be discussing the database migration to the cloud.
Why migrate your database to cloud?
There are several benefits of database migration that lead enterprises to move all their workloads and applications to the cloud, chief among them better scalability, more efficiency, and faster deployments.
Cloud database migration tools comparison: Finding the best database migration assistant
1. AWS Database Migration Service (AWS DMS)
AWS Database Migration Service (AWS DMS) is one of the best tools for database migration. It can be used for migration of relational databases, data warehouses, NoSQL databases, as well as data stores of other types.
Using AWS DMS, you can migrate data to AWS, between on-premises instances, or between a combined setup of cloud and on-premises environments. To reduce the downtime of applications running on the database being migrated, AWS DMS keeps the source database operational during the migration.
The service supports both homogeneous and heterogeneous migrations. This enables consumers to migrate a database between platforms from the same vendor, as well as to platforms from other vendors.
For instance, an Oracle consumer looking to move its database to another Oracle platform can do so using AWS DMS (a homogeneous migration). If the same Oracle consumer instead wants to migrate to Microsoft SQL Server or Amazon Aurora, this too can be done using AWS DMS (a heterogeneous migration).
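The homogeneous/heterogeneous distinction boils down to comparing the source and target engines; a hypothetical helper:

```python
# Hypothetical helper classifying a migration as homogeneous (same engine)
# or heterogeneous (different engines), as in the DMS examples above.
def migration_kind(source_engine: str, target_engine: str) -> str:
    same = source_engine.strip().lower() == target_engine.strip().lower()
    return "homogeneous" if same else "heterogeneous"

print(migration_kind("oracle", "Oracle"))        # homogeneous
print(migration_kind("oracle", "aurora-mysql"))  # heterogeneous
```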
Amazon provides DMS free for six months to users migrating their database to Amazon Aurora, Amazon Redshift or Amazon DynamoDB.
For other migrations, the cost depends on the compute resources consumed during the migration, with a charge for longer-term storage of logs. Detailed pricing for Amazon Database Migration Service is available here.
2. Cloud Data Transfer: Google Cloud
Google Cloud provides a family of data transfer services, each meant for specific requirements. These services include:
Online Transfer
Transfer Appliance
Cloud Storage Transfer Service
BigQuery Data Transfer Service
The Online Transfer service can be used to migrate the data to Google Cloud Storage using a network.
Transfer Appliance, meanwhile, comes in 100 TB and 480 TB models for shipping and uploading data to Google Cloud Storage.
BigQuery Data Transfer Service is good for scheduling and automating data transfers from the SaaS (Software as a Service) applications to Google BigQuery.
Cloud Storage Transfer Service is meant for moving data from one cloud to another. It enables faster import of online data into Cloud Storage, data center migration to the cloud, and migration of data within Google Cloud Storage from one bucket to another.
Cloud Storage Transfer can also migrate databases from other cloud storage providers to Google Cloud Storage.
Summing up, Google's Cloud Data Transfer is a scalable and secure data migration offering, with simple drag-and-drop functionality and a JSON API that lets consumers migrate data using their preferred method and language.
Google Cloud charges for cloud storage on the basis of data storage, network usage, operations usage, retrieval and early deletion fees.
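Picking between these options usually comes down to data volume versus available bandwidth. A rough chooser is sketched below — the one-week cutoff and the 70% link-utilisation factor are illustrative assumptions, not official Google figures:

```python
# Sketch: a rule-of-thumb chooser between the Google transfer options
# described above. The one-week threshold and 70% utilisation are
# assumptions for illustration, not official guidance.
def transfer_days(data_tb, bandwidth_mbps, utilization=0.7):
    """Days needed to push data_tb terabytes over the network."""
    bits = data_tb * 8e12                         # decimal TB -> bits
    seconds = bits / (bandwidth_mbps * 1e6 * utilization)
    return seconds / 86400

def choose_transfer(data_tb, bandwidth_mbps, source_is_cloud=False):
    if source_is_cloud:
        return "Cloud Storage Transfer Service"   # cloud-to-cloud moves
    if transfer_days(data_tb, bandwidth_mbps) > 7:
        # Too slow over the wire: ship a 100 TB or 480 TB appliance.
        return "Transfer Appliance"
    return "Online Transfer"

print(choose_transfer(1, 1000))                  # small data, fast link
print(choose_transfer(300, 100))                 # hundreds of TB, 100 Mbps
print(choose_transfer(50, 1000, source_is_cloud=True))
```

At 100 Mbps, 300 TB would take over a year to push online, which is why the physical appliance exists at all.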
3. Azure Database Migration Service: Microsoft Azure
Azure Database Migration Service is a comprehensive, fully managed solution for migrating databases from multiple sources to the cloud. It supports various database engines, allowing users to migrate databases from on-premises environments, virtual machines (VMs), and other public clouds to Microsoft Azure.
Supported source database engines include:
Supported target database engines include:
Azure SQL Database
Azure SQL Database managed instance
Azure Database for PostgreSQL
Azure Database for MySQL
Azure Cosmos DB’s API for MongoDB
To guide users through the migration process, Azure provides the Data Migration Assistant, which generates assessment reports of the required changes and offers suggestions to users.
Azure Database Migration Service supports both offline and online migrations. Offline migrations incur downtime from the moment the migration begins. For critical workloads that can afford little or no downtime, Microsoft recommends online migration.
Standard tier: This tier supports offline migrations and is available for free, with 1-, 2-, and 4-vCore options.
Premium tier: The Premium tier of Azure DMS is billed at a predictable hourly rate based on the provisioned compute in vCores. Microsoft offers the 4-vCore Premium DMS free for six months.
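The offline/online and Standard/Premium split reduces to a small decision rule. The dictionary fields below are illustrative shorthand for the choices described above, not an Azure API:

```python
# Sketch: the Azure DMS tier choice as decision logic. The tier and
# vCore values follow the article; the dict shape is illustrative only.
def pick_dms_tier(max_downtime_minutes, workload_is_critical):
    """Offline (Standard, free) vs online (Premium, billed hourly)."""
    if workload_is_critical or max_downtime_minutes == 0:
        # Online migration keeps the source serving traffic while
        # changes are continuously synced to the target.
        return {"tier": "Premium", "mode": "online", "vcores": 4}
    # Offline migration: downtime starts when the migration starts.
    return {"tier": "Standard", "mode": "offline", "vcores": 4}

print(pick_dms_tier(0, workload_is_critical=True))
print(pick_dms_tier(120, workload_is_critical=False))
```

The trade-off is purely downtime versus cost: Standard is free but the application is down for the whole copy; Premium keeps it up and bills by the hour.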
4. IBM Lift
IBM Lift enables fast and secure database migration from on-premises data centers to IBM Cloud.
A pioneer in cloud services, IBM offers a cloud database migration solution that lets you control every step of the migration: extracting data from the source, transporting it over the wire, and loading it into the target.
To minimize downtime during migration, IBM Lift keeps the source database uninterrupted by capturing changes made to it and replaying them on the target database.
For secure database migration to the cloud, IBM Lift uses an end-to-end 256-bit encrypted connection, which helps protect confidential data as it moves over the internet.
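This capture-and-replay technique can be sketched with a toy in-memory model. Real change-data-capture tails the database's transaction log; the `upsert`/`delete` events here are simplified stand-ins for that:

```python
# Sketch: the capture-and-replay idea behind low-downtime migration,
# reduced to a toy in-memory model. "changes" is a list of
# (op, key, value) events recorded while the bulk copy runs.
def bulk_copy(source):
    return dict(source)                    # initial full load

def replay(target, changes):
    """Apply changes captured at the source onto the target."""
    for op, key, value in changes:
        if op == "upsert":
            target[key] = value
        elif op == "delete":
            target.pop(key, None)
    return target

source = {"a": 1, "b": 2}
target = bulk_copy(source)
# Writes that arrived at the still-live source during the copy:
captured = [("upsert", "c", 3), ("upsert", "a", 10), ("delete", "b", None)]
for event in captured:                     # source keeps serving writes
    replay(source, [event])
replay(target, captured)                   # catch the target up
print(target == source)                    # True: target converged
```

Once the replayed target has caught up, the application can be cut over with only a brief pause instead of an outage lasting the whole copy.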
IBM Lift is available for free.
5. Alibaba Cloud Data Transmission Service (DTS)
Alibaba Cloud's Data Transmission Service (DTS) is an easy-to-use cloud database migration tool that enables migration between multiple data storage types, including NoSQL, OLAP, and relational databases.
Cloud database migration with Alibaba Cloud works for both homogeneous and heterogeneous platforms. For instance, consumers can migrate data from MySQL to MySQL, as well as from Oracle to MySQL.
Alibaba Cloud DTS supports migrating data from on-premises databases to RDS or ECS, from databases running on ECS to RDS and vice versa, as well as from one RDS database to another.
For high availability, the solution continuously replicates all changes at the source database to the target database, which keeps the source database operational during the migration.
For data migration, Alibaba Cloud Data Transmission Service is available on a pay-as-you-go basis, starting with a configuration fee of $0.158 for small instances.
To find the best cloud database migration solution, you need to perform a well-thought-out comparison. Generally, the best solution is the one that fits your use case, budget, region, and other such factors.
Which database migration platform are you looking to use for your project?
Disclaimer: The information contained in this article is for general information purpose only. Price, product and feature information are subject to change. This information has been sourced from the websites and relevant resources available in the public domain of the named vendors on 28 November 2019. Daily Host News makes best endeavors to ensure that the information is accurate and up to date, however, it does not warrant or guarantee that anything written here is 100% accurate, timely, or relevant to the website visitors.
Asia's power landscape is set to expand in line with the fast-growing pace of digital transformation. Businesses and data centre operators will look towards renewable energy to meet the surging demand arising from growing data transmission, while avoiding a corresponding increase in environmental impact.
We interviewed Eaton's Technology Manager, Janne Paananen, for his insights on the data centre landscape and upcoming trends in Asia and Europe, challenges for existing power infrastructure, international data centre standards, green data centres and more. Read on.
1. Powering Asia’s digital economy ambitions: What are some challenges for existing power infrastructure?
One of the biggest challenges today is ensuring that existing power infrastructure can meet the growing demand for energy in Asia. It is predicted that by 2040, Asia will dominate global energy demand, driven by the region's growing Internet economy; in Southeast Asia alone, the Internet economy is likely to reach an astounding US$300 billion by 2025.
As the region moves into an internet-enabled world where ‘always-on’ is the new norm, data centres will play an increasingly important role in maintaining the uptime of critical infrastructures and systems. To meet this surging demand, businesses will need to invest in new and robust power infrastructures, or identify cost-effective ways to retrofit legacy infrastructures.
Improving universal energy access will also continue to be a challenge for many nations in Asia, where over 700 million people live without access to electricity. As a result of factors ranging from geographic inaccessibility to a lack of resources, many communities still face challenges connecting to the main grid. Exploring the use of sustainable, off-grid solutions such as solar mini-grids and microgrids will be critical to facilitating energy access for these communities.
2. How has the data centre landscape in Asia and Europe evolved? Any upcoming trends?
Globally, the data centre market has been growing exponentially, as applications such as cloud services, data analytics and greater connectivity worldwide drive the surging demand.
In recent years, there has been increased attention on Singapore and the surrounding ASEAN region as a growing data centre hub. In 2018, APAC's data centre market overtook the EMEA market, and it is expected to become the world's largest by 2021. The Singapore data centre market in particular, which holds about 50% of Southeast Asia's data centre capacity, attracted US$550 million of the US$1.37 billion in total real estate investment between 2018 and 2019. In the face of changing local regulations around data protection and privacy, as well as increasing costs, investors are also starting to look to neighbouring ASEAN countries such as Indonesia and Vietnam for new growth opportunities.
3. How do Uninterruptible Power Supply (UPS) systems ensure business continuity and environmental sustainability when powering high-energy infrastructures?
Smart energy management in data centres is critical to business continuity. For many businesses that rely on data centres, anything less than the highest degree of uptime possible is unfavourable. Depending on the industry, outages can result in profit loss, the disruption of business operations and for organisations such as hospitals, critical life services. Uninterruptible Power Supply (UPS) systems ensure that a steady, reliable power supply keeps critical, high-energy infrastructures running.
Advancements in UPS technology are making Asia's transition to greater use of renewables a more achievable goal for many businesses. For instance, Eaton's EnergyAware UPS technology equips the UPS with energy management capabilities, allowing it to provide a continuous power supply to infrastructure and to balance the power system in the event of an outage. More importantly, it can detect and stabilise fluctuations in power grids caused by the intermittent nature of renewable energy. Such technology is especially critical as the world seeks to increase the proportion of renewables in the energy mix.
4. What is the appetite for green data centres in Asia (in terms of growth potential and resilience in meeting the needs of the digital economy)?
According to IDC, organisations in the region are predicted to spend a staggering US$375.8 billion on digital transformation this year. To meet the surging energy demand that accompanies this growth in data transmission, businesses and data centre operators will increasingly turn to renewable energy, avoiding a corresponding rise in environmental impact.
While Asia remains heavily reliant on coal as a primary energy source, investment in renewables has been encouraging. In 2017, Asia accounted for almost two-thirds of the global increase in renewable energy generating capacity, which will likely have a spillover effect on data centres. Some of the first steps have been taken in Singapore, where the government has established a Green Data Centre Standard to help organisations put in place systems and processes that improve data centre energy efficiency; the standard undergoes periodic revisions to remain relevant.
As energy expenditures from running a data centre continue to increase, businesses will seek to improve energy efficiency and reduce operating costs. Demand for green data centres in Asia is therefore set to grow. This can be supported with cheaper renewable energy and improved power management tools, such as monitoring software and UPS technology to help data centres enhance efficiency and adopt more renewable energy, without compromising on power quality.
5. What are your thoughts on international standards for data centres?
International standards for data centre energy efficiency and green certifications are important in the global push towards sustainable goals, given their roles in safeguarding against energy wastage. Due to the wide range of data centre designs, it is challenging to create a common standard that can cater to different purposes, sizes, forms and environments of data centres. After all, the optimal technical solution for one data centre may not apply to others.
Nevertheless, outlining these standards and Key Performance Indicators (KPI) for data centres can help to establish best practices for the industry. Governmental regulatory bodies, businesses and data centre operators can work together to determine which standards are appropriate, and comply with them in order to work towards shared environmental sustainability goals.
Morpheus Data has rolled out the new version of its Morpheus cloud management platform, to provide customers more freedom and faster deployment in multi-cloud and container automation.
Morpheus is an enterprise-grade software platform that provides a systematic approach to cloud optimization, multi-cloud governance, DevOps automation, and application modernization. This year, it was recognized as a leader in Gartner's Magic Quadrant for Cloud Management Platforms.
The latest version, Morpheus v4, will bring several new capabilities like integration with Kubernetes, simplified and secure Ansible, and more.
“Enterprises want agility, but skill gaps and technology silos are standing in the way,” says Brad Parks, VP of Business Development, Morpheus Data. “Morpheus accelerates business transformation by bringing together VM automation, multi-cloud management, and Kubernetes service delivery in a single unified platform built for Dev and Ops.”
The platform will include an embedded, fully managed Morpheus Kubernetes Service, which will make it simpler to build, manage, and use Kubernetes. Enterprises will also be able to deploy customized stacks and access Kubernetes services from Amazon Web Services (AWS), Microsoft Azure, and Google Cloud.
Addition of Kubernetes will enable enterprises to standardize and automate the provisioning of application stacks on bare metal servers, VMs, and Kubernetes clusters, the company said.
The Kubernetes Service has been validated by CNCF, which assures customers that the platform is interoperable.
The cloud management platform already integrates with Ansible and Ansible Tower. The latest version brings in more native capabilities that remove the need for Tower.
Enterprises can now configure Ansible to run over the Morpheus secure agent communication bus, enabling them to apply playbooks to instances where SSH/WinRM access might not be feasible. The new approach supports both Windows and Linux.