
NoSQL Database Comparison – Alibaba Cloud, AWS, Google Cloud, IBM and Microsoft

Data is everywhere around us, and we interact with it constantly. Whether you are checking out the latest smartphone model or buying groceries online, you are interacting with data in one way or another. We have been dealing with data for ages; what has changed now is the scale at which data is produced and the speed at which it is accessed.

Thanks to digital technologies like cloud, IoT (Internet of Things), AI (Artificial Intelligence), machine learning, and more, companies are producing data at an exponential rate. The volume of data now collected around the globe is too large for traditional single-server systems to process efficiently.

According to a report, “People are generating 2.5 quintillion bytes of data each day.”

Here, traditional relational (SQL) databases might not offer the scalability and performance required to process such large amounts of data. Though relational databases are still relevant, alternative databases like NoSQL come with their own set of advantages.

This article will help you understand what a NoSQL database is, its advantages, and how the leading cloud providers’ NoSQL services compare.

.What is a NoSQL database?

NoSQL stands for ‘not only SQL’. Big infrastructure providers like Google, Amazon, and Facebook recognized the scalability limitations of SQL databases and introduced their own alternatives, Bigtable, DynamoDB, and Cassandra, respectively, to meet changing demands.

NoSQL is an approach to database design that can accommodate a wide variety of data models, including key-value, columnar, document, and graph formats. It offers improved scalability, performance, and availability compared to schema-based traditional relational databases, typically by relaxing strict consistency.

NoSQL databases are purpose-built to work with large sets of distributed data. The term mostly refers to databases built from the early 2000s onwards for large-scale clustering of data produced by cloud and web applications.

The need for performance and scalability in cloud-based web applications outweighs the rigid data consistency that traditional relational database management systems provide.

A NoSQL database is a form of unstructured storage. NoSQL databases do not have a fixed table structure, an important trait that differentiates them from common relational databases.

  • NoSQL databases have a flexible schema. There can be different rows having different attributes or structure.
  • They work on a BASE model – Basically Available, Soft state, Eventual consistency.
  • In NoSQL, queries may not always see the latest data; consistency is reached eventually rather than guaranteed on every read.
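
The BASE trade-off above can be made concrete with a toy simulation (plain Python, not any real database): writes are acknowledged after reaching a single replica, other replicas catch up only when a sync runs, so a read from a lagging replica may briefly return stale data.

```python
# Toy model of eventual consistency: a write lands on one replica first;
# the others converge only when sync() (anti-entropy) runs.

class Replica:
    def __init__(self):
        self.data = {}

class EventuallyConsistentStore:
    def __init__(self, n_replicas=3):
        self.replicas = [Replica() for _ in range(n_replicas)]

    def write(self, key, value):
        # Acknowledged after reaching a single replica ("soft state").
        self.replicas[0].data[key] = value

    def read(self, key, replica=0):
        # Reads may be served by any replica, including a stale one.
        return self.replicas[replica].data.get(key)

    def sync(self):
        # Propagate replica 0's state to the others ("eventual consistency").
        for r in self.replicas[1:]:
            r.data.update(self.replicas[0].data)

store = EventuallyConsistentStore()
store.write("user:1", {"name": "Ada"})
print(store.read("user:1", replica=1))  # None: replica 1 has not converged yet
store.sync()
print(store.read("user:1", replica=1))  # {'name': 'Ada'}: consistent after sync
```

The same pattern underlies the real systems discussed below, only with sync happening continuously in the background.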

.The advantages of NoSQL databases

  • NoSQL databases have a simpler structure without a schema and are flexible.
  • They are often based on key-value pairs: records are stored and retrieved using a key that is unique to each record.
  • NoSQL databases can also use column-store, key-value, graph, document-store, object-store, and other popular data models, making them multi-purpose.
  • Open-source NoSQL databases don’t require any expensive licensing fees.
  • They are easily scalable whether you are using an open-source or a proprietary solution. This is because the NoSQL databases can scale horizontally to distribute the load on all nodes. In SQL, this is done by replacing the main host with a higher capacity one, i.e. via vertical scaling.
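
The horizontal-scaling point can be sketched in a few lines (plain Python, not tied to any particular product): the record’s unique key is hashed to pick a shard, so adding nodes spreads the load across all of them instead of requiring a bigger single host.

```python
import hashlib

# Minimal hash-sharded key-value store: each record is routed to a node
# by hashing its unique key; this is the essence of horizontal scaling.

class ShardedKVStore:
    def __init__(self, n_shards=4):
        self.shards = [{} for _ in range(n_shards)]

    def _shard_for(self, key):
        # Stable hash so the same key always maps to the same shard.
        digest = hashlib.sha256(key.encode()).hexdigest()
        return int(digest, 16) % len(self.shards)

    def put(self, key, value):
        self.shards[self._shard_for(key)][key] = value

    def get(self, key):
        return self.shards[self._shard_for(key)].get(key)

store = ShardedKVStore()
for i in range(100):
    store.put(f"user:{i}", {"id": i})

print(store.get("user:42"))            # {'id': 42}
print([len(s) for s in store.shards])  # keys spread roughly evenly across shards
```

Real systems add replication and shard rebalancing on top, but the routing idea is the same.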

Now that you know what NoSQL is and what its advantages are, it is time to look at some of the top NoSQL database solutions offered by leading service providers like AWS, Google Cloud, IBM, Alibaba Cloud, and Microsoft. We will also take a look at a NoSQL database comparison table to understand the key differences.

.Comparison between NoSQL database solutions

1. Alibaba Cloud Tablestore

Tablestore is a fully managed NoSQL cloud database service offered by Alibaba Cloud. It can store large amounts of structured and semi-structured data using a variety of data models. Users can query and analyse data in Tablestore, and migrate heterogeneous data to it without interruption. With elastic resources and pay-as-you-go billing, it is an efficient and low-cost database management system. It offers high consistency and service availability with globally spread data centres. Furthermore, its distributed architecture and single-table auto-scaling make it highly elastic.

  • It is a fully managed database service. Users can focus on business research and development instead of worrying about hardware and software setup, faults, configurations, security, and so on.
  • With in-built shards and load balancing, Tablestore can automatically adjust the size of partitions, allowing users to store more data.
  • It creates multiple backups of data and stores them in different server racks.
  • It maintains consistency across the three backups, so applications can read data quickly.
Source: Alibaba Cloud
2. Amazon DynamoDB

Amazon DynamoDB is a fast and flexible NoSQL database service that can deliver single-digit millisecond performance at any scale. It is a key-value and document database that is multi-region, fully managed, and durable.

The multi-master database comes with built-in security, backup, restore, and in-memory caching for internet-scale applications.

  • DynamoDB is built to support the world’s largest-scale applications.
  • Users can build applications with virtually unlimited throughput and storage.
  • The data stored in the database is replicated across multiple AWS regions. This allows local access to data for globally distributed applications.
  • Another great advantage of DynamoDB is that it is serverless. Users have no servers to manage or provision.
  • The database is designed to scale up and down automatically to match capacity requirements.
  • It also supports ACID (Atomicity, Consistency, Isolation, and Durability) transactions – making it an enterprise-ready database.
Reference Architecture of a Weather Application, Source: Amazon Web Services
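DynamoDB addresses items by a partition key plus an optional sort key. The sketch below imitates that access pattern in memory (plain Python, not the AWS SDK; the table and key names are made up) to show why key design drives query patterns in a key-value/document store.

```python
# In-memory imitation of DynamoDB's partition-key + sort-key model
# (illustrative only; the real service is accessed via the AWS SDK).

class MiniTable:
    def __init__(self):
        self.partitions = {}  # partition key -> {sort key: item}

    def put_item(self, pk, sk, item):
        self.partitions.setdefault(pk, {})[sk] = item

    def get_item(self, pk, sk):
        return self.partitions.get(pk, {}).get(sk)

    def query(self, pk, sk_prefix=""):
        # A query reads one partition, optionally filtering on a
        # sort-key prefix (begins_with in the real query language).
        items = self.partitions.get(pk, {})
        return [v for k, v in sorted(items.items()) if k.startswith(sk_prefix)]

orders = MiniTable()
orders.put_item("USER#1", "ORDER#2020-01-05", {"total": 30})
orders.put_item("USER#1", "ORDER#2020-02-11", {"total": 75})
orders.put_item("USER#2", "ORDER#2020-02-12", {"total": 12})

print(orders.query("USER#1", "ORDER#2020-02"))  # [{'total': 75}]
```

Because all of a user’s orders share one partition key, fetching “user 1’s February orders” touches a single partition, which is what makes single-digit-millisecond reads possible at scale.
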
3. Azure Cosmos DB

Azure Cosmos DB is a globally distributed, multi-model NoSQL database service from Microsoft. It allows users to elastically and independently scale workloads with a click of a button.

Users can also take advantage of fast, single-digit-millisecond data access with the help of APIs like Cassandra, SQL, MongoDB, Gremlin or Tables. It provides comprehensive service level agreements (SLAs) for latency, throughput, availability, and consistency guarantees.

  • With globally spread Azure regions, users can build highly responsive and highly available applications worldwide.
  • It provides 99.999% availability for both read and write operations, and is deeply integrated with Azure infrastructure, including transparent multi-master replication.
  • It offers unprecedented elastic scalability through transparent horizontal partitioning and multi-master replication.
  • Users do not need to deal with index or schema management, as the database engine is fully schema-agnostic.
Source: Microsoft
4. Google Cloud Bigtable

Cloud Bigtable is a fully managed, scalable NoSQL database service from Google Cloud. It is best suited for large analytical and operational workloads and lets users store terabytes or even petabytes of data. It is ideal for storing large amounts of single-keyed data with very low latency. Cloud Bigtable stores data in scalable tables composed of rows and columns; each row describes a single entity and is indexed by a single row key.

Data stored inside the Cloud Bigtable database is secured: access is controlled by the Google Cloud project and Identity and Access Management (IAM) roles. It also allows users to save a copy of schemas and data as backups.

  • Cloud Bigtable database is designed to scale in direct proportion to the number of machines in a cluster.
  • It can handle upgrades and restarts automatically.
  • Users can also increase the size of a Cloud Bigtable cluster for a few hours to manage any large loads.

It is ideal for time-series data, marketing data, financial data, Internet of Things (IoT) data, and graph data.

IoT use case reference architecture, Source: Google Cloud
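Since each Bigtable row is indexed by a single row key, modelling time-series data comes down to row-key design. One common pattern, sketched below in plain Python (independent of the actual client library; the sensor names and the timestamp bound are assumptions), prefixes the key with the entity ID and appends a reversed, zero-padded timestamp so the newest readings sort first.

```python
# Row-key design sketch for single-keyed time-series data:
# "<sensor_id>#<reversed_timestamp>" makes the newest rows sort first
# under a plain lexicographic key scan, as in a wide-column store.

MAX_TS = 10**10  # assumed upper bound on Unix timestamps, for padding

def row_key(sensor_id, unix_ts):
    reversed_ts = MAX_TS - unix_ts  # later timestamps become smaller keys
    return f"{sensor_id}#{reversed_ts:010d}"

rows = {}
rows[row_key("sensor-7", 1599200000)] = {"temp": 21.5}
rows[row_key("sensor-7", 1599203600)] = {"temp": 22.1}

# Scanning keys in lexicographic order returns the newest reading first.
latest_key = sorted(rows)[0]
print(rows[latest_key])  # {'temp': 22.1}
```

Keeping one entity’s readings under a shared key prefix means a “latest N readings for sensor X” query is a short contiguous scan rather than a full-table search.
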
5. IBM Cloudant

IBM Cloudant is a fully managed database service designed for hybrid multi-cloud applications. It is built on open-source Apache CouchDB and has a fully compatible API that allows data syncing to any cloud or the edge.

It is a distributed database service that can handle heavy workloads of large, fast-growing web and mobile apps. It is available as an SLA-backed and fully managed IBM Cloud service. Users can also download the service for on-premises installation.

  • It allows users to instantly deploy an instance, create a database, and independently scale.
  • It is ISO 27001, SOC 2 Type 2 compliant and HIPAA ready.
  • With 55+ data centres across the world and globally spread IBM Cloud regions, users can seamlessly distribute data across zones, regions, and cloud providers.
  • The service is compatible with Apache CouchDB, enabling users to access a wide variety of language libraries and rapidly build new applications. Thus, the service offers zero vendor lock-in.
AI use case, Source: IBM
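
Cloudant inherits CouchDB’s revision-based concurrency control: every document carries a `_rev` token, and an update is rejected as a conflict unless it supplies the document’s current revision. The sketch below imitates that check in memory (plain Python, not the Cloudant API; the revision format here is simplified).

```python
import uuid

# In-memory imitation of the CouchDB/Cloudant _rev check: an update must
# present the document's current revision or it is rejected as a conflict.

class MiniCouch:
    def __init__(self):
        self.docs = {}

    def put(self, doc_id, doc, rev=None):
        current = self.docs.get(doc_id)
        if current is not None and current["_rev"] != rev:
            # The real HTTP API would answer 409 Conflict here.
            raise ValueError("conflict: stale _rev")
        new_rev = uuid.uuid4().hex  # simplified; real revs look like "2-<hash>"
        self.docs[doc_id] = {**doc, "_rev": new_rev}
        return new_rev

db = MiniCouch()
rev1 = db.put("doc1", {"status": "draft"})
rev2 = db.put("doc1", {"status": "final"}, rev=rev1)

try:
    db.put("doc1", {"status": "stale write"}, rev=rev1)  # old revision
except ValueError as e:
    print(e)  # conflict: stale _rev
```

This optimistic check is what lets Cloudant sync data between cloud and edge replicas without locking.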

Note: The services mentioned above in the NoSQL database comparison have been listed in alphabetical order.

.Tabular Comparison

NoSQL Database Comparison: DynamoDB Vs Bigtable Vs Cloudant Vs Tablestore Vs Azure CosmosDB

Developed By: DynamoDB: Amazon Web Services (AWS); Cloud Bigtable: Google; Cloudant: IBM; Tablestore: Alibaba Cloud; Azure Cosmos DB: Microsoft

Primary Database Model
  • DynamoDB: Document store, Key-value store
  • Cloud Bigtable: Wide column store
  • Cloudant: Document store
  • Tablestore: Wide column store
  • Azure Cosmos DB: Document store, Graph store, Key-value store, Wide column store

Initial Release: DynamoDB: 2012; Cloud Bigtable: 2015; Cloudant: 2010; Tablestore: 2016; Azure Cosmos DB: 2017

License: Commercial (all five)

Cloud-based: Yes (all five)

Data Schema: Schema-free (all five)

Server OS: Hosted (all five)

Supported Programming Languages*
  • DynamoDB: .NET, ColdFusion, Erlang, Groovy, Java, JavaScript, Perl, PHP, Python, Ruby
  • Cloud Bigtable: C#, C++, Go, Java, JavaScript (Node.js), Python
  • Cloudant: C#, Java, JavaScript, Objective-C, Ruby
  • Tablestore: .NET, Java, PHP, Python
  • Azure Cosmos DB: C#, Java, JavaScript (Node.js), Python, MongoDB client drivers

Consistency
  • DynamoDB: Eventual consistency, Immediate consistency
  • Cloud Bigtable: Immediate consistency (for single clusters), Eventual consistency (for two or more replicated clusters)
  • Cloudant: Eventual consistency
  • Tablestore: Immediate consistency
  • Azure Cosmos DB: Bounded staleness, Consistent prefix, Eventual consistency, Immediate consistency, Session consistency

Durability: Yes (all five)

Partitioning Method: Sharding (all five)

Use Cases*
  • DynamoDB: Ad tech, Gaming, Retail, Banking and finance, Media and entertainment, Software and internet
  • Cloud Bigtable: Financial analysis, Internet of Things (IoT), Ad tech
  • Cloudant: Web and mobile apps, AI solutions, IoT apps
  • Tablestore: Social IM, Gaming, Finance, IoT, Logistics
  • Azure Cosmos DB: Mission-critical applications, Real-time retail services, Real-time IoT device telemetry

Supported Data Types*
  • DynamoDB: Scalar (Number, String, Binary, Boolean, Null); Multi-valued (String set, Number set, Binary set); Document (List, Map)
  • Cloud Bigtable: Treats all data as raw byte strings for most purposes
  • Cloudant: NA
  • Tablestore: String, Integer, Double, Boolean, Binary
  • Azure Cosmos DB: NA

Latency
  • DynamoDB: Microsecond latency with DynamoDB Accelerator (DAX)
  • Cloud Bigtable: Consistent sub-10 ms latency
  • Cloudant: NA
  • Tablestore: Low latency
  • Azure Cosmos DB: Read latency under 10 ms at the 99th percentile for all consistency levels

Replication: DynamoDB: Automated global replication; Cloud Bigtable: Yes; Cloudant: NA; Tablestore: NA; Azure Cosmos DB: Transparent multi-master replication

Triggers: DynamoDB: Yes; Cloud Bigtable: No; Cloudant: Yes; Tablestore: No; Azure Cosmos DB: JavaScript

Support for ACID Transactions: DynamoDB: Yes; Cloud Bigtable: Atomic single-row operations; Cloudant: No; Tablestore: Atomic single-row operations; Azure Cosmos DB: Yes

Data Encryption: DynamoDB: Yes; Cloud Bigtable: Yes (data encrypted at rest); Cloudant: Yes (data encrypted at rest); Tablestore: NA; Azure Cosmos DB: Yes (data encrypted at rest)

Backup and Restore: DynamoDB: On-demand backup and restore; Cloud Bigtable: Available; Cloudant: CouchBackup for snapshot backup and restore; Tablestore: Custom backup and restoration; Azure Cosmos DB: Automatic and online backups

MapReduce: DynamoDB: No; Cloud Bigtable: Yes; Cloudant: Yes; Tablestore: No; Azure Cosmos DB: Yes (via Hadoop integration)

Points marked with an asterisk (*) are inclusive, not exhaustive, lists in the above NoSQL database comparison table.

Suggested Reading: Relational Database Comparison – Alibaba, Amazon, Google, IBM and Microsoft

.Picking the right NoSQL database service – tips

NoSQL database services include a wide and comprehensive set of feature-rich solutions to help you build better applications. However, you should not pick a database just because it offers a lot of features. You need to decide what your application and business needs are. Also, you need to consider factors like vendor lock-in to avoid being stuck with a single service provider.

While all the major NoSQL databases we discussed are popular and enterprise-ready, here are a few things you might want to consider when picking a NoSQL service:

  • Define your database goals: Whether you want to store data as a record; build interactive applications requiring real-time data processing; store data for a backend customer application, etc.
  • Consider the security of data: When you trust a service provider with your data, you must ensure that your data is not compromised and is safe and readily available when required.
  • Consider latency: Latency defines the time taken for a web application to respond to a user’s query. For customer-facing applications, you should consider a database that offers the lowest latency.
  • Consider the hosting choice: You can go for either a self-hosted or a managed database service. Again, it depends on your application requirements. For complex and mission-critical applications, managed services come in handy.

We hope our NoSQL Database comparison will help you make the right choice.

Feel free to share your queries and feedback through the comment section below.

Disclaimer: The information contained in this article is for general information purpose only. Price, product and feature information are subject to change. This information has been sourced from the websites and relevant resources available in the public domain of the named vendors on 4th September 2020. DHN makes best endeavors to ensure that the information is accurate and up to date, however, it does not warrant or guarantee that anything written here is 100% accurate, timely, or relevant to the website visitors.



Mission Secures $15 Million in Additional Funding from Great Hill Partners to Fuel Organic Growth Through 2020

Achieving unmatched success creating additional customer opportunities with AWS in the U.S. in 2019, Mission will put new capital towards expanding its team, technology, systems, and go-to-market strategies.

Mission, a managed services and consulting company for Amazon Web Services (AWS), today announced it has closed an additional $15 million of equity funding from Great Hill Partners, a leading growth-oriented private equity firm. Mission will use this capital to further accelerate the AWS managed service provider’s organic growth through 2020.

The fresh round continues Great Hill Partners’ investment in Mission; the Boston-based firm initially committed up to $75 million to company founder and CEO Simon Anderson to launch Mission in 2017. To date, approximately $40 million of that capital has been invested to establish Mission’s current standing as a trusted, award-winning provider of AWS managed services and consulting. Mission has rapidly built a coast-to-coast geographic presence, a leadership team made up of cloud industry veterans, and a customer base of more than 200 businesses across a wide range of industries and AWS use cases.

This year, Mission has established itself as a top partner to AWS in its U.S. Territory segment – measured by the shared customer growth opportunities Mission has created with AWS. With hubs in Los Angeles, San Francisco, Boston, and New York, Mission has fully staffed teams of AWS solutions architects and sales executives in three major AWS customer geographies: Southern California, Northern California, and the Northeast. Mission is also currently adding a new hub in Chicago to best serve the AWS U.S. Central region.

“Mission continues to prove itself as an exceptionally capable provider of managed cloud services,” said Drew Loucks, Principal, Great Hill Partners. “The company has really made its mark in a rapidly growing market, with enterprises across verticals eager to realize the tremendous benefits of migrating to the cloud and harnessing all that AWS has to offer. Mission has also been effective at bringing in the right expertise required to make their customer engagements so successful, and our additional investment will ensure accelerated growth throughout this year and next.”

With the new capital, Mission will build on its foundation as a customer-focused cloud transformation business, continuing to invest in and expand its team of expert cloud consultants and solutions architects that now hold nearly 100 AWS certifications. Mission will also invest in the technologies, systems, and go-to-market strategies that best align the provider with overall AWS growth and customer demands. Beyond this $15 million round, Great Hill Partners and other limited partners have additional capital available for further investment in Mission, as needed, to fund acquisitions and continue to reinforce Mission’s staff of AWS experts.

“Mission continues to experience strong momentum through 2018 and into 2019,” said Stewart Armstrong, CFO, Mission. “We remain disruptive in the market as we strive to continue to deliver unique value to our customers. The Great Hill team continues to be a valuable partner for Mission, and I’m excited to put this additional funding to work as we expand and optimize our go-to-market strategies and accelerate the deployment of new, world-class technologies and services to our customers. We remain focused on helping more organizations achieve their cloud goals, no matter how challenging, with our expertly-orchestrated AWS management and optimization.”

ALSO READ: AWS is the most preferred cloud platform among Python developers: Survey


Amazon invests heavily in India data center to meet data localization norms

Amazon is increasing its investment in data center infrastructure in India to more effectively meet the data localization policies of the country.

The Government of India is adopting stronger data protection laws for IT and e-commerce companies that operate in the country. Last year, the government made it mandatory for all companies to store the financial data of Indian users within India.

The aim of data localization is to protect the data and information of citizens against identity thefts, data breaches, etc.

Amazon has its data center in Mumbai with two availability zones. This is an AWS Asia Pacific region that has been developed to meet compliance standards and offer high levels of security to AWS customers.

The company has invested around Rs 1,380 crore (around $198 million) into its data services arm, as per documents from Paper.vc. This data center delivers cloud computing services for data storage, hosting, and data protection.

Along with its e-commerce business, the company also operates Amazon Data Services India and Amazon Internet Services.

The recent investment in the data center in India will help Amazon to establish a stronger presence in the country and compete effectively against its cloud rivals like Microsoft and Alibaba Cloud.

“My bet is on Amazon seeing a massive opportunity that only it can effectively leverage with the new data localizations norms in India. One can expect to see new data center locations coming up alongside India-specific localization solutions,” said Vivek Durai, founder of Paper.vc.

Also read: Top 10 best data center service providers in India 2019

Earlier this year, Oracle also announced its first data center in India, claiming that India is its sixth-biggest country in terms of revenue.


Red Hat collaborates with public cloud giants to launch Kubernetes marketplace

Red Hat has collaborated with Amazon Web Services (AWS), Google Cloud, and Microsoft to launch a common public registry for Kubernetes-native services.

Called OperatorHub.io, the new registry will work as a Kubernetes marketplace to help users find and publish services backed by Kubernetes Operators.

Kubernetes is one of the most-used container orchestration tools, witnessing the highest growth this year. However, some enterprises and developers still face challenges in adopting this tool.

The OperatorHub.io will address these challenges and make it easier for everyone to use Kubernetes.

Red Hat has chosen the Operator Framework for the new Kubernetes repository. The Operator Framework is an open source toolkit that provides a software development kit (SDK), lifecycle management, metering and monitoring capabilities. It allows developers to build, test and publish Operators.

Operators are a way to package, deploy, and manage a Kubernetes-native application. Red Hat said that an Operator can automate the routine and complex tasks required to run an application on Kubernetes.

“Use of Kubernetes Operators is growing both inside Microsoft and amongst our customers, and we look forward to working with Red Hat and the broader community on this important technology,” said Gabe Monroy, Lead Program Manager, Containers, Microsoft Azure.

Simply put, OperatorHub.io is the marketplace, and Operators are the tools available in it.

As of now, there are 12 Operators available in OperatorHub.io. These Operators include Amazon Web Services Operator, Couchbase Autonomous Operator, CrunchyData’s PostgreSQL, etcd Operator, Jaeger Operator for Kubernetes, Kubernetes Federation Operator, MongoDB Enterprise Operator, Percona MySQL Operator, PlanetScale’s Vitess Operator, Prometheus Operator, and Redis Operator.

Also read: IBM marks one of the most significant tech acquisition, buys Red Hat for $34 billion

“At Google Cloud, we have invested in building and qualifying community developed operators, and are excited to see more than 40 percent of Google Kubernetes Engine (GKE) clusters running stateful applications today. Operators play an important role in enabling lifecycle management of stateful applications on Kubernetes,” said Aparna Sinha, Group Product Manager, Google Cloud.

“The creation of OperatorHub.io provides a centralized repository that helps users and the community to organize around Operators. We look forward to seeing growth and adoption of OperatorHub.io as an extension of the Kubernetes community.”


Veeam expands partnerships with AWS, Azure and IBM Cloud

After landing a $500 million investment from Insight Venture Partners, the leading data management firm Veeam has announced a number of new capabilities for the latest version of Veeam Availability Suite.

Veeam Availability Suite 9.5 Update 4, the latest version, will provide customers with lower-cost data retention, easier cloud migration and data mobility, and cloud-native backup and protection for AWS. The solution will also offer portable cloud-ready licensing, along with increased security and data governance. Service providers will now find it easier to deliver Veeam-powered services to customers.

These new capabilities will also be coming to the upcoming Veeam Availability for AWS and Veeam Availability Console v3. The company has expanded its relationships with Amazon Web Services (AWS), Microsoft Azure, IBM Cloud, and over 20,000 service providers.

“Veeam was born and has dominated the backup and data management market in the modern highly virtualized on-premises environments. In the last few years, we have continued our tradition of innovation and evolved to become the leader in Cloud Data Management,” said Ratmir Timashev, co-founder and Executive Vice President of Sales & Marketing at Veeam.

Veeam Availability Platform will now leverage Veeam Availability Suite 9.5 Update 4, Veeam Availability for AWS, Veeam Instance Licensing (VIL), and Veeam Availability Console v3. These capabilities will provide customers with agility, availability, and business acceleration.

“Our latest version of Veeam Availability Platform is one of our most important and anticipated releases to date, providing simple, flexible and reliable solution to help customers migrate to and keep data available in the hybrid cloud regardless of its location,” added Timashev.

The updated Veeam Availability Suite will include cloud tier, cloud mobility, enhanced Veeam DataLabs, intelligent diagnostics, and enhanced Veeam Cloud Connect Replication for Service Providers (VCC-R).

“Today’s announcements reinforce our position as a market leader in Cloud Data Management by expanding our strong relationships with AWS, Microsoft Azure, IBM Cloud, and over 20,000 service providers. We are recognized as one of Forbes 2018 World’s Best 100 Cloud Companies and have been previously recognized twice as Microsoft ISV Partner of the Year – clear proof points of leadership in cloud data management – and this latest launch strengthens our position,” concluded Timashev.

Also read: Veeam makes its hyper-availability solutions available with Lenovo SDI and SAN offerings

The company mentioned that the new and enhanced offerings will further Veeam’s strategy of providing simple, flexible, and reliable solutions to customers.


Amazon Web Services unveils fully-managed and centralized backup service

With a new backup service, Amazon Web Services (AWS) is making it easier and faster for enterprises to back up their data across AWS services and on-premises.

Called AWS Backup, it is a fully-managed and centralized backup service that will help enterprises to easily meet regulatory backup compliance requirements.

Today, organizations are increasingly shifting their applications to cloud. The data is becoming distributed across distinct services, like databases, block storage, object storage, and file systems.

AWS Backup will allow enterprises to configure and audit the resources they backup using a single service. Whether the data is distributed in storage volumes, databases, or file systems, the new service will allow enterprises to audit and configure everything from a single place.

It will also automate backup scheduling, allow users to set retention policies, and monitor recent backups and restores in one place, AWS said.

“As the cloud has become the default choice for customers of all sizes, it has attracted two distinct types of builders. Some are tinkerers who want to tweak and fine tune the full range of AWS services into a desired architecture, and other builders are drawn to the same breadth and depth of functionality in AWS, but are willing to trade some of the service granularity to start at a higher abstraction layer, so they can build even faster,” said Bill Vass, VP of Storage, Automation, and Management Services, AWS.

“We designed AWS Backup for this second type of builder who has told us that they want one place to go for backups versus having to do it across multiple, individual services. Today, we are proud to make AWS Backup available with support for block storage volumes, databases, and file systems, and over time, we plan to support additional AWS services.”

Also read: Amazon reportedly acquiring CloudEndure for $250 million

AWS has integrated the new service with Amazon DynamoDB, Amazon Elastic Block Store (Amazon EBS), Amazon Elastic File System (Amazon EFS), Amazon Relational Database Service (Amazon RDS), and AWS Storage Gateway. The public cloud giant is planning to integrate more services in future.


Amazon reportedly acquiring CloudEndure for $250 million

Amazon is reportedly acquiring the cloud computing company CloudEndure in a deal worth $250 million.

As per the sources, the deal has already happened, but both the companies are yet to comment on the confirmation of the deal.

Based in Israel, CloudEndure provides disaster recovery, continuous backup, and live migration services from physical, virtual, and cloud-based sources to Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), VMware, Oracle Cloud, and OpenStack.

Since its launch in 2012, CloudEndure has raised over $18 million from Dell EMC, VMware, Mitsui, Infosys, and Magma Venture Partners.

If the acquisition goes through, one thing to watch is whether the public cloud giant will continue to support the other cloud providers on CloudEndure.

Adding CloudEndure to AWS’s data protection and backup services could play a significant role for the company, although it is not yet clear what Amazon plans to do with the disaster recovery startup.

It seems that AWS is focusing on security and compliance services. Last year, the company acquired the Cambridge-based cybersecurity startup Sqrrl to bolster its public cloud security. Sqrrl provides a threat hunting platform that utilizes link analysis, machine learning, and multi-petabyte scalability, helping detect advanced threats faster.

Also read: AWS brings VMware Cloud on AWS in customer’s own data center with Outposts

According to the sources cited by Globes, the deal is expected to close in the coming days.


Cisco and AWS launch Kubernetes-powered hybrid cloud solution

Cisco has announced a new hybrid cloud platform that will make it easier for enterprises to run new containerized applications across all environments. The new platform will be powered by Kubernetes, and has been built for Amazon Web Services (AWS).

Called Cisco Hybrid Solution for Kubernetes on AWS, the new solution integrates Cisco’s networking, security, management and monitoring software with AWS’ cloud services.

It configures on-premises Kubernetes environments to be consistent with Amazon Elastic Container Service for Kubernetes (EKS), the companies said. The Amazon EKS is a managed Kubernetes service for managing software containers, which became generally available in June this year.

Today, applications have become the lifeblood of enterprises. Hence, enterprises are looking to develop and deploy applications across public and private clouds without obstruction. Being able to build applications easily and get them up and running quickly can provide a huge competitive advantage.

Cisco Hybrid Solution for Kubernetes on AWS aims to let developers deploy and manage containerized applications more easily across on-premises environments and the AWS cloud. It will allow them to focus on building and using applications, speeding up innovation and reducing time to market.

“Today, most customers are forced to choose between developing applications on-premises or in the cloud. This can create a complex mix of environments, technologies, teams and vendors. But they shouldn’t have to make a choice,” said Kip Compton, senior vice president, Cloud Platform and Solutions at Cisco.

“Now, developers can use existing investments to build new cloud-scale applications that fuel business innovation. This makes it easier to deploy and manage hybrid applications, no matter where they run. This allows customers to get the best out of both cloud and their on-premises environments with a single solution.”

The new solution will come with a common set of tools for on-premises environments and AWS, simplifying the management of on-premises Kubernetes infrastructure. It will help IT operations teams reduce complexity and costs.

Cisco Hybrid Solution for Kubernetes on AWS will also enable containerized applications to work with existing resources and production environments. This provides another advantage to both developers and IT operation teams.

“More customers run containers on AWS and Kubernetes on AWS than anywhere else,” said Terry Wise, Global Vice President of Channels & Alliances, Amazon Web Services, Inc.

“Our customers want solutions that are designed for the cloud and Cisco’s integration with Amazon EKS will make it easier for them to rapidly deploy and run containerized applications across both Cisco-based on-premises environments and the AWS cloud.”

Also read: Cisco acquires Duo Security for multi- and hybrid-cloud security

The new solution is expected to be available in December 2018. Cisco will provide it either as a software solution requiring only Cisco Container Platform, or as a hardware/software solution with Cisco Container Platform running on Cisco HyperFlex.


Reliam, Stratalux and G2 Tech Group merge into Mission

Mission, a managed services and consulting company for cloud platforms including Amazon Web Services (AWS) and Microsoft Azure, debuts today as the new company born from the recent mergers of AWS Partner Network (APN) Advanced Consulting Partners Reliam, Stratalux, and G2 Tech Group.

Since these mergers, completed earlier in 2018, the combined organization has grown revenue sixfold, doubled its customer base to 175 enterprises, and more than tripled its headcount.

Operating under the tagline “Mission – Let’s Achieve Yours,” Mission promises a uniquely immersive approach to understanding each customer’s specific cloud needs and long-term vision, and then expertly harnesses the power of the cloud to achieve those goals.

Mission commands deep expertise in the architecture, migration, management, performance, and cost-optimization of cloud environments, and applies that acumen with a passionate belief in cloud technologies as the greatest tool for delivering effective and lasting business transformation.

The result is a company that teams with organizations across industries to help them get where they need to go, and to help them develop more agile and more creative applications faster than ever.

Building on the foundation of three well-established and thriving cloud service providers, Mission now features coast-to-coast geographic coverage with offices in Los Angeles and Boston.

Mission’s uncommon breadth of AWS technical and strategic proficiencies includes more than 90 AWS certifications held among the company’s cloud operations professionals and solutions architects. Mission is also an APN Advanced Consulting Partner who has achieved Competency Partner status in DevOps, Healthcare, and Life Sciences.

The company has particularly deep domain expertise across the media and entertainment, mobile gaming, digital media, education, consumer goods, marketing, ecommerce, and Software-as-a-Service (SaaS) verticals.

With the new brand comes an updated and more robust cloud services structure. The core of Mission’s managed services – built to free organizations from the burden of cloud management and optimization so they can focus on building their applications and growing their businesses – includes:

  • Managed Cloud: 24x7x365 support, adherence to current best practices, automated maintenance, advanced monitoring, managed backups, expert consultation.
  • Managed DevOps: IT roadmapping, accelerated deliveries, advanced and rapid support, CI/CD, long-term consulting.
  • Managed Security: real-time data monitoring, compliance enforcement, data archiving, AI/ML protection, attack mitigation.
  • Managed Application Performance: real-time metrics, proactive performance management, alert response.
  • Professional Services: AWS Well-Architected Reviews, migrations, configuration management, architecture, best practices implementation, cloud transformations, strategic consulting.
  • Cloud Optimization: detailed spend attribution, best practices benchmarking, cost optimization, simplified invoicing.

“We have a vision to guide and support organizations worldwide to confidently embrace and leverage the transformative power of cloud computing, and to do good for themselves, their teams, their customers, and their stakeholders,” said Simon Anderson, CEO, Mission.

“The next twenty years of global development will require rapid innovation, deep insights, and flexible, cost-effective cloud platforms that deliver rapid, tangible value to all. Mission will play a leading role in partnering with your company to design your future value, and to achieve your goals.”

Also read: Reliam Adds G2 Tech Group to Bolster AWS Managed Services Expertise and Provide Customers with Coast-to-Coast Geographic Coverage

Customers can connect with Mission and explore its offerings – including free on-demand consultations with AWS-certified solutions architects – at missioncloud.com


AWS announces EC2 compute instances for its Snowball Edge devices

Amazon Web Services (AWS) has introduced EC2 compute instances for AWS Snowball Edge devices.

The EC2 instances are virtual servers in Amazon’s Elastic Compute Cloud service. These compute instances on Snowball Edge devices combine 100 terabytes of storage with an Intel Xeon D processor running at 1.8 gigahertz in a portable, ruggedized device.

AWS launched Snowball Edge devices to help companies easily migrate huge workloads from on-premises infrastructure to the cloud. The devices also allow customers to create object storage pools and run automated Lambda compute functions to handle tasks while the systems are on-premises.

The new EC2 instances on Snowball expand the functionality of Snowball devices. AWS said that Snowball devices now support any combination of instances that consume up to 24 virtual CPUs and 32 GiB of memory.
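Those device limits amount to a simple capacity check. The sketch below uses hypothetical per-instance sizes (the article doesn't list Snowball Edge's actual instance catalog) and encodes only the published ceilings of 24 vCPUs and 32 GiB of memory:

```python
# Capacity check for a Snowball Edge device: any mix of instances may run
# as long as the totals stay within 24 vCPUs and 32 GiB of memory.
DEVICE_VCPUS = 24
DEVICE_MEM_GIB = 32

def fits_on_device(instances):
    """instances: list of (vcpus, mem_gib) tuples for the requested instances."""
    total_vcpus = sum(v for v, _ in instances)
    total_mem = sum(m for _, m in instances)
    return total_vcpus <= DEVICE_VCPUS and total_mem <= DEVICE_MEM_GIB

# Hypothetical sizes: two 8-vCPU/8-GiB instances plus one 4-vCPU/16-GiB one.
print(fits_on_device([(8, 8), (8, 8), (4, 16)]))   # True: 20 vCPUs, 32 GiB fit
print(fits_on_device([(16, 16), (16, 16)]))        # False: 32 vCPUs exceed 24
```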

These devices can be used to collect and process data on-premises in hostile environments where internet connections are limited or non-existent. This will be helpful for companies in manufacturing and other industries where data often can’t be moved to the cloud because of slow internet connectivity.

Customers can use Snowball Edge devices on-premises for as long as they like. AWS bills a one-time setup fee for each job; after ten days of usage, customers pay an additional per-day fee for each device. The devices are also available under one- or three-year plans.
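The pricing structure described above (a one-time setup fee per job, then a per-day fee once usage passes ten days) can be sketched as a small calculator; the fee amounts used here are placeholders, not AWS's actual prices:

```python
FREE_DAYS = 10  # days of on-premises usage covered by the one-time setup fee

def snowball_edge_cost(days_used, setup_fee, per_day_fee):
    """Total charge for one device: setup fee, plus per-day fees after day 10."""
    extra_days = max(0, days_used - FREE_DAYS)
    return setup_fee + extra_days * per_day_fee

# Placeholder fees for illustration only.
print(snowball_edge_cost(7, 300.0, 30.0))    # 300.0: within the first ten days
print(snowball_edge_cost(15, 300.0, 30.0))   # 450.0: five extra days billed
```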

The new Snowball Edge devices are now generally available.

Also read: Amazon’s Elastic Service for Kubernetes is ready for prime time

Along the same lines, the public cloud giant announced three new instance types that are in the works and will be available soon. The new instances, Z1d, R5, and R5d, will enable customers to choose the instance type that best matches their applications.

For example, the Z1d instances are ideal for Electronic Design Automation (EDA), relational database workloads, and HPC workloads. On the other hand, the R5 instances have been built to support high-performance databases, distributed in-memory caches, in-memory analytics, as well as big data analytics.
