Cloud News

Google Cloud teams up with open source leaders in data management and analytics

Google Cloud is bringing the best of open source to its customers by establishing partnerships with leading open source-focused companies working in data management and analytics.

These companies include Confluent, DataStax, Elastic, InfluxData, MongoDB, Neo4j, and Redis Labs.

The open source database market has been growing rapidly in recent years. According to a report, over 70% of new applications built by corporate players will run on open source database management systems. Further, around 50% of existing relational database installations built on commercial DBMS platforms will be converted to open source platforms or are already in the process of conversion.

This shows that enterprises want to adopt open source easily, and in a cloud-native way. Google's new partnerships will help them achieve this. Further, they won't feel locked in or out while using open source technologies.

Google Cloud customers will get fully managed services that run in the cloud, optimized for performance and low latency between the service and the application.

There will be a unified UI for managing apps. Customers will be able to provision and manage their service from the Google Cloud Console. Further, they will receive a single invoice from Google Cloud, with Google Cloud support covering most of these partners.

“We’ve always seen our friends in the open-source community as equal collaborators, and not simply a resource to be mined. With that in mind, we’ll be offering managed services operated by these partners that are tightly integrated into Google Cloud Platform (GCP), providing a seamless user experience across management, billing and support,” wrote Google Cloud in a blog post.

“This makes it easier for our enterprise customers to build on open-source technologies, and it delivers on our commitment to continually support and grow these open-source communities.”

Also read: Kubernetes 1.14 brings support for Windows containers

Additionally, Google Cloud will work with its partners to enable integrations with other GCP services like Stackdriver for monitoring and IAM.


MariaDB unveils Enterprise Server for mission-critical workloads

The prominent database provider MariaDB has unveiled a new enterprise-grade server to provide high stability and security for mission-critical applications.

Announced at the MariaDB OpenWorks conference in New York, the new MariaDB Enterprise Server comes with powerful auditing, and faster, more reliable backups for large databases. It will provide end-to-end encryption for all the data in MariaDB clusters, so that production workloads can't be compromised by anyone.

Enterprises currently use MariaDB Community Server, but the features available in the new MariaDB Enterprise Server are not available in Community Server. To allow customers on older releases to access the new features, the company has backported the new server to earlier supported versions.

MariaDB Enterprise Server aims to provide organizations a hardened database solution for production-grade environments. It will be quality-assured and performance-tested at scale for enterprise applications.

“We’re seeing that our enterprise customers have very different needs from the average community user,” said Max Mether, VP of Server Product Management, MariaDB Corporation.

“These customers are working on a completely different scale with a strong focus on stability and security. In order to be able to cater to these requirements, it is clear that we need to focus on a different solution by creating another version of MariaDB Server specifically focused on enterprise production workloads.”

For increased stability, the company will focus on addressing defects in the new server as soon as they are identified.

Moreover, MariaDB said that it will distribute the new server securely, with a clearly established chain of custody, ensuring that the binaries can't be compromised.

MariaDB Enterprise Server 10.4 is expected to be available with the next version of MariaDB Platform in Spring 2019. The company will also release GA versions of the new server (v10.2 and v10.3) in Spring.

Also read: MariaDB acquires Clustrix to advance its database platform

Additionally, MariaDB has appointed the cloud database leader Mark Porter as an advisor to the board of directors. Mark previously worked on Amazon Web Services' Relational Database Service (RDS) and at Oracle. He will leverage his expertise in cloud, distributed systems, and database operations at scale to advance MariaDB's cloud strategy.

“Mark’s guidance will be a tremendous asset in building a next-generation MariaDB cloud,” said Michael Howard, CEO, MariaDB Corporation.

“Mark has a proven record of operating and scaling database services while driving rapid growth. SkySQL is designed from the ground up to offer the best MariaDB service for multi cloud, including private cloud environments. It offers enterprise product capabilities beyond the MariaDB community server, that is used widely in public clouds, to ensure quality of service, security and features otherwise only found in proprietary legacy databases.”

Image source: MariaDB


Google’s NoSQL database service Cloud Firestore now up for grabs

Google’s serverless, NoSQL document database service Cloud Firestore has finally become generally available for mobile, web, and internet of things (IoT) applications.

The service had been available in beta since October 2017. Cloud Firestore is a fully managed, cloud-native database service that allows developers to easily store, sync, and query data for their applications.

Integrated with Google Cloud Platform (GCP), as well as Google's mobile development platform Firebase, the service can handle security and authorization, infrastructure, edge data storage, and synchronization. To provide an improved experience to developers and simplify app development, Google has included live synchronization, offline support, and ACID transactions across hundreds of documents and collections.

For security of data, Cloud Firestore has built-in Identity and Access Management (IAM) and Firebase Auth.

With general availability, a number of crucial programs, such as GCP's deprecation policy, now apply to Cloud Firestore. The service also now supports HIPAA compliance.

Cloud Firestore is now a part of GCP's official Service Level Agreements (SLAs), which means users are guaranteed high uptime for both multi-region and regional instances.

“Building with Cloud Firestore means your app can seamlessly transition from online to offline and back at the edge of connectivity. This helps lead to simpler code and fewer errors. You can serve rich user experiences and push data updates to more than a million concurrent clients, all without having to set up and maintain infrastructure,” wrote Amit Ganesh (VP Engineering) and Dan McGrath (Product Manager), Google, in a blog post.
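The push-based model the post describes, where data updates flow to connected clients rather than being polled, can be sketched as a tiny in-memory document store that notifies registered listeners on every write. This is purely an illustration of the pattern with hypothetical names, not the Cloud Firestore API; the real client libraries expose equivalent snapshot listeners and add offline caching on top.

```python
# Toy sketch of Firestore-style live synchronization: clients register
# listeners on a document, and every write pushes the new snapshot to them.
# Illustrative only; real Firestore clients handle this (plus offline
# persistence) for you.

class LiveDocument:
    def __init__(self):
        self._data = {}
        self._listeners = []

    def on_snapshot(self, callback):
        """Register a callback invoked with the latest document data."""
        self._listeners.append(callback)
        callback(dict(self._data))  # deliver the current state immediately

    def set(self, fields):
        """Write fields and push the new snapshot to every listener."""
        self._data.update(fields)
        snapshot = dict(self._data)
        for cb in self._listeners:
            cb(snapshot)

doc = LiveDocument()
seen = []
doc.on_snapshot(seen.append)   # a client subscribes
doc.set({"status": "online"})  # a write arrives: listener is pushed the update
doc.set({"unread": 3})         # another write, another push
```

The key design point the sketch captures is that the write path, not the reader, is responsible for fan-out, which is what lets a service push updates to many concurrent clients without those clients polling.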

Along with SLA availability, Google has also announced lower pricing (up to 50% off) for most regional instances of Cloud Firestore. The new pricing takes effect on March 3, 2019, and varies by location.

To help developers monitor Cloud Firestore read, write, and delete operations in near-real time, Google is integrating the service with Stackdriver. The integration is currently in beta.

Also read: Google secures DNS traffic with DNS-over-TLS support for its public DNS

More features will be added to Cloud Firestore in the coming weeks, including querying for documents across collections and incrementing database values without a transaction.

Google has also added new datacenter locations for hosting Cloud Firestore data: a multi-region location in Europe, and nine more regional locations across Asia, Australia, North America, South America, and Europe.

Cloud Firestore datacenter locations

Image source: Google


MongoDB Atlas now available on Azure for free

Microsoft has announced the availability of cloud database MongoDB Atlas for free on Azure.

The MongoDB Atlas free tier on Azure is known as M0. Users who opt for M0 get 512 MB of storage, considered ideal for learning MongoDB, prototyping, and early development.

MongoDB is one of the leading databases worldwide, and the free tier of Atlas runs on the latest MongoDB version. Atlas automates time-consuming database administration tasks like provisioning, setup, upgrades, and backups. When users deploy clusters on Azure, the databases run according to operational and security best practices.

Microsoft noted that, like larger MongoDB Atlas cluster types, M0 clusters provide end-to-end encryption, high availability, and fully managed upgrades.

Further, M0 clusters allow teams to perform create, read, update, and delete operations on their data directly from the browser via the built-in Data Explorer, accelerating development.

“This announcement is a part of our broader goal to give our customers immense choice and make it incredibly easy to get started on Azure for anyone in the world,” wrote Donald Petersen, Senior Partner Development Manager, Microsoft Azure, in a blog post.

Microsoft has integrated MongoDB Atlas on Azure with a number of key Azure analytics services, such as Power BI and Azure Databricks. The Power BI integration lets users access data stored in MongoDB and apply Power BI's analytics and visualization tools to gain insights, which can then be shared with colleagues.

Also read: Microsoft to refresh UI and UX of Visual Studio 2019

Although MongoDB Atlas is available in 26 Azure regions, the free tier is currently limited to East US (Virginia), East Asia (Hong Kong), and West Europe (Netherlands).

The MongoDB Atlas free tier can be created by signing up for MongoDB Atlas and selecting Azure as the cloud of choice.

Image source: MongoDB


Oracle adds self-driving database cloud service to its Autonomous database portfolio

Oracle has rolled out yet another database offering in its Autonomous Database portfolio. Called Oracle Autonomous NoSQL Database, it is a self-driving database cloud service targeted at developers.

Autonomous NoSQL Database will be delivered as a fully managed service designed to handle NoSQL applications that require low latency, data model flexibility, and elastic scaling. Such applications include UI personalization, online fraud detection, gaming, and shopping carts.

Powered by machine learning and automation capabilities, the new offering is aimed at workloads that need fast, predictable responses to simple operations.

It comes with APIs that allow developers to focus on application development rather than managing servers, storage expansion, cluster deployments, software installation, and backups.

Oracle Autonomous NoSQL Database automatically allocates the resources required to meet dynamic workload needs once developers specify the throughput and capacity.

For interoperability between standard relational and standard JSON data models, the service comes with a non-proprietary SQL language. Using it, developers can run the same application in the cloud or on-premises without platform lock-in.

Along with this, it includes a software development kit and support for major development languages like Python, Node.js, and Java.

“We continue to leverage our revolutionary autonomous capabilities to transform the database market,” said Andrew Mendelsohn, executive vice president, Oracle Database.

“Our latest self-driving database cloud service, Oracle Autonomous NoSQL Database, provides extreme reliability and performance at very low costs to achieve a highly flexible application development framework.”

Since the new offering runs on Oracle Cloud Infrastructure, it automates key operational processes such as patching, tuning, and upgrading, so that critical infrastructure can run itself.

Also read: Oracle expands autonomous cloud portfolio with new services focused on mobile development and data integration

Oracle claimed that its new offering provides high availability at up to 70% lower cost than Amazon DynamoDB.

Image source: Oracle


Developers can now undo database clusters on Amazon Aurora with new backtrack feature

Amazon Web Services (AWS) has added a new ‘undo’ feature to its Aurora database engine. Called Backtrack for Amazon Aurora, the feature will allow users to quickly move an Aurora database to a prior point in time without having to restore data from a backup.

On many occasions, developers compose a query and run it against the production database, only to realize later that they forgot to add a clause, dropped the wrong table, or made some other mistake.

The backtrack option helps developers in such scenarios by letting them pause the application and select the point in time they want to go back to. AWS said log information can be retained going back up to 72 hours.

When a backtrack is initiated, Aurora pauses the database, closes open connections, and drops uncommitted writes. It then rolls the database back to the time just before the error occurred.

Once the backtrack process completes, users can resume the application and proceed as if nothing had gone wrong. Furthermore, if they realize they have gone back a bit too far, they can backtrack again to a later time.

“I’m sure you can think of some creative and non-obvious use cases for this cool new feature. For example, you could use it to restore a test database after running a test that makes changes to the database. You can initiate the restoration from the API or the CLI, making it easy to integrate into your existing test framework,” wrote Jeff Barr, AWS Chief Evangelist, in a blog post.

Aurora is built on a distributed, log-structured storage system, so a new log record is generated every time the database changes. Each log record is identified by a log sequence number (LSN). When the backtrack feature is enabled, the cluster provisions a first-in, first-out (FIFO) buffer to store LSNs, enabling quick access and recovery times measured in seconds.
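The mechanism can be sketched with a timestamped FIFO log: every change appends a record carrying a rising LSN, and backtracking to a point in time discards every record newer than the target before the remaining log is replayed. The following is a stdlib-only illustration of that idea, not Aurora's actual implementation; all names are hypothetical.

```python
from collections import deque

# Illustrative sketch of backtracking over a log-structured store:
# each change is an (lsn, timestamp, key, value) record in a FIFO buffer,
# and backtracking to time T drops every record made after T.

class BacktrackLog:
    def __init__(self):
        self._log = deque()   # FIFO buffer of log records
        self._next_lsn = 1

    def write(self, timestamp, key, value):
        """Append a change as a new log record with the next LSN."""
        self._log.append((self._next_lsn, timestamp, key, value))
        self._next_lsn += 1

    def backtrack(self, to_time):
        """Discard every log record newer than `to_time`."""
        while self._log and self._log[-1][1] > to_time:
            self._log.pop()

    def state(self):
        """Replay the surviving log to materialize the current data."""
        data = {}
        for _lsn, _ts, key, value in self._log:
            data[key] = value
        return data

log = BacktrackLog()
log.write(100, "orders", 5)
log.write(200, "orders", 0)   # oops: an accidental destructive write
log.backtrack(to_time=150)    # roll back to before the mistake
```

After the backtrack, `log.state()` reflects only the writes made up to the chosen point in time, which mirrors why recovery is fast: nothing is restored from a backup, records are simply dropped from the tail of the log.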

Barr further wrote that the backtrack feature applies to newly created MySQL-compatible Aurora database clusters and to MySQL-compatible clusters that have been restored from a backup. Backtrack cannot be enabled for an already-running cluster.

Also read: AWS updates Amazon S3 to reduce costs and improve performance

The backtrack feature is now available in all AWS regions where Amazon Aurora runs.


MongoDB removes need for traditional relational database with version 4.0

MongoDB, a leading database platform provider, has added support for multi-document ACID transactions in version 4.0.

The MongoDB database platform, designed for modern, general-purpose database workloads, has long been a developer favorite for the software and applications they build, owing to the power and flexibility of its document model.

However, it previously supported ACID transactions only for single documents. Multi-document ACID transactions are considered critical for use cases such as finalizing deal terms and commerce processing. ACID (atomicity, consistency, isolation, and durability) is a set of properties of database transactions that guarantee validity even in the event of power failures, errors, and the like.

“Adding ACID transactions support to MongoDB removes any hesitation for developers when selecting a database. They no longer need to sacrifice speed of development over concerns that they might need transactions in a future application,” said Eliot Horowitz, CTO and cofounder, MongoDB. “Thousands of customers have shown how easy it is to use MongoDB to build applications for a wide range of mission-critical use cases. We’re excited to continue to deliver on our commitment to developer productivity by making it easier than ever to build any kind of application on MongoDB.”

Previously, developers who needed multi-document transactions had to turn to traditional relational databases. Support for multi-document ACID transactions eliminates the need for organizations to adopt a traditional relational database for this purpose.
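What a multi-document transaction buys is atomicity across several documents: either all of the writes commit, or none do. The sketch below illustrates that property with a stdlib-only in-memory store; it is not MongoDB code (the names and the balance constraint are invented for the example). With a real MongoDB 4.0 replica set, the official drivers expose the equivalent via sessions with start/commit/abort transaction calls.

```python
# Stdlib-only sketch of multi-document atomicity: writes are staged and
# applied all-or-nothing, so a failure mid-transaction leaves the store
# untouched. Illustrative only; MongoDB drivers expose this through
# sessions (start / commit / abort transaction).

class TinyStore:
    def __init__(self):
        self.docs = {}

    def transaction(self, writes):
        """Apply a list of (doc_id, fields) writes atomically."""
        # Stage copies of every touched document; the live store is untouched.
        staged = {doc_id: dict(self.docs.get(doc_id, {})) for doc_id, _ in writes}
        try:
            for doc_id, fields in writes:
                staged[doc_id].update(fields)
                # Hypothetical integrity constraint for the example.
                if staged[doc_id].get("balance", 0) < 0:
                    raise ValueError("constraint violated")
        except ValueError:
            return False          # abort: nothing is written
        self.docs.update(staged)  # commit: all writes become visible at once
        return True

store = TinyStore()
store.docs = {"alice": {"balance": 50}, "bob": {"balance": 10}}

# Transfer 30 from alice to bob: two documents, one atomic unit.
ok = store.transaction([("alice", {"balance": 20}), ("bob", {"balance": 40})])

# An overdraft attempt aborts, leaving BOTH documents unchanged.
bad = store.transaction([("alice", {"balance": -5}), ("bob", {"balance": 65})])
```

The failed transfer is the interesting case: without multi-document atomicity, the write to bob could land while the write to alice fails, which is exactly the inconsistency that previously pushed such workloads onto relational databases.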

“MongoDB disrupted the database industry by introducing the document model, a more intuitive, flexible and performant way for developers to work with data, designed from the ground up for scalability and resiliency. As a result, MongoDB was widely adopted by developers worldwide and became the most popular modern database,” said Dev Ittycheria, President and CEO, MongoDB. “With this announcement, developers have the peace of mind of using a modern general-purpose database that can literally address any use case, and do it faster and cheaper than a traditional database.”

The database company had been working on multi-document support for more than three years. It acquired the WiredTiger storage engine, introduced a global logical clock, and refactored cluster metadata management, among other efforts, to build a consistent and durable transaction option.

Also read: Amazon and Salesforce planning to move away from Oracle database

MongoDB currently has around 5000 customers in 85 countries. The company claimed that its platform has been downloaded over 30 million times.

MongoDB 4.0 will ship in the summer of 2018. For now, its beta program is available.


Amazon and Salesforce planning to move away from Oracle database

Amazon seems to have made some progress toward open source database technology and is likely to move away from Oracle's database software soon.

On Tuesday, The Information reported that, along with AWS, Salesforce too is venturing away from Oracle. AWS and Salesforce have used Oracle's software for a long time, since it was one of the limited options for building a web-scale cloud computing business.

Last month, Oracle CTO Larry Ellison was asked if customers were moving away from their database, and this is what he replied- “Let me tell you who’s not moving off of Oracle, a company you’ve heard of that gave us another $50 million this last — this quarter to buy Oracle Database and other Oracle technologies. That company is Amazon. They’re not moving off of Oracle. Salesforce isn’t moving off of Oracle.”

Amazon has been trying to move off Oracle since 2004, when a database migration outage caused downtime for the company. Ellison's statement might have added fuel to the fire.

Anonymous sources stated that Salesforce and AWS are developing their own database software. Salesforce is building a new database called Sayonara, while AWS may move toward open source NoSQL technology.

Oracle's database technology supports many major cloud providers, and if AWS and Salesforce find independent solutions, chances are that other Oracle database customers might also move away.

Also read: Nvidia prohibits datacenter deployment of GeForce GPUs

It will be interesting to see how AWS and Salesforce handle the database transition, because Salesforce once tried to move away from Oracle in 2013 but eventually had to partner with it again.


Gartner recognizes Microsoft as a leader in OPDBMS market for third time

Microsoft has been recognized as a Leader in Gartner's latest Magic Quadrant for Operational Database Management Systems (OPDBMS) for its completeness of vision and ability to execute in the operational DBMS market.

Microsoft offers SQL Server and Azure SQL Database (a DBMS PaaS based on SQL Server) in the OPDBMS market, and these products have made Microsoft a Leader in this market for the third time.

Microsoft also markets Azure Cosmos DB, a DBMS PaaS solution compatible with the Azure Table, MongoDB, SQL, and Gremlin graph APIs.

The OPDBMS market includes both relational and nonrelational DBMS products that are appropriate for a wide range of enterprise-grade transactional applications, including purchased business applications such as CRM and ERP, IoT and security event management systems, and custom transactional systems.

Microsoft has been investing heavily in the operational DBMS market to deliver more features, align its Azure database platform, and provide more value to customers who pay for Software Assurance. Microsoft is rapidly developing its data and analytics portfolio to modernize the data estate.

Microsoft launched SQL Server 2017, which brought SQL Server to Windows, Linux, and Docker containers for the first time. Developers can discover new relationships in their data using its graph data management and analysis capabilities, which help with detecting fraud.

The free Developer Edition of SQL Server and the Database Migration Service help developers migrate SQL Server and Oracle databases to Azure SQL Database.

Microsoft positions SQL Server as among the most secure DBMSs in the industry, offering a layered protection approach with innovative technologies like Always Encrypted.

Also read: Microsoft launches SQL Server 2017 with Linux and Docker support

Microsoft received the highest reference customer scores (25% within a year) among all vendors for overall experience, value for money, and meeting needs.


Amazon Aurora now available with PostgreSQL compatibility

The public cloud computing leader Amazon Web Services (AWS) has announced the general availability of Amazon Aurora with PostgreSQL compatibility. Amazon Aurora is AWS's database engine, merging the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases.

Commercial databases provide high performance, but they are expensive and complex to manage. Open source databases cost less, but customers don't get commercial-grade performance. Customers using Amazon Aurora with PostgreSQL compatibility get several times the performance of a typical PostgreSQL database, along with the scalability, durability, and security capabilities of commercial-grade databases at one-tenth the price.

“When we made Amazon Aurora available in 2015, for the first time, customers had a cost-effective and high-performance alternative to commercial databases like Oracle and SQL Server—and this is a big part of why Amazon Aurora is the fastest-growing service in the history of AWS,” said Raju Gulabani, Vice President, Databases, Analytics, and Machine Learning, AWS. “While we’ve been amazed at the growth of Amazon Aurora’s MySQL-compatible edition, many of our enterprise customers anxious to move on from their old-world database providers have been waiting for Amazon Aurora’s PostgreSQL-compatible edition to launch into general availability. We’re excited to help these customers take another step toward database freedom.” 

Customers pay hourly for each Amazon Aurora database instance they use, with no upfront investment, and storage capacity scales automatically without downtime or performance degradation.

Amazon Aurora with PostgreSQL compatibility supports up to 64 terabytes of storage. Workloads can be migrated from another database to Aurora using the AWS Database Migration Service, which is free of charge for the next six months.

Also read: AWS makes a huge change to its cloud pricing 

The new service is compatible with PostgreSQL version 9.6.3, and is currently available in the US East (Northern Virginia), US East (Ohio), US West (Oregon), and EU (Ireland) regions, with more to follow.