Basho is changing the way customers build their Big Data, IoT, and hybrid cloud applications. As I alluded to in my last post, Basho works with many of the world’s largest companies to help ensure their applications are highly available, massively scalable, and operationally easy to use. With these guiding principles in mind, we have developed the Basho Data Platform to simplify the complex technology stack required in the data tier for enterprise active workloads. These workloads span NoSQL databases, caching, real-time analytics, and search to meet the demanding needs of enterprise applications.
What is Basho Data Platform?
Basho Data Platform takes what is today a complex set of components for developers to integrate and operate, let alone run at production scale, and makes it simple, so that the enterprise can quickly derive value from its data tier, whether capturing or analyzing data in real time. Basho Data Platform integrates Riak® KV with Apache Spark™, Redis, and Apache Solr™, and controls the replication and synchronization of data between these components. This integration provides:
- Reduced operational complexity
- Enhanced high availability and fault tolerance across components
- Integrated real-time analytics with Apache Spark and Riak KV
- Faster application performance with integrated Redis caching and Riak KV
- Optimized search with Apache Solr and Riak KV integration
Basho Data Platform Functionality
Basho Data Platform provides a comprehensive set of data services that take the complexity out of manually deploying and managing multiple instances of Spark, Redis, and Solr tightly coupled with multiple clusters of Riak. These data services are integrated as a set of Core Services, Storage Instances, and Service Instances, which jointly form Basho Data Platform. This is illustrated in the origami graphic below.
Basho Data Platform: Core Services
Let’s look further at two of the Core Services. These are the foundation of the Basho Data Platform.
Data Replication and Synchronization
Data can be replicated and synchronized across and between Storage and Service Instances to ensure accuracy with no data loss and high availability. Apache Spark can execute queries against imported data from Riak and existing Spark RDDs, and Spark data can be persisted in Riak KV. Apache Solr indexes are also synchronized with Riak KV.
Cluster Management
Integrated cluster management automates deployment and configuration of Riak KV, Riak S2, Spark, and Redis. Once deployed in production, the platform automatically detects issues and restarts Redis instances or Spark clusters.
You can learn more about the Core Services on our website.
Basho Data Platform: Service Instances
With Riak 2.0 we introduced integration of Riak KV (formerly Riak) with Apache Solr. We explained this integration in the blog Write it Like Riak; Query it Like Solr. Basho Data Platform builds on this, adding support for Apache Spark and Redis. You can now also Write it Like Riak; Analyze it Like Spark, as well as Write it Like Riak; Cache it Like Redis.
Apache Spark Add-On
Allows you to automatically synchronize data between Riak KV and Spark for real-time analytics. With cluster management and the Riak KV Ensemble for built-in leader election, you no longer need to deploy ZooKeeper.
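To make the add-on’s role concrete, here is a minimal, self-contained Python sketch of the kind of real-time aggregate a Spark job might run once data has been synchronized out of Riak KV. The `riak_export` records and the `average_temperature` function are illustrative stand-ins, not the platform’s actual API; in the real platform, the Spark add-on handles moving this data into a Spark RDD for you.

```python
# Stand-in for records synchronized out of a Riak KV bucket by the Spark add-on.
riak_export = [
    {"key": "sensor-1", "temp": 21.5},
    {"key": "sensor-2", "temp": 23.0},
    {"key": "sensor-3", "temp": 19.3},
]

def average_temperature(records):
    """The kind of real-time aggregate a Spark job might compute."""
    temps = [r["temp"] for r in records]
    return sum(temps) / len(temps)

print(round(average_temperature(riak_export), 2))  # 21.27
```

In the platform itself, the same aggregation would run as a distributed Spark job over data the add-on keeps in sync with Riak KV, rather than over an in-memory list.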
Redis Add-On
Provides integration with Redis caching to improve application performance. Data is automatically synchronized between Redis and Riak KV, and the Redis cache is populated from the Riak KV persistent store on a cache miss. Built-in cluster management, high availability, data synchronization, and automatic data sharding make Redis enterprise-grade.
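The cache-miss behavior described above is the classic cache-aside pattern. Here is a minimal sketch of that flow, using plain Python dictionaries as stand-ins for the Redis cache and the Riak KV store (the key names and `get` helper are illustrative; the platform performs this synchronization for you):

```python
riak_store = {"user:42": {"name": "Ada"}}  # stand-in for the Riak KV persistent store
redis_cache = {}                           # stand-in for the Redis cache

def get(key):
    """Cache-aside read: serve from the cache, fall back to Riak KV on a miss."""
    if key in redis_cache:
        return redis_cache[key]            # cache hit: served from Redis
    value = riak_store.get(key)            # cache miss: read the persistent store
    if value is not None:
        redis_cache[key] = value           # populate the cache for subsequent reads
    return value

get("user:42")                  # first read misses the cache and fills it
assert "user:42" in redis_cache  # subsequent reads are served from Redis
```

The payoff of the integration is that application code only ever issues the read; population of the cache on a miss, and keeping Redis and Riak KV in sync afterward, is handled by the platform.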
You can learn more about the Service Instances on our website.
Learn More About the Basho Data Platform
The Basho Data Platform will be available in June 2015.
Vice President, Product & Marketing