Tag Archives: API

Superfeedr Deploys Riak for “The Cave”

September 12, 2013

Superfeedr provides a real-time API to any application that wants to produce (publishers) or consume (subscribers) data feeds without wasting resources or maintaining an expensive and changing infrastructure. It fetches and parses RSS or Atom feeds on behalf of its users, and new entries are then pushed to subscribing applications using a webhook mechanism (PubSubHubbub) or XMPP. Superfeedr’s Google Reader API replacement is one example of a popular API built on this infrastructure, and it has backed up much of Google Reader’s content.

Riak is used by Superfeedr to store the content from all feeds so users can retrieve past content (including the Google Reader API replacement), even if the feeds themselves may not include these entries anymore. This Riak datastore is referred to as “the cave.”

When Superfeedr first built “the cave” datastore, they opted for a cluster of large Redis instances (five servers with 8GB of memory each) due to its inherent speed. However, they realized they needed a more durable system, and manually sharding feeds across the cluster made it difficult to scale beyond storing a couple of entries per feed. The scaling problem became even more pressing because the average stored entry was 2KB: with nearly 1,000 items per feed and 50 million feeds, that translated to over 93TB of data, and growing quickly.

They chose to move “the cave” to Riak due to its focus on availability (delivering stale data was preferable to delivering no data) and its ease of scaling. According to Superfeedr Founder Julien Genestoux, “Riak solves the scalability problem elegantly. Through consistent hashing, our data is automatically distributed across the cluster and we can easily add nodes as needed.” While Riak has lower read performance than Redis, this proved not to be a problem: they found it easy to put caches in front of Riak when they needed to serve content faster.

Though Superfeedr found it easy to set up their Riak cluster, the default behavior for handling conflicts had to be adjusted for their use case. By working with Basho and the Riak community, they were able to find the right settings and optimize their conflict resolution algorithm. For more information on Riak’s configurable behaviors, check out our four-part blog series.

Superfeedr went into production in two phases: they started storing production data in the beginning of 2013 and began serving that data about two months later. During this period, Superfeedr was able to design their cluster infrastructure and thoroughly performance test it with actual production data.

Two types of objects are stored in Superfeedr’s Riak datastore: feeds and entries. A feed is stored as a collection of internal ids that point to its entries, along with some meta-information, such as the title. Entries correspond to individual feed entries and are indexed by feedID-entryID, allowing multiple entries to be stored for each feed. This indexing scheme allows entries to be retrieved through a MapReduce job, even if the feed element is lost.
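To make the layout concrete, here is a rough sketch of that key scheme using the official Erlang client (riakc). The bucket names, ids, and JSON payloads are invented for illustration; this is not Superfeedr’s actual code or stack.

%% One object per feed, keyed by its internal feed id, and one object per
%% entry, keyed "feedID-entryID" so entries stay reachable on their own.
{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
FeedKey  = <<"12345">>,        %% hypothetical internal feed id
EntryKey = <<"12345-67890">>,  %% feedID-entryID
Feed  = riakc_obj:new(<<"feeds">>, FeedKey,
                      <<"{\"title\":\"Example Feed\",\"entries\":[\"12345-67890\"]}">>,
                      "application/json"),
Entry = riakc_obj:new(<<"entries">>, EntryKey,
                      <<"{\"title\":\"Hello\",\"content\":\"...\"}">>,
                      "application/json"),
ok = riakc_pb_socket:put(Pid, Feed),
ok = riakc_pb_socket:put(Pid, Entry).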

At write time, Superfeedr writes both the feed element and the entry element. When they query for a feed, they issue a MapReduce job to read both the feed element and the desired number of entry items. They also use a pagination mechanism to limit the resources consumed for each request, with an arbitrary limit of 50 entries.
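Continuing the same hypothetical sketch, the read path fetches the feed object plus the first page of entries in a single MapReduce job, capped at 50 entries, using the built-in riak_kv_mapreduce:map_object_value map function:

Page     = 50,
EntryIds = lists:sublist([<<"12345-67890">>, <<"12345-67891">>], Page),
Inputs   = [{<<"feeds">>, FeedKey} | [{<<"entries">>, Id} || Id <- EntryIds]],
{ok, [{0, Values}]} =
    riakc_pb_socket:mapred(Pid, Inputs,
                           [{map, {modfun, riak_kv_mapreduce, map_object_value},
                             none, true}]).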

Today, Superfeedr has served over 23 billion entries, with nearly one million more being published every hour. Their six-node Riak cluster (built on 16GB Linode slices) has allowed them to scale horizontally as their content and user base grow. “Riak is the right tool for us due to its scalability and always-on availability,” said Genestoux. “We have refined it to fit our needs and can rest assured that no data will ever be lost in our Riak ‘cave.’”

If you’re looking for a Google Reader replacement or interested in learning more about Superfeedr, check out their site: superfeedr.com/. For other examples of Riak in production, visit: basho.com/riak-users/

Basho

Analyzing Customer Support at Basho

July 30, 2012

Basho’s vision is to “be the best organization in the world.” This vision applies to every aspect of Basho as a company, including the quality of our products, our culture and of course our customer support. We strive to provide the best customer support possible.

We have chosen Zendesk as our help desk platform given its ease of use, customization, integration with other tools, and API. To dive into our deluge of almost two years of accumulated Zendesk data, I first ventured over to Zendesk’s API page to check out the existing clients. Currently, only a Ruby client is listed, but a Python library also exists.

As a data scientist desiring to do some complex analytics on our customer support data, I was stunned that no one had yet developed an R wrapper for Zendesk! Having used R since 2005 (mainly to analyze genetic and genomic data) as well as plenty of its libraries and the mailing list (often), I realized this was finally my chance to give back to the open source community, which is also very much in alignment with Basho’s commitment to open-source software.

So I wrote a Zendesk API wrapper in R called zendeskR (code on github).

As of this blog posting, zendeskR v0.2 supports six different API calls.

These calls access the data types that I was analyzing most often, but I will add more features, as well as update and maintain the package regularly. If there is an API call you would like to see supported, please feel free to shoot me an e-mail at the address noted in the package description.

The analytic possibilities are nearly endless by having all of this data in R. Some example questions that an organization could begin to answer with this data are:

  • What is the average time to close a ticket for each customer?
  • How many comments were posted to a ticket before it was finally resolved?
  • How much does it cost to support customer X based on support and engineering time?
  • Using sentiment analysis and a text corpus, what are the most frequently used words for tickets that receive a Good Satisfaction rating versus a Poor Satisfaction rating?
  • Based on past trends, how many tickets will be opened this month?
  • What is a developer advocate’s ticket closing rate and satisfaction rating?

Here’s a simple charting example that displays the number of tickets and users created by month.


To install zendeskR from CRAN, open an R console and type:

install.packages("zendeskR")

The analytical opportunities described above are almost endless, and at Basho we have only begun to scratch the surface. Ultimately, we aim to provide the highest quality products possible, coupled with the best customer support in the world, which we achieve in part through data-driven optimization of our support system. Aim high.

Tanya

When API Compatible Isn't

June 18, 2012

There’s no quicker way to learn the unspoken, de facto standard components of an API than to write a compatible replacement for an old implementation. I was reminded of this recently while tracking down an issue with Riak’s MapReduce APIs.

It has now been over a year since Riak Pipe was announced, and nearly nine months since it was released with Riak to take over MapReduce processing. The compatibility layer we wrote to run Riak MapReduce jobs on Riak Pipe accepts the same query specification, provides the same data to the same processing functions, and produces output formatted in the same manner as the previous system … unless you consider the number of results in each message to be part of the format.

Each message in the external streaming format carries a list of one or more results, marked by which phase produced them. Riak Pipe took full advantage of this for its naive first pass at compatibility: it put one result in each message. The system it replaced, though, chose to always deliver the entirety of a reduce result in one message, and to batch map results into chunks of 100.

Consuming these two correctly is no different: just accumulate all the results into bins for each phase. When time and space are concerned, however, the manner in which that binning is done makes all the difference in the world.

To wit: in a world where all reduce results are delivered in one go and map results are delivered in batches of 100, binning with orddict:append_list/3, an O(MN) operation (M = messages, N = results), is not horrible, because M is usually small. But in a world where each map or reduce result is delivered as its own message, M is equal to N, meaning we’re dealing with an O(N^2) operation, 100 to N times more expensive. Growing in time and space (garbage production) at the square of the size of your input is not a solution fit for large amounts of data.
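To see the difference, compare a binning loop built on orddict:append_list/3 with one that prepends each batch and reverses once at the end. This is a simplified sketch, not Riak’s actual accumulation code; messages are assumed to arrive as {Phase, Results} tuples.

%% Quadratic when every result arrives in its own message: each call to
%% orddict:append_list/3 walks and copies the list already binned for that phase.
bin_messages(Messages) ->
    lists:foldl(fun({Phase, Results}, Acc) ->
                        orddict:append_list(Phase, Results, Acc)
                end, orddict:new(), Messages).

%% Linear: prepend as messages arrive, then reverse once per phase.
bin_messages_linear(Messages) ->
    Binned = lists:foldl(
               fun({Phase, Results}, Acc) ->
                       orddict:update(Phase,
                                      fun(Old) -> lists:reverse(Results, Old) end,
                                      lists:reverse(Results),
                                      Acc)
               end, orddict:new(), Messages),
    orddict:map(fun(_Phase, Rs) -> lists:reverse(Rs) end, Binned).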

A couple of quick fixes in the 1.2 release have shrunk the growth factor back down to O(N) for the Erlang Protocol Buffers client library. Similar fixes have improved the same situation in the HTTP interface as well. Or, in simpler terms, Riak’s 1.2 release speeds up non-streamed MapReduce results by leaps and bounds.

We’ve been quite happy with the Riak Pipe implementation of Riak’s MapReduce system from usability and debugging standpoints. With all of the improvements we’ve made in the last year, and the few that we have planned for the next major release after 1.2 (delivery of multiple results per message, for example), we feel confident in deprecating the previous, “legacy” system in 1.2, in preparation for removing it entirely in the following release. Transitioning to exclusively Riak Pipe for MapReduce will clean up the codebase, simplifying maintenance and making way for future growth.

-Bryan

Riak 0.10 is full of great stuff

April 23, 2010

give the people what they want

We’ve received a lot of feedback in the past few months about the ways that Riak already serves people well, and the ways that they wish it could do more for them. Our latest release is an example of our response to that feedback.

Protocol Buffers

Riak has always been accessible via a clean and easy-to-use HTTP interface. We made that choice because HTTP is unquestionably the most well-understood and well-deployed protocol for data transfer. This has paid off well by making it simple for people to use many languages to interact with Riak, to get good caching behavior, and so on. However, that interface is not optimized for maximum throughput. Each request needs to parse several unknown-length headers, for example, which imposes a bit of load when you’re pointing a firehose of data into your cluster.

For those who would rather give up some of the niceties of HTTP to get a bit more speed, we have added a new client-facing interface to Riak. That interface uses the “protocol buffers” encoding scheme originally created by Google. We are beginning to roll out some client libraries with support for that new interface, starting with Python and Erlang but soon to encompass several other languages. You can expect them to trickle out over the next couple of weeks. Initial tests show a nice multiple of increased throughput on some workloads when switching to the new interface. We are likely to release some benchmarks to demonstrate this sometime soon. Give it a spin and let us know what you think.
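For a quick taste of the Erlang side, a session against the new interface looks roughly like this (a sketch assuming the riakc client library and the default protocol buffers port, 8087):

{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
Obj = riakc_obj:new(<<"groceries">>, <<"mine">>, <<"eggs & bacon">>, "text/plain"),
ok = riakc_pb_socket:put(Pid, Obj),
{ok, Fetched} = riakc_pb_socket:get(Pid, <<"groceries">>, <<"mine">>),
<<"eggs & bacon">> = riakc_obj:get_value(Fetched).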

Commit Hooks

A number of users (and a few key potential customers) have asked us how to either verify some aspects of their data (schemas, etc) on the way in to Riak, or else how to take some action (on a secondary object or otherwise) as a result of having stored it. Basically, people seem to want stored procedures.

Okay, you can have them.

Much like with our map/reduce functionality, your own functions can be expressed in either Erlang or JavaScript. As with any database’s stored procedures, you should keep them as simple as possible, or else you might place undue load on the cluster when trying to perform a lot of writes.
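As a flavor of what a hook can look like, here is a minimal sketch of an Erlang pre-commit hook that rejects any value that isn’t valid JSON. The module name is made up, mochijson2 is the JSON parser bundled with Riak, and wiring the hook into a bucket’s properties is omitted.

-module(validate_json).
-export([precommit/1]).

%% A pre-commit hook receives the object being written and must return the
%% (possibly modified) object, or fail / {fail, Reason} to reject the write.
precommit(Object) ->
    case catch mochijson2:decode(riak_object:get_value(Object)) of
        {'EXIT', _} -> {fail, <<"value is not valid JSON">>};
        _           -> Object
    end.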

Faster Key Listings

Listing all of the keys in a Riak bucket is fundamentally a bit more of a pain than any of the per-document operations, as it has to coordinate responses from many nodes. However, it doesn’t need to be all that bad.

The behavior of list_keys in Riak 0.10 is much faster than in previous releases, due both to more efficient tracking of vnode coverage and also to a much faster bloom filter. The vnode coverage aspect also makes it much more tolerant of node outages than before.

If you do use bucket key listings in your application, you should always do so in streaming mode (“keys=stream” query param if via HTTP) as doing otherwise necessitates building the entire list in memory before sending it to the client.
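The same advice holds for the client libraries; for example, with the Erlang protocol buffers client a streamed listing looks roughly like this sketch (assuming riakc’s stream_list_keys call, which delivers keys as messages rather than one big list):

list_keys_streaming(Pid, Bucket) ->
    {ok, ReqId} = riakc_pb_socket:stream_list_keys(Pid, Bucket),
    collect_keys(ReqId, []).

%% In practice you would process each batch as it arrives instead of
%% accumulating them all, which is the whole point of streaming.
collect_keys(ReqId, Acc) ->
    receive
        {ReqId, {keys, Keys}} -> collect_keys(ReqId, Keys ++ Acc);
        {ReqId, done}         -> {ok, Acc}
    end.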

Cleanliness and Modularity

A lot of other work went into this release as well. The internal tracking of dependencies is much cleaner, for those of you building from source (instead of just grabbing a pre-built release). We have also broken apart the core Erlang application into two pieces. There will be more written on the reasons and benefits of that later, but for now the impact is that you probably need to make some minor changes to your configuration files when you upgrade.

All in all, we’re excited about this release and hope that you enjoy using it.

- Justin

Why Vector Clocks Are Hard

April 5, 2010

A couple of months ago, Bryan wrote about vector clocks on this blog. The title of the post was “Why Vector Clocks are Easy”; anyone who read the post would realize that he meant that they’re easy for a client to use when talking to a system that implements them. For that reason, there is no reason to fear or avoid using a service that exposes the existence of vector clocks in its API.

Of course, actually implementing such a system is not easy. Two of the hardest things are deciding what an actor is (i.e., where incrementing and resolution happen, and which parties get their own field in the vector) and how to keep vclocks from growing without bound over time.

In Bryan’s example the parties that actually proposed changes (“clients”) were the actors in the vector clocks. This is the model that vector clocks are designed for and work well with, but it has a drawback: the width of the vectors grows proportionally with the number of clients. In a group of friends deciding when to have dinner this isn’t a problem, but in a distributed storage system the number of clients over time can be large. Once the vector clocks get that large, they not only take up more space on disk and in RAM, but also take longer to compare.

Let’s run through that same example again, but this time visualize the vector clocks throughout the scenario. If you don’t recall the whole story in the example, you should read Bryan’s post again as I am just going to show the data flow aspect of it here.

Vector Clocks by Example, in detail

Start with Alice’s initial message where she suggests Wednesday. (In the diagrams I abbreviate names, so that “Alice” will be “A” in the vclocks and so on for Ben, Cathy, and Dave.)

date = Wednesday
vclock = Alice:1

Ben suggests Tuesday:

date = Tuesday
vclock = Alice:1, Ben:1

Dave replies, confirming Tuesday:

date = Tuesday
vclock = Alice:1, Ben:1, Dave:1

Now Cathy gets into the act, suggesting Thursday:

date = Thursday
vclock = Alice:1, Cathy:1

Dave has two conflicting objects:

date = Tuesday
vclock = Alice:1, Ben:1, Dave:1

and

date = Thursday
vclock = Alice:1, Cathy:1

Dave can tell that these versions are in conflict, because neither vclock “descends” from the other. Luckily, Dave’s a reasonable guy and chooses Thursday. He also creates a vector clock that is a successor to all previously seen vector clocks. He emails this value back to Cathy.

date = Thursday
vclock = Alice:1, Ben:1, Cathy:1, Dave:2

So now when Alice asks Ben and Cathy for the latest decision, the replies she receives are, from Ben:

date = Tuesday
vclock = Alice:1, Ben:1, Dave:1

and from Cathy:

date = Thursday
vclock = Alice:1, Ben:1, Cathy:1, Dave:2

From this, she can tell that Dave intended his correspondence with Cathy to override the decision he made with Ben. All Alice has to do is show Ben the vector clock from Cathy’s message, and Ben will know that he has been overruled.

That worked out pretty well.
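To make “descends” concrete, here is a tiny sketch of the comparison (not Riak’s actual vclock module), treating a vclock as a list of {Actor, Counter} pairs:

%% A descends B when A has seen at least everything B has seen.
descends(A, B) ->
    lists:all(fun({Actor, CountB}) ->
                      proplists:get_value(Actor, A, 0) >= CountB
              end, B).

%% Two clocks conflict when neither descends the other.
conflict(A, B) ->
    (not descends(A, B)) andalso (not descends(B, A)).

With the values above, conflict([{alice,1},{ben,1},{dave,1}], [{alice,1},{cathy,1}]) returns true, which is exactly the conflict Dave saw, while his final clock [{alice,1},{ben,1},{cathy,1},{dave,2}] descends both.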

Making it Easier Makes it Harder

Notice that even in this short and simple example, the vector clock grew from nothing up to a four-pair mapping. In a real-world scenario with long-lived data, each data element would end up with a vector clock whose length is proportional to the number of clients that had ever modified it. That’s a (potentially unbounded) growth in storage volume and computation, so it’s a good idea to think about how to prevent it.

One straightforward idea is to make the servers handling client requests the “actors”, instead of representing the clients directly. Since any given system usually has a known, bounded number of servers over time, and usually has fewer servers than clients, this serves to reduce and cap the size of the vclocks. I know of at least two real systems that have tried this. In addition to keeping growth under control, this approach attracts people because it means you don’t expose “hard” things like vector clocks to clients at all.

Let’s think through the same example, but with that difference, to see how it goes. We’ll assume that a 2-server distributed system is coordinating the communication, with clients evenly distributed among them. We’ll be easy on ourselves and allow for client affinity, so for the duration of the session each client will use only one server. Alice and Dave happen to get server X, and Ben and Cathy get server Y. To avoid getting too complicated here I am not going to draw the server communication; instead I’ll just abstract over it by changing the vector clocks accordingly.

We’re fine through the first few steps:

The only real difference so far is that each update increments a vector clock field named for the client’s chosen server instead of the client itself. This will mean that the number of fields needed won’t grow without bound; it will be the same as the number of servers. This is the desired effect of the change.

We run into trouble, though, when Cathy sends her update:

In the original example, this is where a conflict was created. Dave sorted out the conflict, and everything was fine. With our new strategy, though, something else happened. Ben and Cathy were both modifying from the same original object. Since we used their server id instead of their own name to identify the change, Cathy’s message has the same vector clock as Ben’s! This means that Dave’s message (responding to Ben) appears to be a simple successor to Cathy’s… and we lose her data silently!

Clearly, this approach won’t work. Remember the two systems I mentioned that tried this approach? Neither of them stuck with it once they discovered that it can be expected to silently lose updates.

For vector clocks to have their desired effect without causing accidents such as this, the elements represented by the fields in the vclock must be the real units of concurrency. In a case like this little example or a distributed storage system, that means client identifiers, not server-based ones.

Just Lose a Little Information and Everything Will Be Fine

If we use client identifiers, we’re back in the situation where vector clocks will grow and grow as more clients use a system over time. The solution most people end up with is to “prune” their vector clocks as they grow.

This is done by adding a timestamp to each field, and updating it to the current local time whenever that field is incremented. This timestamp is never used for vclock comparison — that is purely a matter of logical time — but is only for pruning purposes.

This way, when a given vclock gets too big, you can remove fields, starting at the one that was updated longest ago, until you hit a size/age threshold that makes sense for your application.
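A pruning pass might look something like this sketch. Field order isn’t significant here, and a real implementation (Riak’s included) typically combines size and age thresholds rather than a single cap:

%% Each field is {Actor, Counter, LastUpdated}; LastUpdated is wall-clock time
%% recorded at the last increment and used only for pruning, never comparison.
prune(VClock, MaxLen) when length(VClock) =< MaxLen ->
    VClock;
prune(VClock, MaxLen) ->
    Sorted = lists:keysort(3, VClock),              %% oldest-updated first
    lists:nthtail(length(Sorted) - MaxLen, Sorted). %% keep the MaxLen newest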

But, you ask, doesn’t this lose information?

Yes, it does — but it won’t make you lose your data. The only case where this kind of pruning will matter at all is when a client holds a very old copy of the unpruned vclock and submits data descended from that. This will create a sibling (conflict) even though you might have been able to resolve it automatically if you had the complete unpruned vclock at the server. That is the tradeoff with pruning: in exchange for keeping growth under control, you run the chance of occasionally having to do a “false merge”… but you never lose data quietly, which makes this approach unequivocally better than moving the field identifiers off of the real client and onto the server.

Review

So, vclocks are hard: even with perfect implementation you can’t have perfect information about causality in an open system without unbounded information growth. Realize this and design accordingly.

Of course, that is just advice for people building brand new distributed systems or trying to improve existing ones. Using a system that exposes vclocks is still easy.

-Justin

Using Innostore with Riak

February 22, 2010

Innostore is an Erlang application that provides an API for storing and retrieving key/value data using the InnoDB storage system. This storage system is the same one used by MySQL for reliable, transactional data storage. It’s a proven, fast system and perfect for use with Riak if you have a large amount of data to store. Let’s take a look at how you can use Innostore as a backend for Riak.

(Note: I assume that you have successfully built an instance of Riak for your platform. If you built Riak from source in ~/riak, then set $RIAK to ~/riak/rel/riak.)

We first get started by grabbing a stable release of Innostore. You’ll need to download the source for a release from: https://github.com/basho/innostore

Looking in the “Tags & snapshots” section, you should download the source for the highest available RELEASE_* tag. In my case, RELEASE_4 is the most recent release, so I’ll grab the bz2 file associated with it.

Once I have the source code, it’s time to unpack it and build:

$ tar -xjf innostore-RELEASE_4.tar.bz2

$ cd innostore

$ make

Depending on the speed of the machine you are building on, this may take a few minutes to complete. At the end, you should see a series of unit tests run, with the output ending:
=======================================================

All 7 tests passed.

100222 7:43:58 InnoDB: Shutdown completed; log sequence number 90283

Cover analysis: /Users/dizzyd/src/public/innostore/.eunit/index.html

Now that we have successfully built Innostore, it’s time to install it into the Riak distribution:

$ ./rebar install target=$RIAK/lib

If you look in the $RIAK/lib directory now, you should see the innostore-4 directory alongside a bunch of .ez files and other directories which compose the Riak release.

Now, we need to tell Riak to use the Innostore driver as a backend. Make sure Riak is not running. Edit $RIAK/etc/app.config, setting the value for “storage_backend” as follows:

{storage_backend, innostore_riak},

In addition, append the configuration for the Innostore application after the SASL section:

{sasl, [ ....
]}, %% <-- make sure you add a comma here!!

{innostore, [
    {data_home_dir, "data/innodb"},      %% Where data files go
    {log_group_home_dir, "data/innodb"}, %% Where log files go
    {buffer_pool_size, 2147483648}       %% 2G in-memory buffer, in bytes
]}

You may need to adjust data_home_dir and log_group_home_dir to match where you want the InnoDB data and log files to be stored. If possible, make sure that the data and log dirs are on separate disks; this can yield much better performance.

Once you’ve completed the changes to $RIAK/etc/app.config, you’re ready to start Riak:

$ $RIAK/bin/riak console

As it starts up, you should see messages from Inno that end with something like:

100220 16:36:58 InnoDB: highest supported file format is Barracuda.

100220 16:36:58 Embedded InnoDB 1.0.3.5325 started; log sequence number 45764

That’s it! You’re ready to start using Riak for storing truly massive amounts of data.

Enjoy,

Dave Smith

Calling all Rubyists – Ripple has Arrived!

February 11, 2010

The Basho Dev. Team has been very excited about working with the Ruby community for some time. The only problem was that we were heads down on so many other projects that it was hard to make any progress. But, even with all that work on our plate, we were committed to showing some love to Rubyists and their frameworks.

Enter Sean Cribbs. As Sean details in his latest blog post, Basho and the stellar team at Sonian made it possible for him to hack on Ripple, a one-of-a-kind client library and object mapper for Riak. The full feature set for Ripple can be found on Sean’s blog, but highlights include a DataMapper-like API, an easy builder-style interface to Map/Reduce, and near-term integration with Rails 3.

And, in case you need any convincing that you should consider Riak as the primary datastore for your next Rails app, check out Sean’s earlier post, “Why Riak should power your next Rails app.”

So, if you’ve read enough and want to get your hands on Ripple, go check it out on GitHub.

If you don’t have Riak downloaded and built yet, get on it.

Lastly, you are going to be seeing a lot more Riak in your Ruby. So stay tuned because we have some big plans.

Best,

Mark

The Release of Riak 0.8 and JavaScript Map/Reduce

February 3, 2010

We are happy to announce the release of Riak 0.8, which is available for download immediately. Riak 0.8 features a number of enhancements to the core map/reduce machinery that will make Riak more accessible to a wider audience. The biggest enhancement is the ability to write map/reduce queries in JavaScript. We’re using our erlang_js project to integrate Mozilla’s SpiderMonkey engine directly into Riak to keep overhead to a minimum.

We’ve also built a spiffy REST API for submitting map/reduce queries. Queries are described in JSON and POST-ed to the Riak server. Results are sent back as JSON for your processing pleasure. And, the REST interface supports streaming results for large result sets, too.
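As a rough illustration (not lifted from the docs), the sketch below POSTs a JavaScript map/reduce job that counts the objects in a bucket named “goog”, using OTP’s httpc module and assuming the default HTTP port (8098) and the /mapred resource:

inets:start(),
Query =
    "{\"inputs\":\"goog\",\"query\":["
    "{\"map\":{\"language\":\"javascript\","
    "\"source\":\"function(value, keyData, arg){ return [1]; }\"}},"
    "{\"reduce\":{\"language\":\"javascript\","
    "\"source\":\"function(values, arg){ var s = 0; "
    "for (var i = 0; i < values.length; i++) { s += values[i]; } return [s]; }\"}}"
    "]}",
{ok, {{_, 200, _}, _Headers, Body}} =
    httpc:request(post,
                  {"http://127.0.0.1:8098/mapred", [], "application/json", Query},
                  [], []),
io:format("~s~n", [Body]).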

To kick it all off, we’ve put together a short screencast demonstrating how to use Riak’s flashy new features. You can watch it below, or view it on Vimeo. There’s also a slew of bug fixes and optimizations included in Riak 0.8. See the release notes for all the juicy details.

Download and enjoy!
