Tag Archives: schema

Webinar Recap and Q&A – Schema Design for Riak

December 8, 2010

Thank you to all who attended the webinar yesterday. The turnout was great, and the questions at the end were also very thoughtful. Since I didn’t get to answer very many, I’ve reviewed all of the questions below, in no particular order.

Q: Can you touch on upcoming filtering of keys prior to map reduce? Will it essentially replace the need for one to explicitly name the bucket/key in a M/R job? Does it require a bucket list-keys operation?

Key filters, in the upcoming 0.14 release, will allow you to logically select a population of keys from a bucket before running them through MapReduce. This will be faster than a full-bucket map since it only loads the objects you’re really interested in (the ones that pass the filter). It’s a great way to make use of meaningful keys that have structure to them. So yes, it does require a list-keys operation, but it doesn’t replace the need to be explicit about which keys to select; there are still many useful queries that can be done when the keys are known ahead of time.
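
To make that concrete, here’s a sketch of what a key-filtered job input might look like in 0.14 (the bucket name and filter predicate here are assumptions, not from the webinar):

    {"inputs": {"bucket": "users",
                "key_filters": [["ends_with", "@basho.com"]]},
     "query": [ ... ]}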

For more information on key-filters, see Kevin’s presentation on the upcoming MapReduce enhancements.

Q: How can you validate that you’ve reached a good/valid KV model when migrating a relational model?

The best way is to try out some models. The thing about schema design for Riak that turns your process on its head is that you design for optimizing queries, not for optimizing the data model. If your queries are efficient (single-key lookup as much as possible), you’ve probably reached a good model, but also weigh things like payload size, cost of updating, and difficulty manipulating the data in your application. If your design makes it substantially harder to build your application than a relational design, Riak may not be the right fit.

Q: Are there any “gotchas” when thinking of a bucket as we are used to thinking of a table?

Like tables, buckets can be used to group similar data together. However, buckets don’t automatically enforce data structure (columns with specified types, referential integrity) the way relational tables do; that part is still up to your application. You can, however, add precommit hooks to buckets to perform any data validation that your application shouldn’t have to handle itself.

Q: How would you create a ‘manual index’ in Riak? Doesn’t that need to always find unique keys?

One basic way to structure a manually-created index in Riak is to have a bucket specifically for the index. Keys in this bucket correspond to the exact value you are indexing (for fuzzy or incomplete values, use Riak Search). The objects stored at those keys have links or lists of keys that refer to the original object(s). Then you can find the original simply by following the link or using MapReduce to extract and find the related keys.

The example I gave in the webinar Q&A was indexing users by email. To create the index, I would use a bucket named users_by_email. If I wanted to look up my own user object by email, I’d try to fetch the object at users_by_email/sean@basho.com, then follow the link in it (something like </riak/users/237438-28374384-128>; riaktag="indexed") to find the actual data.
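
As a sketch over HTTP, creating that index entry and resolving it might look like this (the key and link target come from the example above; everything else, including the empty body, is an assumption):

    # Store an index object whose link points at the real user object
    curl -X PUT http://127.0.0.1:8098/riak/users_by_email/sean@basho.com \
      -H 'Link: </riak/users/237438-28374384-128>; riaktag="indexed"' \
      -H "Content-Type: text/plain" -d ''

    # Resolve the index and fetch the user in one request via link-walking
    curl http://127.0.0.1:8098/riak/users_by_email/sean@basho.com/users,indexed,1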

Whether those index values need to be unique is up to your application to design and enforce. For example, the index could be storing links to blog posts that have specific tags, in which case the index need not be unique.

To create the index, you’ll either have to perform multiple writes from your application (one for the data, one for the index), or add a commit hook to create and modify it for you.

Q: Can you compare/contrast buckets w/ Cassandra column families?

Cassandra has a very different data model from Riak, and you’ll want to consult with their experts to get a second opinion, but here’s what I know. Column families are a way to group related columns that you will always want to retrieve together, and they are something that you design up-front (changes require restarting the cluster to take effect). They are the closest thing to a relational table that Cassandra has.

In contrast, although you do use buckets to group similar data items, Riak’s buckets:

  1. Don’t understand or enforce any internal structure of the values,
  2. Don’t need to be created or designed ahead of time, but pop into existence when you first use them, and
  3. Don’t require a restart to be used.

Q: How would part sharing be achieved? (this is a reference to the example given in the webinar, Radiant CMS)

Radiant shares content parts only when specified by the template language, and always by inheritance from ancestor pages. If the layout contains <r:content part="sidebar" inherit="true" /> and the currently rendering page doesn’t have that content part, Radiant will look up the hierarchy until it finds it. This is one example of why it’s so important to have an efficient way to traverse the site hierarchy, and why I presented so many options.

Q: What is the max number of links an object can have for Link Walking?

There’s no cut-and-dried answer for this. Theoretically, you are limited only by storage space (disk and RAM) and the ability to retrieve the object from the desired interface. In a practical sense this means that the default HTTP interface limits you to around 100,000 links on a single object (based on previous discussions of the limits of HTTP packets and header lengths). Still, that many is not going to be reasonable to deal with in your application. In some applications we’ve seen links on the order of hundreds per object negatively impact link-walking performance. If you need that many, you’ll be better off exploring other designs.

Again, thanks for attending! Look for our next webinar coming in about a month.

Sean, Developer Advocate

Free Webinar – Schema Design for Riak – Dec 7 at 2PM Eastern

December 1, 2010

Moving applications to Riak involves a number of changes from the status quo of RDBMS systems, one of which is taking greater control over your schema design. You’ll have questions like: How do you structure data when you don’t have tables and foreign keys? When should you denormalize, add links, or create MapReduce queries? Where will Riak be a natural fit and where will it be challenging?

We invite you to join us for a free webinar on Tuesday, December 7 at 2:00PM Eastern Time to talk about Schema Design for Riak. We’ll discuss:

  • Freeing yourself of the architectural constraints of the “relational” mindset
  • Gaining a fuller understanding of your existing schema and its queries
  • Strategies and patterns for structuring your data in Riak
  • Tradeoffs of various solutions

We’ll address the above topics and more as we design a new Riak-powered schema for a web application currently powered by MySQL. The presentation will last 30 to 45 minutes, with time for questions at the end.

If you missed the previous version of this webinar in July, here’s your chance to see it! We’ll also use a different example this time, so even if you attended last time, you’ll probably learn something new.

Sorry, registration is closed! Video of the presentation will be posted on Vimeo after the webinar has ended.

The Basho Team

Webinar Recap – Schema Design for Riak

July 7, 2010

Thank you to all who attended the webinar yesterday. The turnout was great, and the questions at the end were also very thoughtful. Since I didn’t get to answer very many, I’ve reviewed the questions below, in no particular order. If you want to review the slides from yesterday’s presentation, they’re on Slideshare.

Q: You say listing keys is expensive. How are Map phases affected? Does the number of keys in a bucket have an effect on the expense of the operation? (paraphrased)

Listing keys (for a single bucket, there is no analog for the entire system) requires traversing the entire keyspace, even examining keys that don’t belong to the requested bucket. If your Map/Reduce query uses a whole bucket as its inputs, it will be nearly as expensive as listing keys back to the client; however, Map phases are executed in parallel on the nodes where the data lives, so you get the full benefits of parallelism and data-locality when it executes. The expense of listing keys is taken before any Map phase begins.

It bears reiterating that the expense of listing keys is proportional to the total number of keys stored (regardless of bucket). If your bucket has only 10 keys and you know what they are, it will probably be more efficient to list them as the inputs to your Map/Reduce query than to use the whole bucket as an input.
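
Concretely, the two input styles look like this (bucket and key names are hypothetical):

    # Whole bucket as input: pays the key-listing cost
    {"inputs": "tasks", "query": [ ... ]}

    # Explicit inputs: no key listing required
    {"inputs": [["tasks", "task1"], ["tasks", "task2"]], "query": [ ... ]}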

Q: How do you recommend modeling relationships that require a large number of associations (thousands or millions)?

This is difficult to do, and I won’t say there’s an easy or best answer. One idea that came up in the IRC room after the webinar was building a B-tree-like data structure that could be grown to fit the number of associations. This solves the one-to-many relationship, but will require extra handling and care on the part of your application. In some cases, where you only need to know membership in the relationship, a bloom filter might be appropriate. If you must model lots of highly-connected data, consider throwing a graph database in the mix; Riak is not going to fit all use-cases, and some models will be awkward.

Q: My company provides a Java web application and analytics solution that uses JDO to persist to and query from a relational database. Where would I start in integrating with Riak?

Since I haven’t done Java in a serious way for a long time, I can’t speak to the specifics of JDO, or how you might work on migrating away from it. However, I have found that most ORMs hide things from the developer that he/she should really be aware of — how the mapping is performed, what queries are executed, etc. You’ll likely have to look into the guts of how JDO persists and retrieves objects from the database, then step back and reevaluate what your top queries are and how Riak can help improve or simplify those operations. This is all in the theme of the webinar: Know your data!

Q: Is the source code for the example application and schema design available? (paraphrased)

No, there isn’t any sample code yet. You can play with the existing application (Lowdown) at lowdownapp.com. The other authors and I are seeking a few people to take over its development, and the initial group we contacted has indicated it will be open-sourced.

Q: Is there a way to get notified of changes in a bucket?

That’s not built into Riak. However, you could write a post-commit hook in Erlang that pushes a notification to RabbitMQ, for example, then have the interested parties consume messages from that queue.

Q: What mechanism does Riak have to deal with the unique user issue?

Riak has neither write locks nor transactions. There is no way to absolutely guarantee uniqueness without introducing an intermediary that acts as a single arbiter (and single point of failure). However, in cases where you aren’t experiencing high write-concurrency on the data in question, there are a few things you can do to simulate the uniqueness constraint:

  • Check for existence of the key before writing. In HTTP, this is as simple as a HEAD request. If the response is 404 Not Found, the object probably doesn’t exist.
  • Use a conditional PUT (in HTTP) when creating the object. The If-None-Match: * header should prevent you from blindly overwriting an existing key (sketched below).
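
Here’s a hedged sketch of that second option over HTTP (the bucket, key, and body are hypothetical):

    # Create the object only if the key does not already exist
    curl -i -X PUT http://127.0.0.1:8098/riak/users/sean@basho.com \
      -H "If-None-Match: *" \
      -H "Content-Type: application/json" \
      -d '{"name": "Sean"}'
    # A 412 Precondition Failed response means the key was already taken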

Neither of these solutions is bullet-proof, because all operations in Riak happen asynchronously. Remember that it’s eventually consistent, meaning that not all parts of the system may agree at all times, but they will converge on a single state over time. There will be corner cases where a key doesn’t exist when you check for it, the write via the conditional request succeeds, and you still end up creating an object in conflict. Caveat emptor.

Q: Are the intermediate results of Link and Map phases cached?

Yes, the results of both map and link phases are cached in a pretty naive LRU. The development team has plans to improve its behavior in future versions of Riak.

Q: Could you comment on commit hooks and what place they have, if any, in Riak schema design? Would it make sense to use hooks to build an index, e.g. of the keys in a bucket?

Yes, commit hooks are very useful in schema design. For example, you could use a pre-commit hook to validate the format of data before it’s stored. You could use post-commit hooks to send the data to external services (see above) or, as you suggest, build an index in another bucket. Building a secondary index reliably is complicated though, and it’s something I want to work on over the next few months.

Q: So if you have allow_mult=false, are there cases where Riak will return a 409 Conflict? Is the default that the last write wins?

Riak never returns a 409 Conflict status from the HTTP interface on writes. If you supply a conditional header (If-Match, for example) you might get a 412 Precondition Failed response if the ETag of the object to be modified doesn’t match the header. In general, it is Riak’s policy to accept writes regardless of the internal state of the object.

The “last write wins” behavior comes in two flavors: “clobbering” writes, and softer “show me the latest one” reads. The latter is the default behavior, in which siblings might occur internally (and the vector clock grow) but are not exposed to the client; instead, Riak returns the sibling with the latest timestamp at read/GET time and “throws away” new writes that are based on older (ancestor) vclocks. The former actually ignores vector clocks for the specified bucket, providing no guarantees of causal ordering of writes; to turn this behavior on, set the last_write_wins bucket property to true. Except in the most extreme cases, where you don’t mind clobbering things that were written since the last time you read, we recommend using the default behavior. If you set allow_mult=true, conflicting writes (objects with divergent vector clocks, neither a descendant of the other) will be exposed to the client with a 300 Multiple Choices response.
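
Both allow_mult and last_write_wins are bucket properties, set over HTTP like so (a sketch; the bucket name is hypothetical):

    # Enable sibling creation on the "users" bucket
    curl -X PUT http://127.0.0.1:8098/riak/users \
      -H "Content-Type: application/json" \
      -d '{"props": {"allow_mult": true}}'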

Again, thanks for attending! Look for our next webinar in about two weeks.

Sean

Free Webinar – Schema Design for Riak – July 8th at 2PM Eastern

June 30, 2010

Moving applications to Riak involves a number of changes from the status quo of RDBMS systems, one of which is taking greater control over your schema design. You’ll have questions like: How do you structure data when you don’t have tables and foreign keys? When should you denormalize, add links, or create map-reduce queries? Where will Riak be a natural fit and where will it be challenging?

We invite you to join us for a free webinar on Thursday, July 8 at 2:00PM Eastern Time to talk about Schema Design for Riak. We’ll discuss:

  • Freeing yourself of the architectural constraints of the “relational” mindset
  • Gaining a fuller understanding of your existing schema and its queries
  • Strategies and patterns for structuring your data in Riak
  • Tradeoffs of various solutions

We’ll address the above topics and more as we design a new Riak-powered schema for a web application currently powered by MySQL. The presentation will last 30 to 45 minutes, with time for questions at the end.

Sorry, registration is closed.

The Basho Team

Practical Map-Reduce – Forwarding and Collecting

This post is an example of how you can solve a practical querying problem in Riak with Map-Reduce.

The Problem

This query problem comes via Jakub Stastny, who is building a task/todolist app with Riak as the datastore. The question we want to answer is: for the logged-in user, find all of the tasks and their associated “tags”. The schema looks kind of like this:

Each of our domain concepts has its own bucket – users, tasks and tags. User objects have links to their tasks, tasks link to their tags, which also link back to the tasks. We’ll assume the data inside each object is JSON.

The Solution

We’re going to take advantage of these features of the map-reduce interface to make our query happen:

1. You can use link phases where you just need to follow links on an object.
2. Inputs to map phases can include arbitrary key-specific data.
3. You can have as many map, reduce, and link phases as you want in the same job.

Let’s construct the JSON job step-by-step, starting with the input – the user object.
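
A sketch of that input, assuming the user object lives in the users bucket under the key jakub:

    {"inputs": [["users", "jakub"]]}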

Next, we’ll use a link phase to find my tasks.
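
Such a link phase, following only the links that point into the tasks bucket, might look like this (a sketch):

    {"link": {"bucket": "tasks", "keep": false}}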

Now that we’ve got all of my tasks, we’ll use this map function to extract the relevant data we need from the task — including the links to its tags — and pass them along to the next map phase as the keydata. Basically it reads the task data as JSON, filters the object’s links to those only in the “tags” bucket, and then uses those links combined with our custom data to feed the next phase.
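
Here is a sketch of what that function could look like (the task’s name field and the empty tags list are assumptions about the JSON schema):

    function(v, keyData, arg) {
      // Parse the task object's JSON body.
      var task = JSON.parse(v.values[0].data);
      // Links live in the object's metadata as [bucket, key, tag] triples.
      var links = v.values[0].metadata.Links;
      var result = [];
      for (var i = 0; i < links.length; i++) {
        if (links[i][0] == "tags") {
          // Emit [bucket, key, keydata] so the next map phase receives
          // the task's data alongside each tag it points to.
          result.push([links[i][0], links[i][1], {name: task.name, tags: []}]);
        }
      }
      return result;
    }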

Here’s the phase that uses that function:
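
Sketched, with the function source abbreviated into the spec:

    {"map": {"language": "javascript",
             "source": "function(v, keyData, arg) { /* as above */ }"}}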

Now in the next map phase (which operates over the associated tags that we discovered in the last phase) we’ll insert the tag object’s parsed JSON contents into the “tags” list of the keydata object that was passed along from the previous phase. That modified object will become the input for our final reduce phase.

Here’s the phase specification for this phase (basically the same as the previous except for the function):
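
Sketched, function first, then the phase that carries it (source abbreviated):

    function(v, keyData) {
      // Parse the tag object and fold it into the task data that the
      // previous phase forwarded as keydata.
      var tag = JSON.parse(v.values[0].data);
      keyData.tags.push(tag);
      return [keyData];
    }

    {"map": {"language": "javascript",
             "source": "function(v, keyData) { /* as above */ }"}}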

Finally, we have a reduce phase to collate the resulting objects with their included tags into single objects based on the task name.
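
A sketch of such a reduce function (again keying on an assumed name field):

    function(values) {
      // Collate the forwarded task objects by name, merging their tags.
      var tasks = {};
      for (var i = 0; i < values.length; i++) {
        var t = values[i];
        if (tasks[t.name]) {
          tasks[t.name].tags = tasks[t.name].tags.concat(t.tags);
        } else {
          tasks[t.name] = t;
        }
      }
      var result = [];
      for (var name in tasks) { result.push(tasks[name]); }
      return result;
    }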

Our final phase needs to return the results, so we add "keep":true to the phase specification:
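
Sketched (source abbreviated):

    {"reduce": {"language": "javascript",
                "source": "function(values) { /* as above */ }",
                "keep": true}}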

Here’s the final format of our Map/Reduce job, with indentation for clarity:
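
Reassembled, the job might look like the sketch below; you would POST it to Riak’s /mapred resource with Content-Type: application/json. The bucket and key names, and the field names inside the functions, are assumptions, and the function sources are abbreviated:

    {
      "inputs": [["users", "jakub"]],
      "query": [
        {"link":   {"bucket": "tasks", "keep": false}},
        {"map":    {"language": "javascript",
                    "source": "function(v, keyData, arg) { /* forward tag links with task keydata */ }"}},
        {"map":    {"language": "javascript",
                    "source": "function(v, keyData) { /* merge tag into keydata */ }"}},
        {"reduce": {"language": "javascript",
                    "source": "function(values) { /* collate by task name */ }",
                    "keep": true}}
      ]
    }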

I input some sample data into my local Riak node, linked it up according to the schema described above and this is what I got:

Conclusion

What I’ve shown you above is just a taste of what you can do with Map/Reduce in Riak. If the above query became common in your application, you would want to store those phase functions we created as built-ins and refer to them by name rather than by their source. Happy querying!

Sean

Schema Design in Riak – Relationships

March 25, 2010

In the previous installment we looked at how your reasons for picking Riak affect how your schema should be designed, and how you might go about structuring your data at the individual object level. In this post we’ll look at how to design relationships on top of Riak.

Relationships? I thought Riak was key-value.

Even a mildly complicated application is going to have more than one type of data to store and manipulate. Those data are not islands, but have relationships to one another that make your application and its domain more than just arbitrary lists of things.

Yes, at its core, Riak is a key-value store or distributed hash-table. Because key-value stores are not very sophisticated at modeling more complicated relationships, Riak adds the concept of links between objects that are qualified by “tags” and can be easily queried using “link-walking”.

Now, the knee-jerk reaction would be to start adding links to everything. I want to show you that the problem of modeling relationships is a little more nuanced than just linking everything together, and that there are many ways to express the same relationship — each having tradeoffs that you need to consider.

Key correspondence

The easiest way to establish a relationship is to have some correspondence between the keys of the items. This works well for one-to-one and some one-to-many relationships and is easy to understand.


In the simplest case, your related objects have the same key, but different buckets. Lookups on this type of relationship are really efficient: you just change the bucket name to find the other item. How is this useful? Why not just store them together? One of the objects may get updated or read more often than the other. Their data types might be incompatible (a user profile and its avatar, for example). In either case, you get the benefit of the separation and fast access without needing link-walking or map-reduce; however, you really can only model one-to-one relationships with this pattern.

For one-to-many relationships, you might prefix or otherwise derive the key of the dependent (many) side of the relationship from the key of the parent side. This could be done as part of the bucket name, or as a simple prefix to the key. There are a couple of important tradeoffs to consider here. If you choose the bucket route, the number of buckets might proliferate in proportion to your data quantity. If you choose to prefix the key, it will be easy to find the parent object, but may be more difficult to find the dependent objects. The same reasons as for equivalent keys apply here — tight cohesion between the objects but different access patterns or internal structure.
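
Here’s a sketch of both schemes (all bucket and key names are hypothetical):

    # One-to-one: same key, different buckets
    /riak/users/sean      -> profile document
    /riak/avatars/sean    -> avatar image

    # One-to-many: parent key as a prefix of the dependent keys
    /riak/posts/first-post
    /riak/comments/first-post_1
    /riak/comments/first-post_2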

De-normalization / Composition

A core principle in relational schema design is factoring your relations so that they achieve certain “normal forms”, especially in one-to-many sorts of relationships. This means that if your domain concept “has” any number of something else, you’ll make a separate table for that thing and insert a foreign key that points back to the owner. In Riak, the opposite — de-normalizing (or composing) your data — often makes sense, both for the sake of performance and for ease of modeling.

How does this work? Let’s say your relational database had tables for people and for addresses. A person may have any number of addresses for home, work, mailing, etc, which are related back to the person by way of foreign key. In Riak, you would give your person objects an “addresses” attribute, in which you would store a list or hash of their addresses. Because the addresses are completely dependent on the person, they can be a part of the person object. If addresses are frequently accessed at the same time as the person, this also results in fewer requests to the database.
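
A sketch of such a composed person object (the field names are illustrative, not prescribed):

    {
      "name": "Sean",
      "addresses": {
        "home":    {"street": "100 Main St", "city": "Somewhere", "zip": "12345"},
        "mailing": {"street": "PO Box 42",   "city": "Somewhere", "zip": "12345"}
      }
    }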

Composition of related data is not always the best answer, even when a clear dependency exists; take, for instance, the Twitter model. Active users can quickly accrue thousands of tweets, which need to be aggregated in different combinations across followers’ timelines. Although the tweet concept is dependent on the user, it has more conceptual weight than the user does and needs to stand by itself. Furthermore, performance would suffer if you had to pull all of a user’s tweets every time you wanted to see their profile data.

Good candidates for composition are domain concepts that are very dependent on their “owner” concept and are limited in number. Again, knowing the shape of your data and the access pattern are essential to making this decision.

Links

Links are by far the most flexible (and popular) means for modeling relationships in Riak, and it’s easy to see why. They hold the promise of giving a loose graph-like shape to your relatively flat data and can cleanly represent any cardinality of relationship. Furthermore, link-walking is a really attractive way to quickly do queries that don’t need the full power of map-reduce (although Riak uses map-reduce behind the scenes to traverse the links). To establish a relationship, you simply add a link on one object pointing to the other.
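
Over HTTP, a link is just a Link: header stored with the object. A sketch (all names hypothetical):

    # Store a task that links to one of its tags
    curl -X PUT http://127.0.0.1:8098/riak/tasks/get-milk \
      -H 'Link: </riak/tags/errands>; riaktag="tagged"' \
      -H "Content-Type: application/json" \
      -d '{"name": "Get milk"}'

    # Walk from the task to its tags
    curl http://127.0.0.1:8098/riak/tasks/get-milk/tags,tagged,1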

Intrinsically, links have no notion of cardinality; establishing that is entirely up to your application. The primary difference is whether changing an association replaces or adds/removes links from the associated objects. Your application will also have to do some accounting about which objects are related to other objects, and establish links accordingly. Since links are uni-directional, stored on the source, and incoming links are not automatically detected, your application will need to add the reciprocal links when traversals in both directions are needed (resulting in multiple PUT operations). In some cases, especially in one-to-many relationships where the “many” side is not accessed independently, you might not need to establish the reciprocal link. Knowing how your data will be accessed by the application — both reads and writes — will help you decide.

Links have a few other limitations that you will need to consider. First, although the tag part of the link can technically be any Erlang term, using anything other than a binary string may make it difficult for HTTP-based clients to deal with them. Second, since links are stored directly with the object in its metadata, objects that have many links will be slower to load, store, and perform map-reduce queries over. In the HTTP/REST interface as well, there are practical limitations simply because of the method of transport. At the time of writing, mochiweb — the library that is the foundation of webmachine, Riak’s HTTP interface — uses an 8K buffer for incoming requests and limits the request to 1000 header fields (including repeated headers). This means that each Link: header you provide needs to be less than 8K in length, and assuming you use the typical headers when storing, you can have at most about 995 individual Link: headers. By the time you reach the approximately 150,000 links that this allows, you’ll probably want to consider other options anyway.

Hybrid solutions

At this point, you might be wondering how your data is going to fit any of these individual models. Luckily, Riak is flexible, so you can combine them to achieve a schema that best fits your needs. Here are a few possibilities.

Often, either the number of links on an object grows large or the need to update them independently of the source object arises. In our Twitter example, updating who you follow is a significantly different operation from updating your user profile, so it makes sense to store those separately, even though they are technically a relationship between two users. You might have the user profile object and list of followed users as key-correspondent objects, such as users/seancribbs and following/seancribbs (not taking into account your followers, of course).

In relational databases you typically use the concept of a “join table” to establish many-to-many relationships. The intermediary table holds foreign keys back to the associated objects, and each row represents one individual association, essentially an “adjacency list”. As your domain becomes more complex and nuanced, you might find that these relationships represented by join tables become domain concepts in their own right, with their own attributes. In Riak, you might initially establish many-to-many relationships as links on both sides. Similarly to the “join table” issue, the relationship in the middle might deserve an object of its own. Some examples that might warrant this design: qualified relationships (think “friends” on Facebook, or permissions in an ACL scheme), soft deletion, and history (tracking changes).

Key correspondence, composition and linking aren’t exclusive ways to think of relationships between data in your application, but tools to establish the semantics your domain requires. I’ve said it many times already, but carefully evaluate the shape of your data, the semantics you want to impose on it, and the operational profile of your application when choosing how you structure your data in Riak.

Sean Cribbs

Schema Design in Riak – Introduction

March 19, 2010

One of the challenges of switching from a relational database (Oracle, MySQL, etc.) to a “NoSQL” database like Riak is understanding how to represent your data within the database. This post is the beginning of a series of entries on how to structure your data within Riak in useful ways.

Choices have consequences

There are many reasons why you might choose Riak for your database, and I’m going to explain how a few of those reasons will affect the way your data is structured and manipulated.

One oft-cited reason for choosing Riak, and other alternative databases, is the need to manage huge amounts of data, collectively called “Big Data”. If you’re storing lots of data, you’re less likely to be doing online queries across large swaths of the data. You might be doing real-time aggregation in addition to calculating longer-term information in the background or offline. You might have one system collecting the data and another processing it. You might be storing loosely-structured information like log data or ad impressions. All of these use-cases call for low ceremony, high availability for writes, and little need for robust ways of finding data — perfect for a key/value-style scheme.

Another reason one might pick Riak is for flexibility in modeling your data. Riak will store any data you tell it to in a content-agnostic way — it does not enforce tables, columns, or referential integrity. This means you can store binary files right alongside more programmer-transparent formats like JSON or XML. Using Riak as a sort of “document database” (semi-structured, mostly de-normalized data) and “attachment storage” will have different needs than the key/value-style scheme — namely, the need for efficient online-queries, conflict resolution, increased internal semantics, and robust expressions of relationships.

The third reason for choosing Riak I want to discuss is related to CAP – in that Riak prefers A (Availability) over C (Consistency). In contrast to a traditional relational database system, in which transactional semantics ensure that a datum will always be in a consistent state, Riak chooses to accept writes even if the state of the object has been changed by another client (in the case of a race-condition), or if the cluster was partitioned and the state of the object diverges. These architecture choices bring to the fore something we should have been considering all along — how should our applications deal with inconsistency? Riak lets you choose whether to let the “last one win” or to resolve the conflict in your application by automated or human-assisted means.

More mindful domain modeling

What’s the moral of these three stories? When modeling your data in Riak, you need to understand better the shape of your data. You can no longer rely on normalization, foreign key constraints, secondary indexes and transactions to make decisions for you.

Questions you might ask yourself when designing your schema:

  • Will my access pattern be read-heavy, write-heavy, or balanced?
  • Which datasets churn the most? Which ones require more sophisticated conflict resolution?
  • How will I find this particular type of data? Which method is most efficient?
  • How independent/interrelated is this type of data with this other type of data? Do they belong together?
  • What is an appropriate key-scheme for this data? Should I choose my own or let Riak choose?
  • How much will I need to do online queries on this data? How quickly do I need them to return results?
  • What internal structure, if any, best suits this data?
  • Does the structure of this data promote future design modifications?
  • How resilient will the structure of the data be if requirements change? How can the change be effected without serious interruption of service?

I like to draw up my domain concepts on a pad of unlined paper or a whiteboard with boxes and arrows, then figure out how they map onto the database. Ultimately, the concepts define your application, so get those solid before you even worry about Riak.

Thinking non-relationally

Once you’ve thought carefully about the questions described above, it’s time to think about how your data will map to Riak. We’ll start from the small scale in this post (single domain concepts) and work our way out in future installments.

Internal structure

For a single class of objects in your domain, let’s consider the structure of that data. Here’s where you’re going to decide two interrelated issues — how this class of data will be queried and how opaque its internal structure will be to Riak.

The first issue, how the data will be queried, depends partly on how easy it is to intuit the key of a desired object. For example, if your data is user profiles that are mostly private, perhaps the user’s email or login name would be appropriate for the key, which would be easy to establish when the user logs in. However, if the key is not so easy to determine, or is arbitrary, you will need map-reduce or link-walking to find it.

The second issue, how opaque the data is to Riak, is affected by how you query but also by the nature of the data you’re storing. If you need to do intricate map-reduce queries to find or manipulate the data, you’ll likely want it in a form like JSON (or an Erlang term) so your map and reduce functions can reason about the data. On the other hand, if your data is something like an image or PDF, you don’t want to shoehorn that into JSON. If you’re in the situation where you need both a form that’s opaque to Riak, and to be able to reason about it with map-reduce, have your application add relevant metadata to the object. These are created using X-Riak-Meta-* headers in HTTP or riak_object:update_metadata/2 in Erlang.
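
For example, storing an image while attaching queryable metadata over HTTP might look like this (the header names after the X-Riak-Meta- prefix are up to you; the bucket, key, and values here are hypothetical):

    curl -X PUT http://127.0.0.1:8098/riak/images/logo.png \
      -H "Content-Type: image/png" \
      -H "X-Riak-Meta-Width: 300" \
      -H "X-Riak-Meta-Height: 120" \
      --data-binary @logo.png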

Rule of thumb: if it’s an abstract datatype, use a map-reduce-friendly format like JSON; if it’s a concrete form, use its original representation. Of course, there are exceptions to every rule, so think carefully about your modeling problem.

Consistency, replication, conflict resolution

The second consideration for each type of data is the access pattern and desired level of consistency. This is related to the questions above about read/write loads, churn, and conflicts.

Riak provides a few knobs you can turn at schema-design time and at request-time that relate to these issues. The first is allow_mult, or whether to allow recording of divergent versions of objects. In a write-heavy load or where clients are updating the same objects frequently, possibly at the same time, you probably want this on (true), which you can change by setting the bucket properties. The tradeoffs are that the vector clock may grow quickly and your application will need to decide how to resolve conflicts.

The second knob you can turn is the n_val, or how many replicas of each object to store, also a per-bucket setting. The default value is 3, which will work for many applications. If you need more assurance that your data is going to withstand failures, you might increase the value. If your data is non-critical or in large chunks, you might decrease the value to get greater performance. Knowing what to choose for this value will depend on an honest assessment of both the value of your data and operational concerns.

The third knob you can turn is per-request quorums. For reads, this is the R request parameter: how many replicas need to agree on the value for the read to succeed (the default is 2). For writes, there are two parameters, W and DW. W is how many replicas need to acknowledge the write request before it succeeds (default is 2). DW (durable writes) is how many replica backends need to confirm that the write finished before the entire write succeeds (default is 0). If you need greater consistency when reading or writing your data, you’ll want to increase these numbers. If you need greater performance and can sacrifice some consistency, decrease them. In any case, your R, W, and DW values can be at most n_val if you want the request to succeed.
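
On the HTTP interface these ride along as query parameters. A sketch (bucket and key hypothetical):

    # Fast, lower-consistency read: only one replica must answer
    curl "http://127.0.0.1:8098/riak/logs/2010-03-19?r=1"

    # Stricter write: all three replicas acknowledge, two durably
    curl -X PUT "http://127.0.0.1:8098/riak/logs/2010-03-19?w=3&dw=2" \
      -H "Content-Type: application/json" \
      -d '{"level": "info", "message": "..."}'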

What do these have to do with your data model? Fundamentally understanding the structure and purpose of your data will help you determine how you should turn these knobs. Some examples:

  • Log data: You’ll probably want low R and W values so that writes are accepted quickly. Because these are fire-and-forget writes, you won’t need allow_mult turned on. You might also want a low n_val, depending on how critical your data is.
  • Binary files: Your n_val is probably the most significant issue here, mostly depending on how large your files are and how many replicas of them you can tolerate (storage consumption).
  • JSON documents (abstract types): The defaults will work in most cases. Depending on how frequently the data is updated, and how many objects you update within a single conceptual operation in the application, you may want to enable allow_mult to prevent blind overwrites.

Sean Cribbs