Transaction Semantics in Apache Kudu

This document applies to Apache Kudu version 1.17.0. Please consult the documentation of the release that matches the version of your Kudu cluster.

This is a brief introduction to Kudu’s transaction and consistency semantics. For an in-depth technical exposition of most of what is mentioned here, and why it is correct, see the technical report [1].

Kudu’s transactional semantics and architecture are inspired by state-of-the-art systems such as Spanner [2] and Calvin [3]. Kudu builds upon decades of database research. The core philosophy is to make the lives of developers easier by providing transactions with simple, strong semantics, without sacrificing performance or the ability to tune to different requirements.

Kudu currently allows the following operations:

  • Write operations are sets of rows to be inserted, updated, or deleted in the storage engine, in a single tablet with multiple replicas. Write operations do not have separate "read sets", i.e. they do not scan existing data before performing the write. Each write is only concerned with the previous state of the rows that are about to change. Writes are not "committed" explicitly by the user. Instead, they are committed automatically by the system, after completion.

  • Write transactions are groups of write operations across potentially multiple tablets that are committed atomically upon the user’s request. Once each write operation within a transaction is complete, the user sends an explicit "commit" request to make the contents of the transaction visible to readers at a single timestamp.

  • Scans are read operations that can traverse multiple tablets and read information with different levels of consistency or correctness guarantees. Scans can perform time-travel reads, i.e. the user is able to set a scan timestamp in the past and get back results that reflect the state of the storage engine at that point in time.

Before We Begin
  • The term timestamp is mentioned several times to illustrate the functionality, but timestamps are an internal concept mostly invisible to users, except when setting a timestamp on a KuduScanner.

  • We generally refer to methods and classes of the C++ client. While the Java client mostly has analogous methods and classes, the exact names of the APIs may differ.

Single tablet write operations

Kudu employs Multiversion Concurrency Control (MVCC) and the Raft consensus algorithm [4]. Each write operation in Kudu must go through the tablet’s leader.

  1. The leader acquires all locks for the rows that it will change.

  2. The leader assigns the write a timestamp before the write is submitted for replication. This timestamp will be the write’s "tag" in MVCC.

  3. After a majority of replicas acknowledges the change, the actual rows are changed.

  4. After the changes are complete, they are made visible to concurrent writes and reads, atomically.

All replicas of a tablet observe the same order of operations, and if a write operation is assigned timestamp n and changes row x, a second write operation at timestamp m > n is guaranteed to see the new value of x.

This strict ordering of lock acquisition and timestamp assignment is enforced to be consistent across all replicas of a tablet through consensus. Therefore, write operations are totally ordered with regard to clock-assigned timestamps, relative to other writes in the same tablet. In other words, writes have strict-serializable semantics, though in an admittedly limited context. See this blog post for a little more context regarding what these semantics mean.

While Isolated and Durable in an ACID sense, multi-row write operations, even within a single tablet, are not fully Atomic unless they are a part of a multi-tablet write transaction. The failure of a single write in a batch operation does not roll back the operation, but produces per-row errors.
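
To make the per-row error behavior concrete, here is a minimal C++ client sketch (the table handle, the "key" column, and the local helper macro are illustrative assumptions): it applies a small batch, and any row that fails is reported through the session's pending errors while the remaining rows stay applied.

  #include <cstdint>
  #include <iostream>
  #include <vector>
  #include <kudu/client/client.h>

  using kudu::Status;
  using kudu::client::KuduError;
  using kudu::client::KuduInsert;
  using kudu::client::KuduSession;
  using kudu::client::KuduTable;
  using kudu::client::sp::shared_ptr;

  // Local helper (sketch only): bail out on the first non-OK status.
  #define RETURN_IF_NOT_OK(expr) \
    do { const Status _s = (expr); if (!_s.ok()) return _s; } while (false)

  // Apply a small batch; a row that fails surfaces as a per-row error rather
  // than rolling back the rows that succeeded.
  Status InsertBatch(const shared_ptr<KuduSession>& session,
                     const shared_ptr<KuduTable>& table) {
    RETURN_IF_NOT_OK(session->SetFlushMode(KuduSession::MANUAL_FLUSH));
    for (int32_t key = 0; key < 3; ++key) {
      KuduInsert* insert = table->NewInsert();   // ownership passes to Apply()
      RETURN_IF_NOT_OK(insert->mutable_row()->SetInt32("key", key));
      RETURN_IF_NOT_OK(session->Apply(insert));
    }
    Status flush_status = session->Flush();      // sends the buffered batch

    // Inspect per-row errors (e.g. a duplicate key); the other rows remain applied.
    std::vector<KuduError*> errors;
    bool overflowed;
    session->GetPendingErrors(&errors, &overflowed);
    for (KuduError* error : errors) {
      std::cerr << "row error: " << error->status().ToString() << std::endl;
      delete error;                              // caller owns the returned errors
    }
    return flush_status;
  }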

Multi-tablet write operations

Regardless of whether they are part of a transaction, writes from a Kudu client are optionally buffered in memory until they are flushed and sent to the server. When a client’s session flushes, the rows for each tablet are batched together and sent to the tablet server that hosts the leader replica of the tablet. Outside of a transaction, each of these batches represents a single, independent write operation with its own timestamp. However, the client API provides the option to impose some constraints on the assigned timestamps and on how writes to different tablets are observed by clients.
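
As a small illustration of this client-side buffering, the following C++ sketch (the buffer size is an arbitrary example value) puts a session into manual-flush mode so that applied rows accumulate in memory and are sent as per-tablet batches only when the session is flushed.

  #include <kudu/client/client.h>

  using kudu::Status;
  using kudu::client::KuduSession;
  using kudu::client::sp::shared_ptr;

  // Buffer writes on the client and send them as per-tablet batches on Flush().
  Status ConfigureBuffering(const shared_ptr<KuduSession>& session) {
    // MANUAL_FLUSH: Apply() only buffers rows; nothing reaches the servers
    // until Flush() (or FlushAsync()) is called.
    Status s = session->SetFlushMode(KuduSession::MANUAL_FLUSH);
    if (!s.ok()) return s;
    // Alternatively, AUTO_FLUSH_BACKGROUND flushes buffered rows asynchronously,
    // which can reorder writes to the same row (see Known Issues and Limitations).
    // Bound the amount of data buffered on the client (in bytes).
    return session->SetMutationBufferSpace(4 * 1024 * 1024);
  }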

Kudu, like Spanner, was designed to be externally consistent [5], preserving consistency when operations span multiple tablets and even multiple data centers. In practice this means that, if a write operation changes item x at tablet A, and a following write operation changes item y at tablet B, you might want to enforce that if the change to y is observed, the change to x must also be observed. There are many examples where this can be important. For example, if Kudu is storing clickstreams for further analysis, and two clicks follow each other but are stored in different tablets, subsequent clicks should be assigned subsequent timestamps so that the causal relationship between them is captured.

CLIENT_PROPAGATED Consistency

Kudu’s default external consistency mode is called CLIENT_PROPAGATED. See [1] for an extensive explanation on how it works. In brief, this mode causes writes from a single client to be automatically externally consistent. In the clickstream scenario above, if the two clicks are submitted by different client instances, the application must manually propagate timestamps from one client to the other for the causal relationship to be captured.

Timestamps between clients a and b can be propagated as follows:

Java Client

Call AsyncKuduClient#getLastPropagatedTimestamp() on client a, propagate the timestamp to client b, and call AsyncKuduClient#setLastPropagatedTimestamp() on client b.

C++ Client

Call KuduClient::GetLatestObservedTimestamp() on client a, propagate the timestamp to client b, and call KuduClient::SetLatestObservedTimestamp() on client b.
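
For example, a C++ sketch of the propagation step might look as follows; it assumes two already-constructed clients, client_a and client_b, and leaves the transport used to hand the timestamp between processes to the application.

  #include <cstdint>
  #include <kudu/client/client.h>

  using kudu::client::KuduClient;
  using kudu::client::sp::shared_ptr;

  // Propagate the latest timestamp observed by client_a to client_b, so that
  // operations issued through client_b are assigned timestamps higher than
  // anything client_a has already observed.
  void PropagateTimestamp(const shared_ptr<KuduClient>& client_a,
                          const shared_ptr<KuduClient>& client_b) {
    // After client_a's writes complete, grab the timestamp it observed.
    uint64_t ht_timestamp = client_a->GetLatestObservedTimestamp();

    // ...ship ht_timestamp to the process owning client_b (RPC, queue, etc.)...

    // Seed client_b so that its subsequent operations come "after" client_a's.
    client_b->SetLatestObservedTimestamp(ht_timestamp);
  }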

COMMIT_WAIT Consistency

Kudu also has an experimental implementation of an external consistency model used in Google’s Spanner, called COMMIT_WAIT. COMMIT_WAIT works by tightly synchronizing the clocks on all machines in the cluster. Then, when a write occurs, timestamps are assigned and the results of the write are not made visible until enough time has passed so that no other machine in the cluster could possibly assign a lower timestamp to a following write.

When using this mode, the latency of writes is tightly tied to the accuracy of clocks on all the cluster hosts, and using this mode with loose clock synchronization causes writes to take a long time to complete or even time out. See Known Issues and Limitations.

The COMMIT_WAIT consistency mode may be selected as follows:

Java Client

Call KuduSession#setExternalConsistencyMode(ExternalConsistencyMode.COMMIT_WAIT)

C++ Client

Call KuduSession::SetExternalConsistencyMode(COMMIT_WAIT)
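
In the C++ client this amounts to a single call on the session, sketched below (COMMIT_WAIT is experimental; see the note that follows).

  #include <kudu/client/client.h>

  using kudu::Status;
  using kudu::client::KuduSession;
  using kudu::client::sp::shared_ptr;

  // Opt a session's writes into COMMIT_WAIT external consistency (experimental).
  Status UseCommitWait(const shared_ptr<KuduSession>& session) {
    return session->SetExternalConsistencyMode(KuduSession::COMMIT_WAIT);
  }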

COMMIT_WAIT consistency is considered an experimental feature. It may return incorrect results, exhibit performance issues, or negatively impact cluster stability. Use in production environments is discouraged.

Multi-tablet write transactions

Kudu provides transactionality on top of the write operations, meaning all operations that occur within a transaction abide by the same consistency behavior described above.

When a client begins a transaction, Kudu automatically assigns the transaction a unique identifier (called a "transaction ID"). The identifier can be used to create sessions to which write operations are applied, potentially across multiple clients per transaction. Write operations applied in the context of a transaction are not visible until a client commits the transaction.
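
A minimal C++ sketch of this flow is shown below (the table handle, the "key" column, and the local helper macro are illustrative assumptions): it begins a transaction, applies a write through a session created from the transaction handle, and commits, at which point the write becomes visible.

  #include <kudu/client/client.h>

  using kudu::Status;
  using kudu::client::KuduClient;
  using kudu::client::KuduInsert;
  using kudu::client::KuduSession;
  using kudu::client::KuduTable;
  using kudu::client::KuduTransaction;
  using kudu::client::sp::shared_ptr;

  // Local helper (sketch only): bail out on the first non-OK status.
  #define RETURN_IF_NOT_OK(expr) \
    do { const Status _s = (expr); if (!_s.ok()) return _s; } while (false)

  // Begin a transaction, apply a write in its context, and commit. The row is
  // invisible to readers until Commit() succeeds.
  Status WriteTransactionally(const shared_ptr<KuduClient>& client,
                              const shared_ptr<KuduTable>& table) {
    shared_ptr<KuduTransaction> txn;
    RETURN_IF_NOT_OK(client->NewTransaction(&txn));   // Kudu assigns the transaction ID

    shared_ptr<KuduSession> session;
    RETURN_IF_NOT_OK(txn->CreateSession(&session));   // session bound to the transaction

    KuduInsert* insert = table->NewInsert();          // ownership passes to Apply()
    RETURN_IF_NOT_OK(insert->mutable_row()->SetInt32("key", 42));
    RETURN_IF_NOT_OK(session->Apply(insert));
    RETURN_IF_NOT_OK(session->Flush());               // check per-row errors before committing

    // Commit: a single commit timestamp makes the transaction's rows visible.
    // On failure, Rollback() aborts the transaction instead.
    return txn->Commit();
  }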

Kudu exposes the following APIs to pass a transaction identifier between potentially multiple processes:

Java Client

Call KuduTransaction#serialize(…) to get a byte-array representation of the transaction ID, and call KuduTransaction#deserialize(…) to get a KuduTransaction object.

C++ Client

Call KuduTransaction::Serialize(…) to get a byte-string representation of the transaction ID, and call KuduTransaction::Deserialize(…) to get a KuduTransaction object.
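
For example, in the C++ client the hand-off might be sketched as follows, assuming the default SerializationOptions and leaving the channel that carries the serialized bytes between processes to the application.

  #include <string>
  #include <kudu/client/client.h>

  using kudu::Status;
  using kudu::client::KuduClient;
  using kudu::client::KuduTransaction;
  using kudu::client::sp::shared_ptr;

  // Process 1: serialize the transaction handle into an opaque byte string.
  Status ExportTxn(const shared_ptr<KuduTransaction>& txn, std::string* token) {
    return txn->Serialize(token);
  }

  // Process 2: reconstruct a handle for the same transaction from that string.
  Status ImportTxn(const shared_ptr<KuduClient>& client,
                   const std::string& token,
                   shared_ptr<KuduTransaction>* txn) {
    return KuduTransaction::Deserialize(client, token, txn);
  }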

As writes are applied in the context of the transaction, each tablet that participates in the transaction automatically registers itself as a participant and is locked for further transactions until the transaction is complete. Per-row locks are taken as in the normal flow of a write operation, but these per-row locks are released once the write operation has been replicated, in favor of relying on the tablet-wide lock.

If multiple transactions attempt to lock the same tablet, Kudu uses the wait-die scheme to avoid deadlocks when locking the participant: if transaction b attempts to lock a tablet that is already locked by transaction a and a > b (i.e. a is newer than b), transaction b keeps retrying the lock until it succeeds (it "waits"). Otherwise, transaction b is automatically aborted (it "dies"), and it is up to the application to retry the transaction.

When the client commits a transaction, Kudu orchestrates a two-phase commit that assigns a "commit timestamp" to all write operations that is higher than each of their individually assigned timestamps. The mutations of the transaction are all visible to clients as of this commit timestamp. Additionally, subsequent write operations on all participants are guaranteed to be assigned timestamps higher than this timestamp. It is up to applications to ensure that all desired write operations have succeeded (i.e. did not return row errors) before committing.

As long as a transaction is expected to remain active, applications are expected to maintain at least one reference to that transaction’s handle; each handle can be configured to automatically send heartbeats to the Kudu cluster, indicating that the transacting application is still alive. By default, only the first transaction handle created for a transaction heartbeats, with the expectation that it is kept alive for the entire duration of the transaction. If instead the transaction’s handles are spread across multiple clients and only a single handle is expected to be kept alive at a time, heartbeating can be enabled with the following calls when serializing the handle for use in other processes.

Java Client

Call KuduTransaction.SerializationOptions#setEnableKeepalive(true)

C++ Client

Call KuduTransaction::SerializationOptions::enable_keepalive(true)
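
Combined with the serialization API above, enabling keepalive on a handle destined for another process might look like this C++ sketch (assuming the SerializationOptions overload of KuduTransaction::Serialize()).

  #include <string>
  #include <kudu/client/client.h>

  using kudu::Status;
  using kudu::client::KuduTransaction;
  using kudu::client::sp::shared_ptr;

  // Serialize a transaction handle such that a handle later deserialized from
  // 'token' sends keepalive heartbeats for the transaction.
  Status ExportTxnWithKeepalive(const shared_ptr<KuduTransaction>& txn,
                                std::string* token) {
    KuduTransaction::SerializationOptions options;
    options.enable_keepalive(true);
    return txn->Serialize(token, options);
  }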

Read Operations (Scans)

Scans are read operations performed by clients that may span one or more rows across one or more tablets. When a server receives a scan request, it takes a snapshot of the MVCC state and then proceeds in one of several ways depending on the read mode selected by the user. The mode may be selected as follows:

Java Client

Call KuduScannerBuilder#readMode(…​)

C++ Client

Call KuduScanner::SetReadMode()

The following modes are available in both clients:

READ_LATEST

This is the default read mode. The server takes a snapshot of the MVCC state and proceeds with the read immediately. Reads in this mode only yield 'Read Committed' isolation.

READ_AT_SNAPSHOT

In this read mode, scans are consistent and repeatable. A timestamp for the snapshot is selected either by the server, or set explicitly by the user through KuduScanner::SetSnapshotMicros(). Explicitly setting the timestamp is recommended; see Recommendations. The server waits until this timestamp is 'safe' (until all write operations that have a lower timestamp have completed and are visible). This delay, coupled with an external consistency method, will eventually allow Kudu to have full strict-serializable semantics for reads and writes. This is still a work in progress and some anomalies are still possible (see Known Issues and Limitations). Only scans in this mode can be fault-tolerant.
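
As an illustration, the following C++ sketch runs a fault-tolerant snapshot scan at a timestamp a few seconds in the past, as suggested in Recommendations (the five-second margin and the table handle are illustrative assumptions).

  #include <chrono>
  #include <cstdint>
  #include <iostream>
  #include <kudu/client/client.h>

  using kudu::Status;
  using kudu::client::KuduScanBatch;
  using kudu::client::KuduScanner;
  using kudu::client::KuduTable;

  // Run a repeatable, fault-tolerant scan at a snapshot a few seconds old.
  Status SnapshotScan(KuduTable* table) {
    // Pick a snapshot timestamp ~5 seconds in the past (microseconds since the Unix epoch).
    uint64_t snapshot_micros =
        std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::system_clock::now().time_since_epoch()).count() -
        5 * 1000 * 1000;

    KuduScanner scanner(table);
    Status s = scanner.SetReadMode(KuduScanner::READ_AT_SNAPSHOT);
    if (s.ok()) s = scanner.SetSnapshotMicros(snapshot_micros);  // explicit, back-dated snapshot
    if (s.ok()) s = scanner.SetFaultTolerant();   // only snapshot scans can be fault-tolerant
    if (s.ok()) s = scanner.Open();
    if (!s.ok()) return s;

    KuduScanBatch batch;
    while (scanner.HasMoreRows()) {
      Status b = scanner.NextBatch(&batch);
      if (!b.ok()) return b;
      for (KuduScanBatch::RowPtr row : batch) {
        std::cout << row.ToString() << std::endl;
      }
    }
    return Status::OK();
  }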

READ_YOUR_WRITES

This read mode relies on the state of a Kudu client to issue subsequent scan requests. When issuing a scan request in this read mode, a Kudu client provides the latest timestamp it has observed so far. The server selects a timestamp higher than the timestamp provided by the client that is also guaranteed to have all prior write operations committed and applied to the data. That translates into read-your-writes and read-your-reads behavior, which is useful in scenarios where subsequent scan requests should contain the data the client has seen so far while reading and writing during its current session. See KUDU-1704 for more details and references, and the sketch after the list below for a minimal scanner configuration. To summarize, this read mode:

  • ensures read-your-writes and read-your-reads session guarantees

  • minimizes the latency caused by waiting for outstanding write operations at the server side to complete

  • doesn’t guarantee linearizability
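
Configuring this mode is a single call on the scanner, as the following C++ sketch shows; the client attaches its latest observed timestamp to the request automatically.

  #include <kudu/client/client.h>

  using kudu::Status;
  using kudu::client::KuduScanner;

  // Configure a scanner to see everything this client has already written or
  // read in its session; the client sends its latest observed timestamp with
  // the scan request and the server picks a higher, fully-applied timestamp.
  Status ConfigureReadYourWrites(KuduScanner* scanner) {
    return scanner->SetReadMode(KuduScanner::READ_YOUR_WRITES);
  }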

Selecting between read modes requires balancing the trade-offs and making a choice that fits your workload. For instance, a reporting application that needs to scan the entire database might need to perform careful accounting operations, so that scan may need to be fault-tolerant, but probably doesn’t require a to-the-microsecond up-to-date view of the database. In that case, you might choose READ_AT_SNAPSHOT and select a timestamp that is a few seconds in the past when the scan starts. On the other hand, a machine learning workload that is not ingesting the whole data set and is already statistical in nature might not require the scan to be repeatable, so you might choose READ_LATEST instead for better scan performance.

Kudu also provides a replica selection API for users to choose at which replica the scan should be performed:

Java Client

Call KuduScannerBuilder#replicaSelection(…​)

C++ Client

Call KuduScanner::SetSelection(…​)

This API is a means to control locality and, in some cases, latency. The replica selection API has no effect on the consistency guarantees, which will hold no matter which replica is selected.
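
For example, the following C++ sketch directs a scan at the replica closest to the client (CLOSEST_REPLICA; LEADER_ONLY and FIRST_REPLICA are the other selection options in the C++ client), which can reduce network hops without affecting the consistency guarantees.

  #include <kudu/client/client.h>

  using kudu::Status;
  using kudu::client::KuduClient;
  using kudu::client::KuduScanner;

  // Direct the scan at the replica closest to the client (e.g. a co-located
  // tablet server) to reduce network hops; consistency is unaffected.
  Status PreferClosestReplica(KuduScanner* scanner) {
    return scanner->SetSelection(KuduClient::CLOSEST_REPLICA);
  }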

Known Issues and Limitations

There are currently several gaps and corner cases that prevent Kudu from being fully strictly serializable in some situations. The details are listed below, followed by some recommendations.

Writes

  • Support for COMMIT_WAIT is experimental and requires careful tuning of the time-synchronization protocol, such as NTP (Network Time Protocol). Its use is discouraged in production environments.

  • Multi-tablet transaction support currently only allows a tablet to participate in a single transaction at a time.

  • Multi-tablet transaction support currently only guarantees "read committed" semantics.

Reads (Scans)

  • On a leader change, READ_AT_SNAPSHOT scans at a snapshot whose timestamp is beyond the last write may also yield non-repeatable reads (see KUDU-1188). See Recommendations for a workaround.

  • Impala scans are currently performed as READ_LATEST and have no consistency guarantees.

  • In AUTO_FLUSH_BACKGROUND mode, or when using "async" flushing mechanisms, writes applied to a single client session may be reordered due to the concurrency of flushing the data to the server. This may be particularly noticeable if a single row is quickly updated with different values in succession. This phenomenon affects all client API implementations, including the transactional APIs. Workarounds are described in the API documentation of the respective implementations (see the docs for FlushMode or AsyncKuduSession). See KUDU-1767.

  • Dirty reads (i.e. reads within an uncommitted transaction) are not currently supported.

Recommendations

  • If repeatable snapshot reads are a requirement, use READ_AT_SNAPSHOT with a timestamp that is slightly in the past (between 2-5 seconds, ideally). This circumvents the anomaly described in Reads (Scans) above. Even when the anomaly has been addressed, back-dating the timestamp will always make scans faster, since they are unlikely to block.

  • If external consistency is a requirement and you decide to use COMMIT_WAIT, the time-synchronization protocol needs to be tuned carefully. Each operation will wait 2x the maximum clock error at the time of execution, which is usually in the 100 msec. to 1 sec. range with the default settings, maybe more. Thus, write operations would take at least 200 msec. to 2 sec. to complete when using the default settings and may even time out.

    • A local server should be used as a time server. We performed experiments using the default NTP time source available in a Google Compute Engine data center and were able to obtain a reasonably tight maximum error bound, usually varying between 12-17 milliseconds.

    • The following parameters should be adjusted in /etc/ntp.conf to tighten the maximum error:

      • server my_server.org iburst minpoll 1 maxpoll 8

      • tinker dispersion 500

      • tinker allan 0

The above parameters minimize the maximum error at the expense of the estimated error; the latter may end up orders of magnitude above its "normal" value. These parameters may also place a greater load on the time server, since they make the servers poll it much more frequently.

References

  • [1] David Alves, Todd Lipcon and Vijay Garg. Technical Report: HybridTime - Accessible Global Consistency with High Clock Uncertainty. April, 2014. http://users.ece.utexas.edu/~garg/pdslab/david/hybrid-time-tech-report-01.pdf

  • [2] James C. Corbett, Jeffrey Dean, Michael Epstein, Andrew Fikes, Christopher Frost, J. J. Furman, Sanjay Ghemawat, Andrey Gubarev, Christopher Heiser, Peter Hochschild, Wilson Hsieh, Sebastian Kanthak, Eugene Kogan, Hongyi Li, Alexander Lloyd, Sergey Melnik, David Mwaura, David Nagle, Sean Quinlan, Rajesh Rao, Lindsay Rolig, Yasushi Saito, Michal Szymaniak, Christopher Taylor, Ruth Wang, and Dale Woodford. 2012. Spanner: Google’s globally-distributed database. In Proceedings of the 10th USENIX conference on Operating Systems Design and Implementation (OSDI'12). USENIX Association, Berkeley, CA, USA, 251-264.

  • [3] Alexander Thomson, Thaddeus Diamond, Shu-Chun Weng, Kun Ren, Philip Shao, and Daniel J. Abadi. 2012. Calvin: fast distributed transactions for partitioned database systems. In Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD '12). ACM, New York, NY, USA, 1-12. DOI=10.1145/2213836.2213838 http://doi.acm.org/10.1145/2213836.2213838

  • [4] Diego Ongaro and John Ousterhout. 2014. In search of an understandable consensus algorithm. In Proceedings of the 2014 USENIX conference on USENIX Annual Technical Conference (USENIX ATC'14), Garth Gibson and Nickolai Zeldovich (Eds.). USENIX Association, Berkeley, CA, USA, 305-320.

  • [5] Kwei-Jay Lin. "Consistency issues in real-time database systems." In Proceedings of the Twenty-Second Annual Hawaii International Conference on System Sciences, Vol. II: Software Track, pp. 654-661, 3-6 January 1989. doi: 10.1109/HICSS.1989.48069