Paper Interpretation - A Critique of ANSI SQL Isolation Levels
This article reorganizes the structure of the original paper to make it easier to understand (at least in my opinion), quotes some important content from its references, and adds my personal understanding of some confusing details (which may be wrong).
Terminology
- A transaction groups a set of actions that transform the database from one consistent state to another.
- A history models the interleaved execution of a set of transactions as a linear ordering of their actions, such as Reads and Writes (i.e., inserts, updates, and deletes) of specific data items.
- The actions of committed transactions in the history are represented as graph nodes. If action `op1` of transaction `T1` conflicts with and precedes action `op2` of transaction `T2` in the history, then the pair `<op1, op2>` becomes an edge in the dependency graph.
- Two histories are equivalent if they have the same committed transactions and the same dependency graph.
- A phenomenon is the broad interpretation of an anomaly; an anomaly is the strict interpretation of a phenomenon.
- Let's take dirty read as an example. ANSI SQL defines dirty read as follows: Transaction `T1` modifies a data item. Another transaction `T2` then reads the data item before `T1` executes a COMMIT or ROLLBACK. If `T1` then performs a ROLLBACK, `T2` has read a data item that was never committed and so never really existed.
  - The broad interpretation is `P1: w1[x]...r2[x]...((c1 or a1) and (c2 or a2) in any order)`, where `P` is the capital first character of the word phenomenon.
  - The strict interpretation is `A1: w1[x]...r2[x]...(a1 and c2 in any order)`, where `A` is the capital first character of the word anomaly. `P1` is a much broader interpretation than `A1`, since it prohibits all four possible commit-abort pairs by transactions `T1` and `T2`, while `A1` only prohibits two of the four.
- A predicate lock covers all data items satisfying a given search condition, and is used to resolve conflicting actions on sets of data items.
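The history and dependency-graph machinery above is easy to prototype. Below is a toy Python sketch (the tuple encoding of actions and the function names are my own, not the paper's): it builds the conflict edges between committed transactions and checks for a cycle, the classical test for non-serializability.

```python
# An action is ("r"|"w", txn, item) or ("c"|"a", txn).
# Two data actions conflict if they are from different transactions,
# touch the same item, and at least one of them is a write.
def dependency_graph(history, committed):
    edges = set()
    for i, a in enumerate(history):
        for b in history[i + 1:]:
            if (a[0] in "rw" and b[0] in "rw"
                    and a[1] != b[1] and a[2] == b[2]
                    and "w" in (a[0], b[0])
                    and a[1] in committed and b[1] in committed):
                edges.add((a[1], b[1]))
    return edges

def has_cycle(edges):
    graph = {}
    for u, v in edges:
        graph.setdefault(u, set()).add(v)
    state = {}  # 1 = on current path, 2 = fully explored

    def visit(n):
        if state.get(n) == 1:
            return True
        if state.get(n) == 2:
            return False
        state[n] = 1
        if any(visit(m) for m in graph.get(n, ())):
            return True
        state[n] = 2
        return False

    return any(visit(n) for n in list(graph))

# The paper's H1 (T2 reads an inconsistent total): its dependency
# graph contains the cycle T1 -> T2 -> T1, so H1 is not serializable.
H1 = [("r", 1, "x"), ("w", 1, "x"), ("r", 2, "x"), ("r", 2, "y"),
      ("c", 2), ("r", 1, "y"), ("w", 1, "y"), ("c", 1)]
edges = dependency_graph(H1, committed={1, 2})
```

With this encoding, two histories are equivalent exactly when `dependency_graph` returns the same edge set for the same committed transactions.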
Isolation Levels Defined By GLPT & Locking Isolation Levels
Isolation Levels Defined By GLPT
Granularity of Locks and Degrees of Consistency in a Shared Data Base, by J.N. Gray, R.A. Lorie, G.R. Putzolu and I.L. Traiger, defined Degrees of Consistency in three ways: locking, data-flow graphs, and anomalies. The anomaly definitions were too vague, and the authors are still criticized for that aspect of the definitions in Transaction Processing: Concepts and Techniques, by Jim Gray and Andreas Reuter. Only the more mathematical definitions in terms of histories and dependency graphs or locking have stood the test of time.
- Definition 1 (anomalies):
  - Degree 3: Transaction `T` sees degree 3 consistency if:
    - `T` does not overwrite dirty data of other transactions.
    - `T` does not commit any writes until it completes all its writes (i.e., until the end of transaction, EOT).
    - `T` does not read dirty data from other transactions.
    - Other transactions do not dirty any data read by `T` before `T` completes.
  - Degree 2: Transaction `T` sees degree 2 consistency if:
    - `T` does not overwrite dirty data of other transactions.
    - `T` does not commit any writes before EOT.
    - `T` does not read dirty data of other transactions.
  - Degree 1: Transaction `T` sees degree 1 consistency if:
    - `T` does not overwrite dirty data of other transactions.
    - `T` does not commit any writes before EOT.
  - Degree 0: Transaction `T` sees degree 0 consistency if:
    - `T` does not overwrite dirty data of other transactions.
- Definition 2 (locking):
  - Degree 3: Transaction `T` observes degree 3 lock protocol if:
    - `T` sets a long exclusive lock on any data it dirties.
    - `T` sets a long share lock on any data it reads.
  - Degree 2: Transaction `T` observes degree 2 lock protocol if:
    - `T` sets a long exclusive lock on any data it dirties.
    - `T` sets a (possibly short) share lock on any data it reads.
  - Degree 1: Transaction `T` observes degree 1 lock protocol if:
    - `T` sets a long exclusive lock on any data it dirties.
  - Degree 0: Transaction `T` observes degree 0 lock protocol if:
    - `T` sets a (possibly short) exclusive lock on any data it dirties.
- Definition 2' (locking):
  - Degree 3: `T` is well formed and `T` is two phase.
  - Degree 2: `T` is well formed and `T` is two phase with respect to writes.
  - Degree 1: `T` is well formed with respect to writes and `T` is two phase with respect to writes.
  - Degree 0: `T` is well formed with respect to writes.
- Definition 3 (data-flow graphs):
  - Suppose that transaction `T` performs action `a` on entity `e` at some step in the schedule, that transaction `T'` performs action `a'` on entity `e` at a later step in the schedule, and that `T` does not equal `T'`. Then:
    - `T <<< T'` if `a` is a write and `a'` is a write, or `a` is a write and `a'` is a read, or `a` is a read and `a'` is a write.
    - `T << T'` if `a` is a write and `a'` is a write, or `a` is a write and `a'` is a read.
    - `T < T'` if `a` is a write and `a'` is a write.
  - Let `<*` be the transitive closure of `<`, and similarly for `<<*` and `<<<*`.
  - A schedule is degree 1 (2 or 3) consistent if and only if the relation `<*` (`<<*` or `<<<*`) is a partial order.
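Definition 3 can be checked mechanically. Below is a minimal sketch (the encoding of schedule steps as `(action, transaction, entity)` tuples is mine): it builds each relation from its conflict pairs, and uses acyclicity as the test that the transitive closure is a partial order.

```python
from collections import defaultdict

# Build the <, <<, or <<< relation of Definition 3 from a schedule,
# parameterized by which ordered conflict pairs count as edges.
def relation(schedule, conflicts):
    edges = set()
    for i, (a, t, e) in enumerate(schedule):
        for a2, t2, e2 in schedule[i + 1:]:
            if t != t2 and e == e2 and (a, a2) in conflicts:
                edges.add((t, t2))
    return edges

WW = {("w", "w")}                    # defines <   (degree 1)
WW_WR = WW | {("w", "r")}            # defines <<  (degree 2)
WW_WR_RW = WW_WR | {("r", "w")}      # defines <<< (degree 3)

def is_partial_order(edges):
    # The transitive closure of `edges` is a (strict) partial order
    # iff the edge relation is acyclic; check via reachability.
    graph = defaultdict(set)
    for u, v in edges:
        graph[u].add(v)

    def reaches(start, goal, seen):
        for m in graph.get(start, ()):
            if m == goal or (m not in seen and reaches(m, goal, seen | {m})):
                return True
        return False

    return not any(reaches(t, t, set()) for t in list(graph))

# The data actions of H1: degree-2 consistent (<<* is a partial order)
# but not degree-3 consistent (<<<* contains the cycle T1 -> T2 -> T1).
H1 = [("r", 1, "x"), ("w", 1, "x"), ("r", 2, "x"), ("r", 2, "y"),
      ("r", 1, "y"), ("w", 1, "y")]
```

Note how the nesting of the conflict sets mirrors the nesting of the degrees: every degree-3 consistent schedule is automatically degree-2 and degree-1 consistent.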
Locking Isolation Levels
The ANSI isolation levels are related to the behavior of lock schedulers.
Table 2: Degrees of Consistency and Locking Isolation Levels defined in terms of locks
Consequently, it is necessary to differentiate isolation levels defined in terms of locks from the ANSI SQL phenomena-based isolation levels. To make this distinction, the levels in Table 2 are labeled with the "Locking" prefix, as opposed to the "ANSI" prefix of Table 1.
| Consistency Level = Locking Isolation Level | Read Locks on Data Items and Predicates (the same unless noted) | Write Locks on Data Items and Predicates (always the same) |
|---|---|---|
| Degree 0 | none required | Well-formed Writes |
| Degree 1 = Locking READ UNCOMMITTED | none required | Well-formed Writes <br> Long duration Write locks |
| Degree 2 = Locking READ COMMITTED | Well-formed Reads <br> Short duration Read locks (both) | Well-formed Writes <br> Long duration Write locks |
| Cursor Stability | Well-formed Reads <br> Read locks held on current of cursor <br> Short duration Read Predicate locks | Well-formed Writes <br> Long duration Write locks |
| Locking REPEATABLE READ | Well-formed Reads <br> Long duration data-item Read locks <br> Short duration Read Predicate locks | Well-formed Writes <br> Long duration Write locks |
| Degree 3 = Locking SERIALIZABLE | Well-formed Reads <br> Long duration Read locks (both) | Well-formed Writes <br> Long duration Write locks |
The fundamental serialization theorem is that well-formed two-phase locking guarantees serializability — each history arising under two-phase locking is equivalent to some serial history. Conversely, if a transaction is not well-formed or two-phased then, except in degenerate cases, non-serializable execution histories are possible [EGLT].
We say Locking SERIALIZABLE == Serializable even though it is well known that a locking scheduler does not admit all possible Serializable histories.
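The theorem's two hypotheses — well-formed and two-phase — are properties of a single transaction's action sequence and can be checked mechanically. A toy sketch (the `sl`/`xl`/`ul` action notation for share lock, exclusive lock, and unlock is my own):

```python
# Check Definition 2' properties of one transaction's action sequence,
# given as (op, item) pairs with op in {"sl", "xl", "ul", "r", "w"}.
def well_formed(actions, writes_only=False):
    held = set()  # (lock_kind, item) pairs currently held
    for op, item in actions:
        if op in ("sl", "xl"):
            held.add((op, item))
        elif op == "ul":
            held = {(l, i) for (l, i) in held if i != item}
        elif op == "w" and ("xl", item) not in held:
            return False  # write not covered by an exclusive lock
        elif (op == "r" and not writes_only
                and ("xl", item) not in held and ("sl", item) not in held):
            return False  # read not covered by any lock
    return True

def two_phase(actions):
    unlocked = False
    for op, _ in actions:
        if op == "ul":
            unlocked = True
        elif op in ("sl", "xl") and unlocked:
            return False  # lock acquired after the first unlock
    return True

# Degree 3: well-formed and two-phase.
T_good = [("xl", "x"), ("w", "x"), ("sl", "y"), ("r", "y"),
          ("ul", "x"), ("ul", "y")]
# Well-formed but NOT two-phase: re-locks after an unlock.
T_bad = [("xl", "x"), ("w", "x"), ("ul", "x"),
         ("sl", "y"), ("r", "y"), ("ul", "y")]
```

The `writes_only` flag corresponds to "well formed with respect to writes" in Definition 2', i.e. the lower degrees.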
ANSI SQL Isolation Levels
Phenomena & Anomaly
- Dirty Read: Transaction `T1` modifies a data item. Another transaction `T2` then reads that data item before `T1` performs a COMMIT or ROLLBACK. If `T1` then performs a ROLLBACK, `T2` has read a data item that was never committed and so never really existed.
- Non-repeatable or Fuzzy Read: Transaction `T1` reads a data item. Another transaction `T2` then modifies or deletes that data item and commits. If `T1` then attempts to reread the data item, it receives a modified value or discovers that the data item has been deleted.
- Phantom: Transaction `T1` reads a set of data items satisfying some `<search condition>`. Transaction `T2` then creates data items that satisfy `T1`'s `<search condition>` and commits. If `T1` then repeats its read with the same `<search condition>`, it gets a set of data items different from the first read.
| Phenomena | Histories |
|---|---|
| Dirty Read | P1: w1[x]...r2[x]...((c1 or a1) and (c2 or a2) in any order) <br> A1: w1[x]...r2[x]...(a1 and c2 in any order) |
| Non-repeatable or Fuzzy Read | P2: r1[x]...w2[x]...((c1 or a1) and (c2 or a2) in any order) <br> A2: r1[x]...w2[x]...c2...r1[x]...c1 |
| Phantom | P3: r1[P]...w2[y in P]...((c1 or a1) and (c2 or a2) in any order) <br> A3: r1[P]...w2[y in P]...c2...r1[P]...c1 |
Table 1: ANSI SQL Isolation Levels Defined in terms of the Three Original Phenomena
| Isolation Level | P1 (or A1) Dirty Read | P2 (or A2) Fuzzy Read | P3 (or A3) Phantom |
|---|---|---|---|
| ANSI READ UNCOMMITTED | Possible | Possible | Possible |
| ANSI READ COMMITTED | Not Possible | Possible | Possible |
| ANSI REPEATABLE READ | Not Possible | Not Possible | Possible |
| ANOMALY SERIALIZABLE | Not Possible | Not Possible | Not Possible |
Disallowing the three phenomena does not imply serializability, so Table 1 calls it ANOMALY SERIALIZABLE.
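The broad interpretations are just patterns over histories, so they can be detected mechanically. Below is a simplified sketch (my own parser for the paper's notation): it flags a conflicting access by another transaction while `T1` is still uncommitted. Predicates are out of scope, so `P3` is not handled.

```python
import re

# Each broad phenomenon is "first op by T1, then a conflicting second
# op by another transaction, before T1 commits or aborts".
PATTERNS = {
    "P0": ("w", "w"),  # Dirty Write: w1[x]...w2[x]... before T1 ends
    "P1": ("w", "r"),  # Dirty Read:  w1[x]...r2[x]... before T1 ends
    "P2": ("r", "w"),  # Fuzzy Read:  r1[x]...w2[x]... before T1 ends
}

ACTION = re.compile(r"([rw])(\d+)\[(\w+)\]")

def exhibits(history, phenomenon):
    """history is a space-separated string, e.g. 'w1[x] r2[x] a1 c2'."""
    ops = history.split()
    first, second = PATTERNS[phenomenon]
    for i, op in enumerate(ops):
        m = ACTION.fullmatch(op)
        if not m or m.group(1) != first:
            continue
        t1, item = m.group(2), m.group(3)
        for later in ops[i + 1:]:
            if later in ("c" + t1, "a" + t1):
                break  # T1 finished first; not this phenomenon
            m2 = ACTION.fullmatch(later)
            if (m2 and m2.group(1) == second
                    and m2.group(2) != t1 and m2.group(3) == item):
                return True
    return False
```

For example, `exhibits("w1[x] r2[x] a1 c2", "P1")` is true — this is exactly the history that `A1` also prohibits — while the commit-separated `"w1[x] c1 r2[x] c2"` triggers neither.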
Enhanced ANSI SQL Isolation Levels
Remark 3: ANSI SQL isolation should be modified to require P0 for all isolation levels.
Dirty Write: Transaction T1 modifies a data item. Another transaction T2 then further modifies that data item before T1 performs a COMMIT or ROLLBACK. If T1 or T2 then performs a ROLLBACK, it is unclear what the correct data value should be. The broad interpretation of this is:
P0: w1[x]...w2[x]...((c1 or a1) and (c2 or a2) in any order)
Why are Dirty Writes bad?
- They can violate database consistency. Assume there is a constraint between `x` and `y` (e.g., `x = y`), and `T1` and `T2` each maintain the consistency of the constraint if run alone. However, the constraint can easily be violated if the two transactions write `x` and `y` in different orders, which can only happen if there are Dirty Writes. For example, consider the history `w1[x] w2[x] w2[y] c2 w1[y] c1`: `T1`'s change to `y` and `T2`'s change to `x` both "survive". If `T1` writes 1 to both `x` and `y` while `T2` writes 2, the result is `x=2, y=1`, violating `x = y`.
- Without protection from `P0`, the system cannot undo updates by restoring before-images. Consider the history `w1[x] w2[x] a1`. You don't want to undo `w1[x]` by restoring its before-image of `x`, because that would wipe out `w2`'s update. But if you don't restore the before-image, and transaction `T2` later aborts, you can't undo `w2[x]` by restoring its before-image either! (We assume that the system remembers at most one before-image per variable (e.g., `x`) for each transaction.)
So even the weakest locking systems hold long duration write locks; otherwise, their recovery systems would fail. This leads to Remark 3.
Remark 3: ANSI SQL isolation should be modified to require P0 for all isolation levels.
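The recovery argument can be made concrete with a toy model (the `Store` class is mine; real systems use undo logs, but the one-before-image-per-item assumption matches the text): after the dirty write `w1[x] w2[x]`, neither abort can be undone correctly by restoring before-images.

```python
# Simulate undo-by-before-image in the presence of a dirty write.
class Store:
    def __init__(self, data):
        self.data = data
        self.before = {}  # (txn, item) -> value before txn's first write

    def write(self, txn, item, value):
        self.before.setdefault((txn, item), self.data[item])
        self.data[item] = value

    def abort(self, txn):
        for (t, item), old in self.before.items():
            if t == txn:
                self.data[item] = old  # blindly restore the before-image

db = Store({"x": 0})
db.write(1, "x", 10)       # w1[x]
db.write(2, "x", 20)       # w2[x] -- a dirty write
db.abort(1)                # a1: restores x=0, wiping out T2's update
x_after_a1 = db.data["x"]
db.abort(2)                # if T2 now aborts, it restores x=10,
x_after_a2 = db.data["x"]  # a value written by the already-aborted T1
```

Both outcomes are wrong: `T1`'s abort destroys `T2`'s update, and `T2`'s abort resurrects an uncommitted value. Holding a long write lock on `x` would have blocked `w2[x]` and avoided both.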
Remark 4: ANSI meant to define P1, P2, and P3 instead of A1, A2, and A3.
Remark 4. Strict interpretations A1, A2, and A3 have unintended weaknesses. The correct interpretations are the Broad ones. We assume in what follows that ANSI meant to define P1, P2, and P3.
- `H1: r1[x=50]w1[x=10]r2[x=10]r2[y=50]c2r1[y=50]w1[y=90]c1`. `H1` is non-serializable, because transaction `T1` is transferring a quantity 40 from `x` to `y`, maintaining a total balance of 100, but `T2` reads an inconsistent state where the total balance is 60. `H1` does not exhibit any of the anomalies `A1`, `A2`, or `A3`, but it does exhibit the broad interpretation of `A1`, the phenomenon `P1`.
- `H2: r1[x=50]r2[x=50]w2[x=10]r2[y=50]w2[y=90]c2r1[y=90]c1`. `H2` is non-serializable, because transaction `T1` sees a total balance of 140. `H2` does not exhibit any of the anomalies `A1`, `A2`, or `A3`, but it does exhibit the broad interpretation of `A2`, the phenomenon `P2`.
- `H3: r1[P]w2[insert y in P]r2[z]w2[z]c2r1[z]c1`. Here `T1` performs a search under a `<search condition>` to find the list of active employees. Then `T2` inserts a new active employee and updates `z`, the count of employees in the company. Following this, `T1` reads the count of active employees as a check and sees a discrepancy. This history is clearly not serializable, but it is allowed by `A3` since no predicate is evaluated twice. Again, the broad interpretation `P3` solves the problem.
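Replaying `H1` with concrete values shows the inconsistent read directly:

```python
# H1: T1 transfers 40 from x to y; T2 sums the balances mid-flight
# and sees 60 instead of the invariant total of 100.
db = {"x": 50, "y": 50}

db["x"] = db["x"] - 40        # r1[x=50] w1[x=10]
t2_total = db["x"] + db["y"]  # r2[x=10] r2[y=50] c2 -- inconsistent sum
db["y"] = db["y"] + 40        # r1[y=50] w1[y=90] c1
```

After `T1` commits, the database state (`x=10, y=90`) is consistent again, which is exactly why only the broad interpretation `P1` — and not the strict `A1`, which requires `T1` to abort — catches this history.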
Table 3: Enhanced ANSI SQL Isolation Levels Defined in terms of the four phenomena
| Isolation Level | P0 Dirty Write | P1 Dirty Read | P2 Fuzzy Read | P3 Phantom |
|---|---|---|---|---|
| READ UNCOMMITTED | Not Possible | Possible | Possible | Possible |
| READ COMMITTED | Not Possible | Not Possible | Possible | Possible |
| REPEATABLE READ | Not Possible | Not Possible | Not Possible | Possible |
| SERIALIZABLE | Not Possible | Not Possible | Not Possible | Not Possible |
Difference Between Locking Isolation Levels And ANSI SQL Isolation Levels
Locking isolation levels are more isolated than the same-named ANSI levels. Locking READ UNCOMMITTED provides long duration write locking to avoid the "Dirty Write" phenomenon, while no ANSI SQL level except ANSI SERIALIZABLE excludes this anomalous behavior.
Remark 6. For single version histories, it turns out that the P0, P1, P2, P3 phenomena are disguised versions of locking. Thus the enhanced ANSI SQL isolation levels of Table 3 defined by these phenomena provide the same behavior as the locking isolation levels of Table 2:
- Prohibiting `P0` precludes a second transaction writing an item after the first transaction has written it, equivalent to saying that long-term Write locks are held on data items (and predicates).
- Prohibiting `P1` is equivalent to having well-formed reads on data items.
- Prohibiting `P2` means long-term Read locks on data items.
- Prohibiting `P3` means long-term Read predicate locks.
And we infer that locking isolation levels are more isolated than the same-named ANSI levels in another way:
- Locking READ UNCOMMITTED == (ENHANCED) READ UNCOMMITTED >> ANSI READ UNCOMMITTED
- Locking READ COMMITTED == (ENHANCED) READ COMMITTED >> ANSI READ COMMITTED
- ...
Other Isolation Types
Cursor Stability
Lost Update
Lost Update: The lost update anomaly occurs when transaction T1 reads a data item and then T2 updates the data item (possibly based on a previous read), then T1 (based on its earlier read value) updates the data item and commits. In terms of histories, this is:
P4: r1[x]...w2[x]...w1[x]...c1
The problem, as illustrated in history H4, is that even though T2 commits, T2's update will be lost.
H4: r1[x=100]r2[x=100]w2[x=120]c2w1[x=130]c1
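Replaying `H4` with concrete values (T2 adds 20 and T1 adds 30, each based on its own earlier read):

```python
# H4: both transactions read x=100; T1's write, based on its stale
# read, silently overwrites T2's committed update.
db = {"x": 100}

t1_read = db["x"]        # r1[x=100]
t2_read = db["x"]        # r2[x=100]
db["x"] = t2_read + 20   # w2[x=120] c2
db["x"] = t1_read + 30   # w1[x=130] c1 -- T2's +20 is lost
```

Any serial execution of the two increments would end with `x = 150`; the interleaving ends with `x = 130`.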
Cursor Stability
Cursor Stability is designed to prevent the lost update phenomenon.
An example of declaring and using a MySQL cursor can be found in MySQL Tutorial: MySQL Cursor.
The Cursor Stability isolation level extends READ COMMITTED locking behavior for SQL cursors by adding a new read action for FETCH from a cursor and requiring that a lock be held on the current item of the cursor.
- The lock is held until the cursor moves or is closed, possibly by a commit.
- Naturally, the Fetching transaction can update the row, and in that case a write lock will be held on the row until the transaction commits, even after the cursor moves on with a subsequent Fetch.
- The notation is extended to include `rc`, meaning read the current record of the cursor, and `wc`, meaning write the current record of the cursor. An `rc1[x]` and a later `wc1[x]` preclude an intervening `w2[x]`.
P4, renamed P4C, is prevented in this case: P4C: rc1[x]...w2[x]...w1[x]...c1.
Remark 7: Strength of Cursor Stability
Remark 7: READ COMMITTED << Cursor Stability << REPEATABLE READ
The technique of putting a cursor on an item to hold its value stable can be used for multiple items, at the cost of using multiple cursors. Thus the programmer can parlay Cursor Stability into effective Locking REPEATABLE READ isolation for any transaction accessing a small, fixed number of data items. However, this method is inconvenient and not at all general. Thus there are always histories fitting `P4: r1[x]...w2[x]...w1[x]...c1` (whenever `x` is not read through the cursor's FETCH), and of course the more general `P2: r1[x]...w2[x]...((c1 or a1) and (c2 or a2) in any order)`, that are not precluded by Cursor Stability.
Table 4.1: Enhanced ANSI SQL Isolation Levels Defined in terms of the six phenomena
| Isolation Level | P0 Dirty Write | P1 Dirty Read | P4C Cursor Lost Update | P4 Lost Update | P2 Fuzzy Read | P3 Phantom |
|---|---|---|---|---|---|---|
| READ UNCOMMITTED == Degree 1 | Not Possible | Possible | Possible | Possible | Possible | Possible |
| READ COMMITTED == Degree 2 | Not Possible | Not Possible | Possible | Possible | Possible | Possible |
| Cursor Stability | Not Possible | Not Possible | Not Possible | Sometimes Possible | Sometimes Possible | Possible |
| REPEATABLE READ | Not Possible | Not Possible | Not Possible | Not Possible | Not Possible | Possible |
| SERIALIZABLE == Degree 3 | Not Possible | Not Possible | Not Possible | Not Possible | Not Possible | Not Possible |
Why use the term "Sometimes Possible" rather than "Possible" to describe the relationship between P4/P2 and Cursor Stability? My understanding: users can avoid Lost Update or Fuzzy Read by accessing data through a cursor and FETCH, but Lost Update or Fuzzy Read can still happen if they don't use the cursor correctly.
Snapshot Isolation
Each transaction reads data from a snapshot of the (committed) data as of the time the transaction started, called its Start-Timestamp. This time may be any time before the transaction’s first Read. The transaction's writes (updates, inserts, and deletes) will also be reflected in this snapshot, to be read again if the transaction accesses (i.e., reads or updates) the data a second time. Updates by other transactions active after the transaction Start-Timestamp are invisible to the transaction.
When the transaction T1 is ready to commit, it gets a Commit-Timestamp, which is larger than any existing Start-Timestamp or Commit-Timestamp. The transaction successfully commits only if no other transaction T2 with a Commit-Timestamp in T1's execution interval [Start-Timestamp, Commit-Timestamp] wrote data that T1 also wrote. Otherwise, T1 will abort.
When T1 commits, its changes become visible to all transactions whose Start-Timestamps are larger than T1's Commit-Timestamp.
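The first-committer-wins check described above can be sketched as follows (the `SnapshotDB` class and its logical-clock scheme are my own simplification; actual values and reads are omitted to isolate the commit test):

```python
# A minimal sketch of Snapshot Isolation's first-committer-wins rule:
# a transaction may commit only if no transaction that committed in
# its [start_ts, commit_ts] interval wrote an item it also wrote.
class SnapshotDB:
    def __init__(self):
        self.clock = 0
        self.committed = []  # list of (commit_ts, write_set)

    def begin(self):
        self.clock += 1
        return {"start_ts": self.clock, "writes": set()}

    def write(self, txn, item):
        txn["writes"].add(item)

    def commit(self, txn):
        self.clock += 1  # candidate Commit-Timestamp
        for commit_ts, writes in self.committed:
            if commit_ts > txn["start_ts"] and writes & txn["writes"]:
                return False  # a first committer already won: abort
        self.committed.append((self.clock, txn["writes"]))
        return True

db = SnapshotDB()
t1, t2 = db.begin(), db.begin()
db.write(t1, "x")
db.write(t2, "x")
ok1 = db.commit(t1)  # first committer of x: succeeds
ok2 = db.commit(t2)  # overlapping writer of x: forced to abort
```

Note that only write-write overlap matters here; two overlapping transactions that write disjoint items both commit, which is exactly what makes Write Skew possible later in this section.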
How to analyze the strength of Snapshot Isolation: Map MV histories to SV histories
We want to define equivalence so that two histories are equivalent if they have the same effects. The effects of a history are the values produced by the Write operations of unaborted transactions. Since we don't know anything about the computation that each transaction performs, we don't know much about the value written by each Write. All we do know is that it is some function of the values read by the transaction's Reads that preceded the Write. Thus, if each transaction's Reads read the same values in two histories, then its Writes will produce the same values in both histories. From this observation and a little careful thought, we can see that (1) if each transaction reads each of its data items from the same Writes in both histories, then all Writes write the same values in both histories, and (2) if for each data item `x` the final Write on `x` is the same in both histories, then the final value of all data items will be the same in both histories. And if all Writes write the same values in both histories and leave the database in the same final state, then the histories must be equivalent.

Two histories are view equivalent if they have the same reads-from relationships and the same final writes.
Only a subset of serial MV histories, called 1-serial MV histories, are equivalent to serial 1V histories.
For example, `w0[x=x0]w0[y=y0]c0 r1[x=x0]r1[y=y0]w1[x=x1]w1[y=y1]c1 r2[x=x0]r2[y=y1]c2` is a serial MV history, but it is not equivalent to any serial 1V history: `T2` reads `x` from `T0` but `y` from `T1`, a reads-from pattern no serial single-version execution can produce.
But are all (serial/non-serial) MV histories equivalent to (non-serial) 1V histories? I think the answer is yes, but how to prove it?
In An Investigation of Transactional Isolation Levels, by P. O'Neil, E. O'Neil, H. Berenson, P. Bernstein, J. Gray and J. Melton, the authors show that all Snapshot Isolation histories can be mapped to single-valued histories while preserving dataflow dependencies (the MV histories are said to be View Equivalent with the SV histories). For example:
- `H1: r1[x=50]w1[x=10]r2[x=10]r2[y=50]c2r1[y=50]w1[y=90]c1` is a non-serializable history. `T1` is transferring a quantity 40 from `x` to `y`, maintaining a total balance of 100, but `T2` reads an inconsistent state where the total balance is 60.
- Under Snapshot Isolation, the same sequence of actions would lead to the multi-valued history `H1.SI: r1[x0=50]w1[x1=10]r2[x0=50]r2[y0=50]c2r1[y0=50]w1[y1=90]c1`.
- The MV history `H1.SI` maps to the serializable SV history `H1.SI.SV: r1[x=50]r1[y=50]r2[x=50]r2[y=50]c2w1[x=10]w1[y=90]c1`.
Mapping of MV histories to SV histories is the only rigorous touchstone needed to place Snapshot Isolation in the Isolation Hierarchy.
Snapshot Isolation is non-serializable
H5: r1[x=50]r1[y=50]r2[x=50]r2[y=50]w1[y=-40]w2[x=-40]c1c2
H5 is non-serializable and has the same inter-transactional dataflows as could occur under Snapshot Isolation (there is no choice of versions read by the transactions). Here we assume that each transaction that writes a new value for x and y is expected to maintain the constraint that x + y should be positive, and while T1 and T2 both act properly in isolation, the constraint fails to hold in H5.
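Replaying `H5` under snapshot semantics makes the Write Skew concrete: each transaction validates the constraint against the shared snapshot, and since their write sets are disjoint, first-committer-wins aborts neither.

```python
# H5 under Snapshot Isolation: both transactions read the snapshot
# x=50, y=50, check the constraint x + y > 0 for their own withdrawal,
# and write disjoint items -- so both commit, breaking the constraint.
snapshot = {"x": 50, "y": 50}
db = dict(snapshot)

# T1 withdraws 90 from y; its check uses the snapshot.
if snapshot["x"] + (snapshot["y"] - 90) > 0:   # 50 + (-40) = 10 > 0
    db["y"] = snapshot["y"] - 90               # w1[y=-40] c1

# T2 withdraws 90 from x; its check uses the same snapshot.
if (snapshot["x"] - 90) + snapshot["y"] > 0:   # (-40) + 50 = 10 > 0
    db["x"] = snapshot["x"] - 90               # w2[x=-40] c2

total = db["x"] + db["y"]                      # constraint violated
```

Under any serial execution, the second transaction's check would read the first one's result and fail, so only one withdrawal would go through.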
Strength of Snapshot Isolation
Remark 8: READ COMMITTED << Snapshot Isolation
In Snapshot Isolation, first-committer-wins precludes P0 (dirty writes), and the timestamp mechanism prevents P1 (dirty reads), so Snapshot Isolation is no weaker than READ COMMITTED. In addition, A5A is possible under READ COMMITTED, but not under the Snapshot Isolation timestamp mechanism.
Remark 9: REPEATABLE READ >><< Snapshot Isolation
`A5` (Data Item Constraint Violation): Suppose `C()` is a database constraint between two data items `x` and `y` in the database. Two anomalies arise from constraint violation:

- `A5A` (Read Skew): Suppose transaction `T1` reads `x`, and then a second transaction `T2` updates `x` and `y` to new values and commits. If `T1` now reads `y`, it may see an inconsistent state, and therefore produce an inconsistent state as output. In terms of histories: `A5A: r1[x]...w2[x]...w2[y]...c2...r1[y]...(c1 or a1)`.
- `A5B` (Write Skew): Suppose `T1` reads `x` and `y`, which are consistent with `C()`, and then a `T2` reads `x` and `y`, writes `x`, and commits. Then `T1` writes `y`. If there were a constraint between `x` and `y`, it might be violated. In terms of histories: `A5B: r1[x]...r2[y]...w1[y]...w2[x]...(c1 and c2 occur)`.

Clearly neither `A5A` nor `A5B` can arise in histories where `P2: r1[x]...w2[x]...((c1 or a1) and (c2 or a2) in any order)` is precluded, since both `A5A` and `A5B` have `T2` write a data item that has been previously read by an uncommitted `T1`. Thus, phenomena `A5A` and `A5B` are only useful for distinguishing isolation levels that are below REPEATABLE READ in strength. Now for the two directions of Remark 9:

- Write Skew (`A5B`) obviously can occur in a Snapshot Isolation history (e.g., `H5`), while in the single-valued history interpretation we've been reasoning about, forbidding `P2` (as REPEATABLE READ does) also precludes `A5B`. Therefore Snapshot Isolation admits anomalies that REPEATABLE READ does not.
- Snapshot Isolation cannot experience the `A3: r1[P]...w2[y in P]...c2...r1[P]...c1` anomaly: a transaction re-reading a predicate after an update by another transaction will always see the same old set of data items. But the REPEATABLE READ isolation level can experience `A3` anomalies. Therefore REPEATABLE READ admits anomalies that Snapshot Isolation does not.
Remark 10: ANOMALY SERIALIZABLE << SNAPSHOT ISOLATION
Snapshot Isolation histories preclude anomalies A1, A2 and A3. Therefore, in the anomaly interpretation of ANOMALY SERIALIZABLE of Table 1: ANOMALY SERIALIZABLE << SNAPSHOT ISOLATION.
Snapshot Isolation's benefits
Snapshot Isolation's benefit for update transactions is still debated. It probably isn't good for long-running update transactions competing with high-contention short transactions, since a long-running transaction is unlikely to be the first writer of everything it writes, and so will probably be aborted (e.g., the interleaving Start-Timestamp-1, Start-Timestamp-2, Commit-Timestamp-2, Commit-Timestamp-1, where transaction 2 commits inside transaction 1's execution interval).
Table 4.2. Enhanced ANSI SQL Isolation Levels Defined in terms of the eight phenomena
| Isolation Level | P0 Dirty Write | P1 Dirty Read | P4C Cursor Lost Update | P4 Lost Update | P2 Fuzzy Read | P3 Phantom | A5A Read Skew | A5B Write Skew |
|---|---|---|---|---|---|---|---|---|
| READ UNCOMMITTED == Degree 1 | Not Possible | Possible | Possible | Possible | Possible | Possible | Possible | Possible |
| READ COMMITTED == Degree 2 | Not Possible | Not Possible | Possible | Possible | Possible | Possible | Possible | Possible |
| Cursor Stability | Not Possible | Not Possible | Not Possible | Sometimes Possible | Sometimes Possible | Possible | Possible | Sometimes Possible |
| REPEATABLE READ | Not Possible | Not Possible | Not Possible | Not Possible | Not Possible | Possible | Not Possible | Not Possible |
| Snapshot | Not Possible | Not Possible | Not Possible | Not Possible | Not Possible | Sometimes Possible (`A3` is not possible, but `P3` is) | Not Possible | Possible |
| SERIALIZABLE == Degree 3 | Not Possible | Not Possible | Not Possible | Not Possible | Not Possible | Not Possible | Not Possible | Not Possible |
Why use the term "Sometimes Possible" rather than "Possible" to describe the relationship between A5B and Cursor Stability? My understanding: Cursor Stability rejects the history `rc1[x]...w2[x]...w1[y]...c1`, because the read lock held on the cursor's current item `x` blocks `w2[x]` until the cursor moves. So users can avoid Write Skew by reading the constrained items through a cursor and FETCH, but Write Skew can still happen if they don't use the cursor correctly.
Other Multi-Version Systems
Reference
- A Critique of ANSI SQL Isolation Levels, by Hal Berenson, Phil Bernstein, Jim Gray, Jim Melton, Elizabeth O'Neil and Patrick O'Neil
- Granularity of Locks and Degrees of Consistency in a Shared Data Base, by J.N. Gray, R.A. Lorie, G.R. Putzolu and I.L. Traiger
- Concurrency Control and Recovery in Database Systems, by Philip Bernstein, Vassos Hadzilacos and Nathan Goodman
- Transaction Processing: Concepts and Techniques, by Jim Gray and Andreas Reuter
- MySQL Tutorial: MySQL Cursor