
Commit 2784fe8

TVar: format documentation.

1 parent bd345ad

doc/tvar.md: 78 additions & 27 deletions

`TVar` and `atomically` implement a software transactional memory. A `TVar` is a
single-item container that always contains exactly one value. The `atomically`
method allows you to modify a set of `TVar` objects with the guarantee that all
of the updates are collectively atomic (they either all happen or none of them
do), consistent (a `TVar` will never enter an illegal state) and isolated
(atomic blocks never interfere with each other while they are running). You may
recognise these properties from database transactions.
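
For example, here is a minimal sketch of the API in use; the account names and
amounts are just illustrative:

```ruby
require 'concurrent'

# Two single-item containers, each always holding exactly one value.
alice = Concurrent::TVar.new(100)
bob   = Concurrent::TVar.new(100)

# Move 20 from alice to bob. Both writes commit together or not at all,
# and no other atomic block can observe the intermediate state.
Concurrent::atomically do
  alice.value -= 20
  bob.value   += 20
end
```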

There are some very important and unusual semantics that you must be aware of:

[…]

We implement nested transactions by flattening.

We only support strong isolation if you use the API correctly. In other words,
we do not support strong isolation.
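
To make flattening concrete, here is a minimal sketch; the counter is just
illustrative:

```ruby
require 'concurrent'

counter = Concurrent::TVar.new(0)

# The inner atomically block does not start a second transaction: it is
# flattened into the enclosing one, so there is a single transaction and
# both increments become visible together when it commits.
Concurrent::atomically do
  counter.value += 1
  Concurrent::atomically do
    counter.value += 1
  end
end
```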

Our implementation uses a very simple two-phase locking with versioned locks
algorithm and lazy writes, as per [1]. In the future we will look at more
advanced algorithms, contention management and using existing Java
implementations when in JRuby.
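
To give a feel for the algorithm, here is a schematic sketch of a commit under
two-phase locking with versioned locks and lazy writes. It is not the library's
actual code: the `lock`, `unlock`, `version`, `bump_version` and `unsafe_write`
methods are invented for illustration.

```ruby
# Schematic sketch only; these method names are invented for illustration.
def commit(read_log, write_log)
  # Phase one: acquire the lock on every TVar we intend to write, in a
  # deterministic order so that two competing commits cannot deadlock.
  locked = write_log.keys.sort_by(&:object_id)
  locked.each(&:lock)

  # Validate the read set: every TVar we read must still be at the version
  # we first saw; otherwise the whole atomically block is re-run.
  return false unless read_log.all? { |tvar, seen| tvar.version == seen }

  # Lazy writes: new values were only buffered in write_log while the block
  # ran, and reach the actual TVars here, at commit time.
  write_log.each do |tvar, new_value|
    tvar.unsafe_write(new_value)
    tvar.bump_version
  end
  true
ensure
  # Phase two: release all the locks, whether or not the commit succeeded.
  locked&.each(&:unlock)
end
```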

See:

[…]

## Evaluation

We evaluated the performance of our `TVar` implementation using a bank account
simulation with a range of synchronisation implementations. The simulation
maintains a set of bank account totals, and runs transactions that either get a
summary statement of multiple accounts (a read-only operation) or transfer a
sum from one account to another (a read-write operation).

We implemented a bank that does not use any synchronisation (and so creates
inconsistent totals in accounts), one that uses a single global (or 'coarse')
lock (and so won't scale at all), one that uses one lock per account (and so
needs a complicated scheme for acquiring locks in the correct order) and one
using our `TVar` and `atomically`.
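
As a rough illustration, hypothetical `TVar` versions of the two transaction
shapes might look like this; the real benchmark lives on the
`tvars-experiments` branch and may differ in detail:

```ruby
require 'concurrent'

# Ten accounts, each holding its balance in a TVar.
accounts = Array.new(10) { Concurrent::TVar.new(1_000) }

# Read-only transaction: a summary statement across several accounts.
def summary(accounts)
  Concurrent::atomically do
    accounts.reduce(0) { |total, account| total + account.value }
  end
end

# Read-write transaction: transfer a sum from one account to another.
def transfer(accounts, from, to, amount)
  Concurrent::atomically do
    accounts[from].value -= amount
    accounts[to].value   += amount
  end
end

transfer(accounts, 0, 1, 50)
```

The fine lock version of `transfer`, by contrast, has to take the two account
locks in a consistent global order (for example, lowest account index first) so
that two opposing transfers cannot deadlock; that is the complication mentioned
above.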

We ran 1 million transactions divided equally between a varying number of
threads on a system that has at least that many physical cores. The
transactions are made up of a varying mixture of read-only and read-write
transactions. We ran each set of transactions thirty times, discarding the
first ten and then taking an arithmetic mean. These graphs show only the simple
mean. Our `tvars-experiments` branch includes the benchmark used, full details
of the test system, and all the raw data.

Using JRuby with 75% read-write transactions, we can compare how the different
implementations of bank accounts scale to more cores. That is, how much faster
they run if you use more cores.

![](https://raw.githubusercontent.com/ruby-concurrency/concurrent-ruby/master/doc/images/tvar/implementation-scalability.png)

We see that the coarse lock implementation does not scale at all, and in fact
with more cores only wastes more time in contention for the single global lock.
The unsynchronised implementation doesn't seem to scale well either, which is
strange as there should be no overhead, but we'll explain that in a second. The
fine lock implementation seems to scale better, and the `TVar` implementation
scales the best.

So the `TVar` implementation *scales* very well, but how fast is it in absolute
terms?

![](https://raw.githubusercontent.com/ruby-concurrency/concurrent-ruby/master/doc/images/tvar/implementation-absolute.png)

Well, that's the downside. The unsynchronised implementation doesn't scale well
because it's so fast in the first place, and probably because we're bound on
memory access: the threads don't have much work to do, so no matter how many
threads we have the system is almost always reaching out to the L3 cache or
main memory. However, remember that the unsynchronised implementation isn't
correct; the totals are wrong at the end. The coarse lock implementation has
the overhead of locking and unlocking. The fine lock implementation has a
greater overhead, as the locking scheme is complicated to avoid deadlock. It
scales better, however, actually allowing transactions to be processed in
parallel. The `TVar` implementation has a greater overhead still, and it's
pretty huge. That overhead is the cost of the simple programming model of an
atomic block.

So that's what `TVar` gives you at the moment: great scalability, but a high
overhead. That's pretty much the state of software transactional memory in
general. Perhaps hardware transactional memory will help us, or perhaps we're
happy anyway with the simpler and safer programming model that `TVar` gives us.

We can also use this experiment to compare different implementations of Ruby.
We looked at just the `TVar` implementation and compared MRI 2.1.1, Rubinius
2.2.6, and JRuby 1.7.11, again at 75% read-write transactions.

![](https://raw.githubusercontent.com/ruby-concurrency/concurrent-ruby/master/doc/images/tvar/ruby-scalability.png)

We see that MRI provides no scalability, due to the global interpreter lock
(GIL). JRuby seems to scale better than Rubinius for this workload (there are
of course other workloads).

As before we should also look at the absolute performance, not just the
scalability.

![](https://raw.githubusercontent.com/ruby-concurrency/concurrent-ruby/master/doc/images/tvar/ruby-absolute.png)

Again, JRuby seems to be faster than Rubinius for this experiment.
Interestingly, Rubinius looks slower than MRI for 1 core, but we can get around
that by using more cores.

We've used 75% read-write transactions throughout. We'll just take a quick look
at how the scalability varies for different workloads, for scaling between 1
and 2 threads. We'll admit that we used 75% read-write just because it
emphasised the differences.

![](https://raw.githubusercontent.com/ruby-concurrency/concurrent-ruby/master/doc/images/tvar/implementation-write-proportion-scalability.png)

Finally, we can also run on a larger machine. We repeated the experiment using
a machine with 64 physical cores and JRuby.

![](https://raw.githubusercontent.com/ruby-concurrency/concurrent-ruby/master/doc/images/tvar/implementation-scalability.png)

![](https://raw.githubusercontent.com/ruby-concurrency/concurrent-ruby/master/doc/images/tvar/implementation-absolute.png)

Here you can see that `TVar` does become absolutely faster than using a global
lock, at the slightly ridiculous thread count of 50. It's probably not
statistically significant anyway.
