In computer science, a ticket lock is a synchronization mechanism, or locking algorithm, that is a type of spinlock that uses "tickets" to control which thread of execution is allowed to enter a critical section.

Overview

Example of a ticket and "Now Serving" sign used in the Ticket Queue Management System.

The basic concept of a ticket lock is similar to the ticket queue management system. This is the method that many bakeries and delis use to serve customers in the order that they arrive, without making them stand in a line. Generally, there is some type of dispenser from which customers pull sequentially numbered tickets upon arrival. The dispenser usually has a sign above or near it stating something like "Please take a number". There is also typically a dynamic sign, usually digital, that displays the ticket number now being served. Each time the next ticket number (customer) is ready to be served, the "Now Serving" sign is incremented and the number called out. This allows all of the waiting customers to know how many people are still ahead of them in the queue or line.

Like this system, a ticket lock is a first in first out (FIFO) queue-based mechanism. It adds the benefit of fairness of lock acquisition and works as follows: there are two integer values which begin at 0. The first value is the queue ticket, the second is the dequeue ticket. The queue ticket is the thread's position in the queue, and the dequeue ticket is the ticket, or queue position, that now has the lock (Now Serving).

When a thread arrives, it atomically obtains and then increments the queue ticket. The atomicity of this operation is required to prevent two threads from simultaneously being able to obtain the same ticket number. It then compares its ticket value, before the increment, with the dequeue ticket's value. If they are the same, the thread is permitted to enter the critical section. If they are not the same, then another thread must already be in the critical section and this thread must busy-wait or yield. When a thread leaves the critical section controlled by the lock, it atomically increments the dequeue ticket. This permits the next waiting thread, the one with the next sequential ticket number, to enter the critical section.
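The two-counter scheme described above can be sketched in C11 atomics. This is only an illustrative sketch, not any particular library's API; the names queue_ticket, dequeue_ticket, lock, and unlock are made up for this example.

```c
#include <stdatomic.h>

// Sketch of the two-counter idea: queue_ticket is the next ticket to
// hand out, dequeue_ticket is the ticket now being served.
atomic_uint queue_ticket = 0;
atomic_uint dequeue_ticket = 0;

void lock(void) {
    // Atomically take a ticket and advance the dispenser.
    unsigned my_ticket = atomic_fetch_add(&queue_ticket, 1);
    // Busy-wait until "Now Serving" reaches our ticket.
    while (atomic_load(&dequeue_ticket) != my_ticket) { /* spin */ }
}

void unlock(void) {
    // Serve the next ticket in line.
    atomic_fetch_add(&dequeue_ticket, 1);
}
```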
Fairness of lock acquisition

The notion of fairness in lock acquisition applies to the order in which threads acquire a lock successfully. If some type of fairness is implemented, it prevents a thread from being starved out of execution for a long time due to inability to acquire a lock in favor of other threads. With no fairness guarantees, a situation can arise where a thread (or multiple threads) can take a disproportionately long time to execute as compared to others. A simple example will now be presented to show how a thread could be excessively delayed due to a lack of fairness in lock acquisition.

Assume a case where three threads, each executing on one of three processors, are executing the following pseudocode that uses a lock with no consideration for fairness.

    while (1) {
        lock {
            …
            critical section
            …
        }
        unlock
    }

Now further assume the physical arrangement of the three processors, P1, P2, and P3, results in a non-uniform memory access time to the location of the shared lock variable. The order of increasing access time to the lock variable for the three processors is P1 < P2 < P3. Thus P1 is always the most advantaged at acquiring the lock, followed by P2, with P3 being most disadvantaged. How this situation leads to thread starvation in the absence of a fairness guarantee is shown in the following illustration of the execution of the above pseudocode by these three processors.

Starvation of P3

    Time | P1                     | P2                     | P3
    -----+------------------------+------------------------+-----------------------
      1  | lock attempt (success) | lock attempt (failed)  | lock attempt (failed)
      2  | critical section       | spin                   | spin
      3  | release lock           | lock attempt (success) | lock attempt (failed)
      4  | …                      | critical section       | spin
      5  | lock attempt (failed)  | …                      | spin
      6  | spin                   | …                      | spin
      7  | lock attempt (success) | release lock           | lock attempt (failed)
      8  | critical section       | spin                   | spin
      …  | …                      | …                      | …

Initially, the lock is free and all three processors attempt to acquire the lock simultaneously (Time 1). Due to P1 having the fastest access time to the lock, it acquires it first and enters the critical section. P2 and P3 now spin while P1 is in the critical section (Time 2). Upon exiting the critical section (Time 3), P1 executes an unlock, releasing the lock. Since P2 has faster access to the lock than P3, it acquires the lock next and enters the critical section (Time 4). While P2 is in the critical section, P1 once again attempts to acquire the lock but can't (Time 5), forcing it to spin-wait along with P3. Once P2 finishes the critical section and issues an unlock, both P1 and P3 simultaneously attempt to acquire it once again (Time 6). But P1, with its faster access time, wins again, entering the critical section (Time 7). This pattern of P3 being unable to obtain the lock will continue indefinitely until either P1 or P2 stops attempting to acquire it.

This illustrates the need to ensure some level of fairness in lock acquisition in certain circumstances. Not all locks have mechanisms that ensure any level of fairness, leaving the potential for situations similar to that illustrated above. See the Comparison of locks section below for examples of locks that don't implement any fairness guarantees.
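A lock with no fairness guarantee, such as the plain test-and-set spinlock listed in the comparison table below, can be sketched as follows. Whichever thread's test-and-set happens to land first after a release wins, regardless of how long the others have waited; the function names here are illustrative, not a real API.

```c
#include <stdatomic.h>

// A simple test-and-set spinlock: there is no ordering among waiters,
// so a processor with faster access to the flag can win repeatedly.
atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void tas_lock(void) {
    // Spin until we atomically change the flag from clear to set.
    while (atomic_flag_test_and_set(&lock_flag)) { /* spin */ }
}

void tas_unlock(void) {
    // Clearing the flag lets *any* spinning thread grab it next.
    atomic_flag_clear(&lock_flag);
}
```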
Implementation of ticket lock

In a Non-Uniform Memory Architecture (NUMA) system it is important to have a lock implementation that guarantees some level of fairness of lock acquisition. The ticket lock is an implementation of spinlock that adds this desired attribute. The following pseudocode shows the operations for initializing the lock, acquiring the lock, and releasing the lock. A call to ticketLock_acquire would precede the critical section of the code and ticketLock_release would follow it. Each processor keeps track of its turn via the value of its my_ticket. Yan Solihin's pseudocode example is listed below.

    ticketLock_init(int *next_ticket, int *now_serving)
    {
        *now_serving = *next_ticket = 0;
    }

    ticketLock_acquire(int *next_ticket, int *now_serving)
    {
        my_ticket = fetch_and_inc(next_ticket);
        while (*now_serving != my_ticket) {}
    }

    ticketLock_release(int *now_serving)
    {
        ++*now_serving;
    }

Following along with the pseudocode above, we can see that each time a processor tries to acquire the lock with ticketLock_acquire(), fetch_and_inc is called, returning the current value of next_ticket into the thread-private my_ticket and incrementing the shared next_ticket. It is important to note that the fetch and increment is done atomically, thereby not allowing any other concurrent attempt at access. Once my_ticket has been received, each thread spins in the while loop while now_serving isn't equal to its my_ticket. Once now_serving becomes equal to a given thread's my_ticket, that thread is allowed to return from ticketLock_acquire() and enter the critical section of code. After the critical section, the thread performs ticketLock_release(), which increments now_serving. This allows the thread with the next sequential my_ticket to exit ticketLock_acquire() and enter the critical section. Since the my_ticket values are acquired in the order of thread arrival at the lock, subsequent acquisition of the lock is guaranteed to be in this same order. Thus, fairness of lock acquisition is ensured, enforcing a FIFO ordering.
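Solihin's pseudocode translates almost directly into C11 atomics. The sketch below is not the Linux kernel's implementation; atomic_fetch_add stands in for fetch_and_inc, and the worker/demo names are invented for the example.

```c
#include <stdatomic.h>
#include <pthread.h>

typedef struct {
    atomic_int next_ticket;  // next ticket to hand out
    atomic_int now_serving;  // ticket currently holding the lock
} ticket_lock_t;

void ticketLock_init(ticket_lock_t *l) {
    atomic_init(&l->next_ticket, 0);
    atomic_init(&l->now_serving, 0);
}

void ticketLock_acquire(ticket_lock_t *l) {
    // fetch_and_inc: atomically take a ticket.
    int my_ticket = atomic_fetch_add(&l->next_ticket, 1);
    // Spin until our ticket is being served.
    while (atomic_load(&l->now_serving) != my_ticket) { /* spin */ }
}

void ticketLock_release(ticket_lock_t *l) {
    atomic_fetch_add(&l->now_serving, 1);
}

// Demo: several threads increment a shared counter under the lock.
ticket_lock_t demo_lock;
long shared_counter = 0;

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 10000; i++) {
        ticketLock_acquire(&demo_lock);
        shared_counter++;            // critical section
        ticketLock_release(&demo_lock);
    }
    return NULL;
}
```

Because tickets are handed out in arrival order and served sequentially, the increments are serialized and the final counter equals the total number of iterations.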
The following table shows an example of the ticket lock in action in a system with four processors (P1, P2, P3, P4) competing for access to the critical section. Following along with the "Action" column, the outcome based on the above pseudocode can be observed. For each row, the variable values shown are those after the indicated action(s) have completed. The key point to note from the example is that the initial attempts by all four processors to acquire the lock result in only the first to arrive actually getting the lock. All subsequent attempts, while the first still holds the lock, serve to form the queue of processors waiting their turn in the critical section. This is followed by each getting the lock in turn, allowing the next in line to acquire it as the previous holder leaves. Also note that another processor can arrive at any time during the sequence of lock acquires/releases by other processors, and simply waits its turn.

Four Processor Ticket Lock Example

    Row | Action                                 | next_ticket | now_serving | P1 my_ticket | P2 my_ticket | P3 my_ticket | P4 my_ticket
    ----+----------------------------------------+-------------+-------------+--------------+--------------+--------------+-------------
      1 | Initialized to 0                       | 0           | 0           | -            | -            | -            | -
      2 | P1 tries to acquire lock (succeed)     | 1           | 0           | 0            | -            | -            | -
      3 | P3 tries to acquire lock (fail + wait) | 2           | 0           | 0            | -            | 1            | -
      4 | P2 tries to acquire lock (fail + wait) | 3           | 0           | 0            | 2            | 1            | -
      5 | P1 releases lock, P3 acquires lock     | 3           | 1           | 0            | 2            | 1            | -
      6 | P3 releases lock, P2 acquires lock     | 3           | 2           | 0            | 2            | 1            | -
      7 | P4 tries to acquire lock (fail + wait) | 4           | 2           | 0            | 2            | 1            | 3
      8 | P2 releases lock, P4 acquires lock     | 4           | 3           | 0            | 2            | 1            | 3
      9 | P4 releases lock                       | 4           | 4           | 0            | 2            | 1            | 3
     10 | …                                      | 4           | 4           | 0            | 2            | 1            | 3

The first step, prior to use of the lock, is initialization of all lock variables (Row 1). Having next_ticket and now_serving initialized to 0 ensures that the first thread that attempts to get the lock will get ticket 0, thus acquiring the lock because its ticket matches now_serving. So when P1 tries to acquire the lock, it immediately succeeds and next_ticket is incremented to 1 (Row 2). When P3 tries to acquire the lock, it gets 1 for its my_ticket, next_ticket is incremented to 2, and it must wait since now_serving is still 0 (Row 3). Next, when P2 attempts to acquire the lock, it gets 2 for its my_ticket, next_ticket is incremented to 3, and it must also wait because now_serving is still 0 (Row 4). P1 now releases the lock by incrementing now_serving to 1, thus allowing P3 to acquire it due to its my_ticket value of 1 (Row 5). Now P3 releases the lock, incrementing now_serving to 2, allowing P2 to acquire it (Row 6). While P2 has the lock, P4 attempts to acquire it, gets a my_ticket value of 3, increments next_ticket to 4, and must wait since now_serving is still 2 (Row 7). When P2 releases the lock, it increments now_serving to 3, allowing P4 to get it (Row 8). Finally, P4 releases the lock, incrementing now_serving to 4 (Row 9). No currently waiting threads have this ticket number, so the next thread to arrive will get ticket 4 and immediately acquire the lock.
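The table rows above can be replayed with a small single-threaded simulation of the lock's state variables; the helper names (acquire, release_lock, replay) are invented for this sketch and simply mirror the Action column.

```c
// Single-threaded replay of the four-processor example's lock state.
// acquire() models fetch_and_inc(next_ticket) and returns my_ticket;
// release_lock() models the ++now_serving in ticketLock_release.
int next_ticket = 0, now_serving = 0;

int acquire(void) { return next_ticket++; }
void release_lock(void) { now_serving++; }

void replay(int *p1, int *p2, int *p3, int *p4) {
    *p1 = acquire();      /* Row 2: P1 succeeds, ticket 0  */
    *p3 = acquire();      /* Row 3: P3 waits,    ticket 1  */
    *p2 = acquire();      /* Row 4: P2 waits,    ticket 2  */
    release_lock();       /* Row 5: P1 releases, P3 served */
    release_lock();       /* Row 6: P3 releases, P2 served */
    *p4 = acquire();      /* Row 7: P4 waits,    ticket 3  */
    release_lock();       /* Row 8: P2 releases, P4 served */
    release_lock();       /* Row 9: P4 releases            */
}
```

After the replay, next_ticket and now_serving are both 4, matching Row 9 of the table.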
Comparison of locks

The Linux kernel implementation can have lower latency than the simpler test-and-set or exchange based spinlock algorithms on modern machines. Consider the table below when comparing various types of spin-based locks. The more basic locking mechanisms have lower uncontended latency than the advanced locking mechanisms.

    Criteria            | test-and-set | Test and Test-and-set | Load-link/store-conditional | Ticket | ABQL
    --------------------+--------------+-----------------------+-----------------------------+--------+-------
    Uncontended latency | Lowest       | Lower                 | Lower                       | Higher | Higher
    Release max traffic | Θ(p)         | Θ(p)                  | Θ(p)                        | Θ(p)   | Θ(1)
    Wait traffic        | High         | -                     | -                           | -      | -
    Storage             | Θ(1)         | Θ(1)                  | Θ(1)                        | Θ(1)   | Θ(p)
    Fairness guarantee  | No           | No                    | No                          | Yes    | Yes
323:
321:
315:
306:
303:
300:
297:
296:
292:
289:
286:
283:
282:
278:
276:release lock
275:
272:
269:
268:
264:
261:
258:
255:
254:
250:
247:
244:
241:
240:
236:
233:
230:
227:
226:
222:
219:
217:release lock
216:
213:
212:
208:
205:
202:
199:
198:
194:
191:
188:
185:
184:
180:
177:
174:
171:
170:
166:
161:
159:
154:
103:
100:
97:
93:
83:
80:
75:
70:
67:
63:
58:
50:
41:
39:
35:
31:
27:
23:
19:
1265:
1259:
1245:
1234:
1227:Linux kernel
1217:
1204:
1192:
1151:
1145:
1126:
1067:Related work
1056:Linux kernel
1053:
939:Wait traffic
875:test-and-set
855:test-and-set
851:Linux kernel
848:
839:
562:now_serving
559:next_ticket
546:
538:
Related work

Lamport's bakery algorithm uses a similar concept of a "ticket" or "counter" but does not make use of atomic hardware operations. It was designed for fault tolerance rather than performance. Rather than all processors continuously examining the release counter, the bakery lock spins on examining the tickets of its peers.

Queue-based spin locks guarantee fairness by maintaining a queue of waiters and by granting the lock to the first waiter during an unlock operation.

See also

fetch-and-add, another atomic instruction that can be used in place of fetch-and-increment to implement a ticket lock

References

Solihin, Yan (2009). Fundamentals of parallel computer architecture: multichip and multicore systems. Solihin Pub. pp. 262-269. ISBN 978-0-9841630-0-7.
Sottile, Matthew J.; Mattson, Timothy G.; Rasmussen, Craig E. (Sep 28, 2009). Boca Raton, FL, USA: CRC Press. p. 56. ISBN 978-1-4200-7214-3.
John M. Mellor-Crummey and Michael L. Scott (February 1991). "Algorithms for Scalable Synchronization on Shared-Memory Multiprocessors". ACM TOCS.
Jonathan Corbet, Ticket spinlocks, Linux Weekly News, February 6, 2008. Retrieved July 23, 2010.
Boyd-Wickizer, Silas, et al. "Non-scalable locks are dangerous." Proceedings of the Linux Symposium, 2012. http://pdos.csail.mit.edu/papers/linux:lock.pdf
Jeremy Fitzhardinge, paravirt: introduce a "lock-byte" spinlock implementation, Linux kernel, July 7, 2008.
Revert "Merge branch 'xen/pvticketlock' into xen/next"
"Ticket Spinlocks".
Michael L. Scott and William N. Scherer III. Scalable Queue-Based Spin Locks with Timeout. Proceedings of the eighth ACM SIGPLAN symposium on Principles and Practices of Parallel Programming, pp. 44-52, 2001.

Category: Concurrency control algorithms