Purely functional data structure

In computer science, a purely functional data structure is a data structure that can be directly implemented in a purely functional language. The main difference between an arbitrary data structure and a purely functional one is that the latter is (strongly) immutable. This restriction ensures that the data structure possesses the advantages of immutable objects: (full) persistency, quick copying of objects, and thread safety. Efficient purely functional data structures may require the use of lazy evaluation and memoization.
Definition

Persistent data structures have the property of keeping previous versions of themselves unmodified. On the other hand, non-persistent structures such as arrays admit a destructive update, that is, an update which cannot be reversed. Once a program writes a value at some index of the array, the previous value cannot be retrieved anymore.

Formally, a purely functional data structure is a data structure which can be implemented in a purely functional language, such as Haskell. In practice, this means that the data structure must be built using only persistent data structures such as tuples, sum types, product types, and basic types such as integers, characters, and strings. Such a data structure is necessarily persistent. However, not all persistent data structures are purely functional. For example, a persistent array is a data structure which is persistent but is implemented using an array, and thus is not purely functional.

In his book Purely Functional Data Structures, Okasaki compares destructive updates to a master chef's knives: they cannot be undone, and thus they should not be used unless it is certain that the previous value is no longer required. However, destructive updates can also allow efficiency that cannot be obtained using other techniques. For example, a data structure using an array and destructive updates may be replaced by a similar data structure in which the array is replaced by a map, a random access list, or a balanced tree, all of which admit purely functional implementations. But the access cost may increase from constant time to logarithmic time.
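As an illustration of that trade-off, here is a minimal sketch (using the standard containers library, not code from the article's sources) in which a destructively updated array is replaced by an immutable Data.Map: updates cost logarithmic rather than constant time, but every previous version remains readable.

```haskell
import qualified Data.Map.Strict as Map

main :: IO ()
main = do
  -- A "functional array": indices 0..4 mapped to values.
  let v0 = Map.fromList (zip [0 .. 4 :: Int] "abcde")
      v1 = Map.insert 2 'Z' v0   -- O(log n) update returning a new map
  print (Map.lookup 2 v0)        -- Just 'c': the old version is unchanged
  print (Map.lookup 2 v1)        -- Just 'Z'
```

With a mutable array, the write at index 2 would have destroyed the value 'c'; here both versions coexist because the update builds a new map sharing most of its structure with the old one.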
Ensuring that a data structure is purely functional

A data structure is never inherently functional. For example, a stack can be implemented as a singly linked list. This implementation is purely functional as long as the only operations on the stack return a new stack without altering the old one. However, if the language is not purely functional, the run-time system may be unable to guarantee immutability. This is illustrated by Okasaki, who shows how the concatenation of two singly linked lists can still be done destructively in an imperative setting.
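As a sketch (illustrative names, not Okasaki's code), such a stack can be written in Haskell; pushing and popping build new stacks and leave every earlier version intact:

```haskell
-- A stack as a singly linked list: every operation returns a new
-- stack and leaves the old one untouched, so all versions persist.
data Stack a = Empty | Cons a (Stack a) deriving (Eq, Show)

push :: a -> Stack a -> Stack a
push x s = Cons x s

pop :: Stack a -> Maybe (a, Stack a)
pop Empty      = Nothing
pop (Cons x s) = Just (x, s)

main :: IO ()
main = do
  let s1 = push 1 (Empty :: Stack Int)  -- stack holding 1
      s2 = push 2 s1                    -- stack holding 2,1; s1 is unchanged
  print (pop s2)                        -- Just (2,Cons 1 Empty)
  print (pop s1)                        -- Just (1,Empty): the old version is still usable
```

In a pure language this safety is automatic; in an impure language nothing prevents other code from mutating the underlying cells, which is the problem the next paragraph addresses.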
In order to ensure that a data structure is used in a purely functional way in an impure functional language, modules or classes can be used to ensure manipulation via authorized functions only.
Using purely functional data structures

One of the central challenges in adapting existing code to use purely functional data structures lies in the fact that mutable data structures provide "hidden outputs" for functions that use them. Rewriting these functions to use purely functional data structures requires adding these data structures as explicit outputs.

For instance, consider a function that accepts a mutable list, removes the first element from the list, and returns that element. In a purely functional setting, removing an element from the list produces a new and shorter list, but does not update the original one. In order to be useful, therefore, a purely functional version of this function is likely to have to return the new list along with the removed element. In the most general case, a program converted in this way must return the "state" or "store" of the program as an additional result from every function call. Such a program is said to be written in store-passing style.
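A sketch of this conversion (hypothetical names, for illustration only): the imperative "remove the first element" becomes a function that returns the removed element together with the new list as an explicit output.

```haskell
-- Instead of mutating a list in place, return the removed element
-- together with the resulting (shorter) list as an explicit output.
removeFirst :: [a] -> Maybe (a, [a])
removeFirst []       = Nothing
removeFirst (x : xs) = Just (x, xs)

main :: IO ()
main = do
  let original = [1, 2, 3 :: Int]
  case removeFirst original of
    -- The original list is intact; the caller threads the new list forward.
    Just (x, rest) -> print (x, rest, original)  -- (1,[2,3],[1,2,3])
    Nothing        -> putStrLn "empty list"
```

The caller must now pass `rest` on to subsequent operations explicitly; generalizing this threading to the whole program state is exactly store-passing style.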
Examples

Here is a list of abstract data structures with purely functional implementations:

- Stack (first in, last out), implemented as a singly linked list
- Queue, implemented as a real-time queue
- Double-ended queue, implemented as a real-time double-ended queue
- (Multi)set of ordered elements and map indexed by ordered keys, implemented as a red–black tree, or more generally by a search tree
- Priority queue, implemented as a Brodal queue
- Random access list, implemented as a skew-binary random access list
- Hash consing
- Zipper (data structure)
Design and implementation

In his book Purely Functional Data Structures, computer scientist Chris Okasaki describes techniques used to design and implement purely functional data structures, a small subset of which are summarized below.

Laziness and memoization

Lazy evaluation is particularly interesting in a purely functional language because the order of evaluation never changes the result of a function. Therefore, lazy evaluation naturally becomes an important part of the construction of purely functional data structures. It allows a computation to be done only when its result is actually required. Thus, the code of a purely functional data structure can, without loss of efficiency, treat data that will actually be used and data that will be ignored in the same way: only the former is ever computed.

One of the key tools in building efficient, purely functional data structures is memoization. When a computation is done, its result is saved and the computation does not have to be performed a second time. This is particularly important in lazy implementations: additional evaluations may require the same result, but it is impossible to know which evaluation will require it first.
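A minimal illustration of both ideas, assuming GHC's call-by-need evaluation: a named expression is a thunk that is evaluated only if demanded, and at most once, with later uses reusing the saved result.

```haskell
import Debug.Trace (trace)

main :: IO ()
main = do
  -- 'v' is a thunk: nothing is computed until its value is demanded.
  let v      = trace "evaluating v" (sum [1 .. 1000 :: Int])
      unused = trace "evaluating unused" (product [1 .. 1000 :: Integer])
  -- 'v' is demanded twice, but "evaluating v" is traced only once:
  -- the first evaluation is saved (memoized) and shared.
  print (v + v)
  -- 'unused' is never demanded, so its computation never runs at all.
```

This is the mechanism the techniques below rely on: postponed work is represented as thunks, and forcing a thunk twice costs no more than forcing it once.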
Amortized analysis and scheduling

Some data structures, even ones that are not purely functional, such as dynamic arrays, admit operations that are efficient most of the time (e.g., constant time for dynamic arrays) and rarely inefficient (e.g., linear time for dynamic arrays). Amortization can then be used to prove that the average running time of the operations is efficient. That is to say, the few inefficient operations are rare enough that they do not change the asymptotic time complexity when a sequence of operations is considered.

In general, having inefficient operations is not acceptable for persistent data structures, because the very same operation can be called many times. It is not acceptable either for real-time or for imperative systems, where the user may require the time taken by an operation to be predictable. Furthermore, this unpredictability complicates the use of parallelism.

In order to avoid those problems, some data structures allow the inefficient operation to be postponed; this is called scheduling. The only requirement is that the computation of the inefficient operation must finish before its result is actually needed. A constant-sized part of the inefficient operation is performed alongside each subsequent call to an efficient operation, so that the inefficient operation is already totally done by the time it is needed, and each individual operation remains efficient.
Example: queue

Amortized queues are composed of two singly linked lists: the front list and the reversed rear list. Elements are added to the rear list and removed from the front list. Furthermore, whenever the front list is empty, the rear list is reversed and becomes the front list, while the rear list becomes empty. The amortized time complexity of each operation is constant: each cell of the list is added, reversed, and removed at most once.
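A sketch of such an amortized (batched) queue in Haskell, assuming the invariant that the front list is empty only when the whole queue is empty:

```haskell
-- An amortized queue: 'front' holds elements to dequeue in order,
-- 'rear' holds newly enqueued elements in reverse order.
data Queue a = Queue [a] [a] deriving (Eq, Show)

empty :: Queue a
empty = Queue [] []

-- Restore the invariant: the front may be empty only if the rear is too.
check :: [a] -> [a] -> Queue a
check [] rear   = Queue (reverse rear) []   -- the rare O(n) reversal
check front rear = Queue front rear

enqueue :: a -> Queue a -> Queue a
enqueue x (Queue front rear) = check front (x : rear)

dequeue :: Queue a -> Maybe (a, Queue a)
dequeue (Queue [] _)             = Nothing
dequeue (Queue (x : front) rear) = Just (x, check front rear)

main :: IO ()
main = do
  let q = foldl (flip enqueue) empty [1, 2, 3 :: Int]  -- enqueue 1, 2, 3
  print (dequeue q)  -- first in, first out: 1 comes out first
```

The occasional `reverse` is the inefficient operation the amortized analysis pays for; the real-time variant below spreads that reversal out instead.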
In order to avoid the inefficient operation in which the rear list is reversed, real-time queues add the restriction that the rear list is never longer than the front list. To maintain this restriction, whenever the rear list would become longer than the front list, it is reversed and appended to the front list. Since this operation is inefficient, it is not performed all at once. Instead, it is spread out over the subsequent operations. Thus, each cell is computed before it is needed, and the new front list is totally computed before a new inefficient operation needs to be called.
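A sketch of a real-time queue in Haskell in the style of Okasaki's rotation-and-schedule technique (the names here are illustrative): a third list points into the not-yet-forced suffix of the lazily built front list, and forcing one of its cells per operation paces the reversal.

```haskell
-- Real-time queue: 'front', 'rear', and a 'sched' that is a suffix of
-- 'front' whose cells have not yet been forced.
-- Invariant: length sched == length front - length rear.
data Queue a = Queue [a] [a] [a]

empty :: Queue a
empty = Queue [] [] []

-- Incrementally merge the rear into the front:
-- rotate front rear acc == front ++ reverse rear ++ acc,
-- produced one lazy cell at a time so the work can be scheduled.
rotate :: [a] -> [a] -> [a] -> [a]
rotate []       (y : _)  acc = y : acc
rotate (x : xs) (y : ys) acc = x : rotate xs ys (y : acc)
rotate _        []       acc = acc  -- unreachable while the invariant holds

-- Force one cell of the schedule; start a new rotation when it runs out.
exec :: [a] -> [a] -> [a] -> Queue a
exec front rear (_ : sched) = Queue front rear sched
exec front rear []          = let front' = rotate front rear []
                              in Queue front' [] front'

enqueue :: a -> Queue a -> Queue a
enqueue x (Queue front rear sched) = exec front (x : rear) sched

dequeue :: Queue a -> Maybe (a, Queue a)
dequeue (Queue [] _ _)               = Nothing
dequeue (Queue (x : front) rear sched) = Just (x, exec front rear sched)

main :: IO ()
main = do
  let q = foldl (flip enqueue) empty [1 .. 5 :: Int]
      drain qq = case dequeue qq of
                   Nothing       -> []
                   Just (x, qq') -> x : drain qq'
  print (drain q)  -- [1,2,3,4,5]
```

Each `enqueue` and `dequeue` forces at most one suspended cell via `exec`, so by the time a rotation's result is needed it has already been fully computed, and every individual operation runs in worst-case constant time.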
See also

- Persistent data structure

References

- Chris Okasaki, Purely Functional Data Structures, Cambridge University Press, 1998, ISBN 0-521-66350-4.
External links

- Purely Functional Data Structures, thesis by Chris Okasaki (PDF format)
- Making Data-Structures Persistent, by James R. Driscoll, Neil Sarnak, Daniel D. Sleator, and Robert E. Tarjan (PDF)
- Fully Persistent Lists with Catenation, by James R. Driscoll, Daniel D. Sleator, and Robert E. Tarjan (PDF)
- Persistent Data Structures, from the MIT OpenCourseWare course Advanced Algorithms
- "What's new in purely functional data structures since Okasaki?" on Theoretical Computer Science Stack Exchange