Implicit parallelism

In computer science, implicit parallelism is a characteristic of a programming language that allows a compiler or interpreter to automatically exploit the parallelism inherent to the computations expressed by some of the language's constructs. A pure implicitly parallel language does not need special directives, operators or functions to enable parallel execution, as opposed to explicit parallelism.

Programming languages with implicit parallelism include Axum, BMDFM, HPF, Id, LabVIEW, MATLAB M-code, NESL, SaC, SISAL, ZPL, and pH.

Example

If a particular problem involves performing the same operation on a group of numbers (such as taking the sine or logarithm of each in turn), a language that provides implicit parallelism might allow the programmer to write the instruction thus:

numbers = [0 1 2 3 4 5 6 7];
result = sin(numbers);

The compiler or interpreter can calculate the sine of each element independently, spreading the effort across multiple processors if available.
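To make that concrete, the following is a minimal sketch, in Python (which is not itself one of the implicitly parallel languages listed above), of roughly what a runtime could do behind the scenes for result = sin(numbers). The programmer never writes this code; it only illustrates that independent elementwise work can be fanned out across processors. The parallel_sin helper is a hypothetical name, not part of any real implementation:

import math
from multiprocessing import Pool  # standard-library process pool

def parallel_sin(numbers):
    # Hypothetical runtime step: each element is independent of the
    # others, so the sine calls can be spread across the available
    # processors; Pool() defaults to one worker per CPU core.
    with Pool() as pool:
        return pool.map(math.sin, numbers)

if __name__ == "__main__":
    numbers = [0, 1, 2, 3, 4, 5, 6, 7]
    result = parallel_sin(numbers)
    print(result)

Whether the work is actually distributed, and at what granularity, is the runtime's decision; for an array this small a real implementation would likely stay serial, since process start-up would dwarf the computation.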
Advantages

A programmer who writes implicitly parallel code does not need to worry about task division or process communication, and can focus instead on the problem that the program is intended to solve. Implicit parallelism generally facilitates the design of parallel programs, and therefore results in a substantial improvement of programmer productivity.

Many of the constructs necessary to support this also add simplicity or clarity even in the absence of actual parallelism. The example above, of list comprehension in the sin() function, is a useful feature in and of itself. By using implicit parallelism, languages effectively have to provide such useful constructs to users simply to support required functionality (a language without a decent for loop, for example, is one few programmers will use).
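As a small illustration of that point, the elementwise operation from the example can be written as a list comprehension, a construct that reads clearly whether or not the language chooses to parallelize it. A sketch in Python, again used purely for illustration:

import math

# The comprehension states *what* to compute for each element,
# not *how* to schedule it; a plain serial evaluation is fine,
# which is why the construct is useful even without parallelism.
numbers = [0, 1, 2, 3, 4, 5, 6, 7]
result = [math.sin(x) for x in numbers]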
Disadvantages

Languages with implicit parallelism reduce the control that the programmer has over the parallel execution of the program, sometimes resulting in less-than-optimal parallel efficiency. The makers of the Oz programming language also note that their early experiments with implicit parallelism showed that it made debugging difficult and object models unnecessarily awkward.

A larger issue is that every program has some parallel and some serial logic. Binary I/O, for example, requires support for such serial operations as Write() and Seek(). If implicit parallelism is desired, this creates a new requirement for constructs and keywords to support code that cannot be threaded or distributed.

Notes

Nikhil, Rishiyur; Arvind (2001). Implicit Parallel Programming in pH. Morgan Kaufmann Publishers. ISBN 978-1-55860-644-9.
Seif Haridi (June 14, 2006). "Introduction". Tutorial of Oz. Archived from the original on May 14, 2011. Retrieved September 20, 2007.
