Polar code (coding theory)


In information theory, polar codes are linear block error-correcting codes. The code construction is based on a multiple recursive concatenation of a short kernel code which transforms the physical channel into virtual outer channels. When the number of recursions becomes large, the virtual channels tend to either have high reliability or low reliability (in other words, they polarize or become sparse), and the data bits are allocated to the most reliable channels. It is the first code with an explicit construction to provably achieve the channel capacity for symmetric binary-input, discrete, memoryless channels (B-DMC) with polynomial dependence on the gap to capacity. Polar codes were developed by Erdal Arikan, a professor of electrical engineering at Bilkent University.

Notably, polar codes have modest encoding and decoding complexity O(n log n), which renders them attractive for many applications. Moreover, the encoding and decoding energy complexity of generalized polar codes can reach the fundamental lower bounds for energy consumption of two-dimensional circuitry to within an O(n^ε) factor for any ε > 0.
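
The recursive kernel construction described above can be made concrete with a short illustrative sketch (an editorial addition, not material from the cited sources): repeatedly applying Arikan's 2x2 kernel [[1, 0], [1, 1]] to a length-2^n input vector yields the polar transform. The function name, the use of NumPy, and the omission of the bit-reversal permutation are assumptions of this sketch.

    import numpy as np

    def polar_encode(u):
        """Apply the polar transform x = u * F^(tensor n) mod 2, with kernel F = [[1, 0], [1, 1]].

        u holds the N = 2^n input bits: frozen positions are set to 0 and the data
        bits are placed on the most reliable synthetic channels. The bit-reversal
        permutation is omitted for brevity.
        """
        u = np.asarray(u) % 2
        if len(u) == 1:
            return u
        half = len(u) // 2
        u1, u2 = u[:half], u[half:]
        # One kernel stage combines the two halves; each half is then transformed recursively.
        return np.concatenate((polar_encode((u1 + u2) % 2), polar_encode(u2)))

    # Example with N = 8:
    # polar_encode([0, 0, 0, 1, 0, 1, 1, 1]) -> array([0, 1, 1, 0, 1, 0, 0, 1])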

Industrial applications

Polar codes have some limitations when used in industrial applications. Primarily, the original design of the polar codes achieves capacity when block sizes are asymptotically large with a successive cancellation decoder. However, with the block sizes used in industry, the performance of successive cancellation decoding is poor compared to well-defined and implemented coding schemes such as low-density parity-check (LDPC) and turbo codes. Polar performance can be improved with successive cancellation list decoding, but its usability in real applications is still questionable due to very poor implementation efficiencies caused by the iterative approach.

In October 2016, Huawei announced that it had achieved 27 Gbit/s in 5G field trial tests using polar codes for channel coding. The improvements have been introduced so that the channel performance has now almost closed the gap to the Shannon limit, which sets the bar for the maximum rate for a given bandwidth and a given noise level.

In November 2016, 3GPP agreed to adopt polar codes for the eMBB (Enhanced Mobile Broadband) control channels for the 5G NR (New Radio) interface. At the same meeting, 3GPP agreed to use LDPC for the corresponding data channel.

PAC codes

In 2019, Arıkan suggested employing a convolutional pre-transformation before polar coding. This pre-transformed variant of polar codes was dubbed polarization-adjusted convolutional (PAC) codes. It was shown that the pre-transformation can effectively improve the distance properties of polar codes by reducing the number of minimum-weight and, in general, small-weight codewords, resulting in improved block error rates under near-maximum-likelihood (ML) decoding algorithms such as Fano decoding and list decoding. At short blocklengths, such codes outperform both convolutional codes and CRC-aided list decoding of conventional polar codes.
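
A minimal sketch of the pre-transformation idea (an editorial illustration, not taken from the cited papers): the data vector is first convolved at rate 1 with a short binary generator over GF(2), and the result is then fed to the usual polar transform. The generator (1, 0, 1, 1, 0, 1, 1) and the helper name pac_pretransform are assumptions of this sketch.

    import numpy as np

    def pac_pretransform(v, gen=(1, 0, 1, 1, 0, 1, 1)):
        """Rate-1 convolutional pre-transformation: u[i] = sum_j gen[j] * v[i-j] (mod 2).

        The generator (1, 0, 1, 1, 0, 1, 1) is an illustrative choice, not a value
        quoted from the article; any short binary polynomial works in this sketch.
        """
        v = np.asarray(v) % 2
        u = np.zeros_like(v)
        for i in range(len(v)):
            for j, g in enumerate(gen):
                if g and i - j >= 0:
                    u[i] ^= v[i - j]
        return u

    # A PAC codeword is then obtained by feeding the pre-transformed vector to the
    # polar transform, e.g. polar_encode(pac_pretransform(v)) using the earlier sketch,
    # with the frozen positions of v set to zero.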

Neural polar decoders

Neural polar decoders (NPDs) are an advancement in channel coding that combines neural networks (NNs) with polar codes, providing unified decoding for channels with or without memory, without requiring an explicit channel model. They use four neural networks to approximate the functions of polar decoding: the embedding (E) NN, the check-node (F) NN, the bit-node (G) NN, and the embedding-to-LLR (H) NN. The weights of these NNs are determined by estimating the mutual information of the synthetic channels. By the end of training, the weights of the NPD are fixed and can then be used for decoding.
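
For orientation, the check-node and bit-node functions that the F and G networks replace have simple closed forms in conventional successive cancellation decoding over memoryless channels. The sketch below is an editorial addition; the min-sum approximation shown for the check node is an assumption of the sketch, not a statement about the NPD construction itself.

    import numpy as np

    def f_check_node(llr_a, llr_b):
        """Conventional SC check-node update (min-sum approximation of
        2*atanh(tanh(a/2)*tanh(b/2))); the F NN of an NPD learns this role."""
        return np.sign(llr_a) * np.sign(llr_b) * np.minimum(np.abs(llr_a), np.abs(llr_b))

    def g_bit_node(llr_a, llr_b, u_hat):
        """Conventional SC bit-node update given the already-decided bit u_hat;
        the G NN of an NPD learns this role."""
        return llr_b + (1 - 2 * u_hat) * llr_a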

The computational complexity of NPDs is determined by the parameterization of the neural networks, unlike successive cancellation (SC) trellis decoders, whose complexity is determined by the channel model and which are typically used for finite-state channels (FSCs). The computational complexity of NPDs is O(k d N log2 N), where k is the number of hidden units in the neural networks, d is the dimension of the embedding, and N is the block length. In contrast, the computational complexity of SC trellis decoders is O(|S|^3 N log2 N), where S is the state space of the channel model.
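
A quick worked comparison of the two expressions, with parameter values chosen purely for illustration (they are not taken from the article):

    import math

    def npd_ops(k, d, N):
        # proportional to k * d * N * log2(N), constants ignored
        return k * d * N * math.log2(N)

    def sc_trellis_ops(S, N):
        # proportional to |S|^3 * N * log2(N), constants ignored
        return S ** 3 * N * math.log2(N)

    # Illustrative values: k = 64 hidden units, d = 16, N = 1024, |S| = 32 states.
    print(npd_ops(64, 16, 1024))     # 10485760.0
    print(sc_trellis_ops(32, 1024))  # 335544320.0

Under these assumed values the NPD evaluation is roughly 32 times cheaper, reflecting the cubic dependence of the trellis decoder on the size of the channel state space.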

NPDs can be integrated into SC decoding schemes such as SC list decoding and CRC-aided SC decoding. They are also compatible with non-uniform and i.i.d. input distributions by integrating them into the Honda-Yamamoto scheme. This flexibility allows NPDs to be used in various decoding scenarios, improving error-correction performance while maintaining manageable computational complexity.

References

Arikan, Erdal (2009). "Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels". IEEE Transactions on Information Theory. 55 (7): 3051–3073. arXiv:0807.3917. doi:10.1109/TIT.2009.2021379. ISSN 0018-9448.
Arikan, Erdal; Najeeb ul Hassan; Lentmaier, Michael; Montorsi, Guido; Sayir, Jossy (2015). "Challenges and some new directions in channel coding". arXiv:1504.03916.
Blake, Christopher G. (2017). "Energy Consumption of Error Control Coding Circuits". University of Toronto.
"Huawei achieves 27Gbps 5G speeds with Polar Code".
"3GPP RAN1 meeting #87 final report". 3GPP.
Arıkan, Erdal (2019). "From sequential decoding to channel polarization and back again". arXiv:1908.09594.
Rowshan, M.; Burg, A.; Viterbo, E. (2021). "Polarization-Adjusted Convolutional (PAC) Codes: Sequential Decoding vs List Decoding". IEEE Transactions on Vehicular Technology. 70 (2): 1434–1447. doi:10.1109/TVT.2021.3052550.
Moradi, Mohsen; Mozammel, Amir; Qin, Kangjian; Arikan, Erdal (2020). "Performance and Complexity of Sequential Decoding of PAC Codes". arXiv:2012.04990.
Yao, Hanwen; Fazeli, Arman; Vardy, Alexander (2021). "List Decoding of Arıkan's PAC Codes". Entropy. 23 (7): 841. arXiv:2005.13711. doi:10.3390/e23070841.
Rowshan, M.; Yuan, J. (2023). "On the Minimum Weight Codewords of PAC Codes: The Impact of Pre-Transformation". IEEE Journal on Selected Areas in Information Theory. 4: 487–498. doi:10.1109/JSAIT.2023.3312678.
Rowshan, M.; Qiu, M.; Xie, Y.; Gu, X.; Yuan, J. (2024). "Channel Coding Toward 6G: Technical Overview and Outlook". IEEE Open Journal of the Communications Society. 5: 2585–2685. doi:10.1109/OJCOMS.2024.3390000.
Aharoni, Ziv; Huleihel, Bashar; Pfister, Henry D.; Permuter, Haim H. (2023). "Data-Driven Neural Polar Codes for Unknown Channels With and Without Memory". arXiv:2309.03148.
Wang, Runxin; Honda, Junya; Yamamoto, Hirosuke; Liu, Rongke; Hou, Yi (2015). "Construction of polar codes for channels with memory". 2015 IEEE Information Theory Workshop - Fall (ITW). IEEE. pp. 187–191. doi:10.1109/ITWF.2015.7360760. ISBN 978-1-4673-7852-9.
Honda, Junya; Yamamoto, Hirosuke (2013). "Polar Coding Without Alphabet Extension for Asymmetric Models". IEEE Transactions on Information Theory. 59 (12): 7829–7838. doi:10.1109/TIT.2013.2282305.
Tal, Ido; Vardy, Alexander (2015). "List Decoding of Polar Codes". IEEE Transactions on Information Theory. 61 (5): 2213–2226. arXiv:1206.0050. doi:10.1109/TIT.2015.2410251.

External links

AFF3CT home page: A Fast Forward Error Correction Toolbox for high speed polar code simulations in software
385: 364: 362: 361: 356: 354: 353: 340: 338: 337: 332: 321: 320: 308: 307: 302: 296: 295: 289: 271: 269: 268: 263: 251: 249: 248: 243: 231: 229: 228: 223: 211: 209: 208: 203: 192: 191: 86: 82: 67: 36:channel capacity 884: 883: 879: 878: 877: 875: 874: 873: 844: 843: 835: 830: 795: 791: 746: 742: 735: 709: 705: 688: 684: 615: 611: 594: 590: 577: 573: 560: 556: 545: 541: 528: 524: 514: 512: 507: 506: 502: 493: 491: 487: 486: 482: 465: 461: 452: 450: 441: 435: 431: 386: 379: 375: 349: 348: 346: 343: 342: 316: 312: 303: 298: 297: 291: 290: 285: 277: 274: 273: 257: 254: 253: 237: 234: 233: 217: 214: 213: 187: 183: 166: 163: 162: 154: 133: 93: 84: 83:factor for any 69: 54: 17: 12: 11: 5: 882: 872: 871: 866: 861: 856: 842: 841: 834: 833:External links 831: 829: 828: 789: 740: 733: 703: 682: 609: 588: 571: 554: 539: 522: 500: 480: 459: 429: 376: 374: 371: 352: 330: 327: 324: 319: 315: 311: 306: 301: 294: 288: 284: 281: 261: 241: 221: 201: 198: 195: 190: 186: 182: 179: 176: 173: 170: 153: 150: 132: 129: 92: 89: 15: 9: 6: 4: 3: 2: 881: 870: 867: 865: 862: 860: 859:Coding theory 857: 855: 852: 851: 849: 840: 837: 836: 824: 820: 816: 812: 808: 804: 800: 793: 785: 781: 777: 773: 768: 763: 759: 755: 751: 744: 736: 730: 726: 722: 718: 714: 707: 698: 693: 686: 678: 674: 669: 664: 659: 654: 650: 646: 641: 636: 632: 628: 624: 620: 613: 604: 599: 592: 585: 581: 575: 568: 564: 558: 552: 551: 543: 536: 532: 526: 510: 504: 490: 484: 475: 470: 463: 449: 448: 440: 433: 425: 421: 417: 413: 408: 403: 399: 395: 391: 384: 382: 377: 370: 366: 325: 322: 317: 313: 309: 304: 279: 259: 239: 219: 196: 193: 188: 184: 180: 177: 174: 168: 158: 149: 147: 143: 138: 128: 125: 120: 118: 117:Shannon limit 114: 110: 105: 103: 99: 88: 80: 76: 72: 65: 61: 57: 51: 49: 45: 41: 37: 33: 30: 26: 22: 806: 802: 792: 757: 753: 743: 716: 706: 685: 630: 626: 612: 591: 579: 574: 562: 557: 547: 542: 530: 525: 513:. Retrieved 503: 492:. Retrieved 483: 462: 451:. Retrieved 445: 432: 397: 393: 367: 159: 155: 134: 121: 106: 94: 78: 74: 70: 63: 59: 55: 52: 44:Erdal Arikan 29:linear block 24: 18: 100:(LDPC) and 25:polar codes 848:Categories 697:2309.03148 640:2005.13711 633:(7): 841. 603:2012.04990 550:1908.09594 494:2016-10-10 474:1504.03916 453:2019-10-18 373:References 102:turbo code 40:memoryless 823:0018-9448 784:0018-9448 767:1206.0050 515:31 August 424:0018-9448 407:0807.3917 323:⁡ 194:⁡ 135:In 2019, 131:PAC codes 677:34209050 621:(2021). 341:, where 212:, where 85:ε > 0 77:polylog 668:8303677 645:Bibcode 627:Entropy 821:  782:  731:  675:  665:  548:arXiv: 511:. 3GPP 422:  137:Arıkan 109:Huawei 27:are a 762:arXiv 692:arXiv 635:arXiv 598:arXiv 469:arXiv 442:(PDF) 402:arXiv 819:ISSN 780:ISSN 729:ISBN 673:PMID 517:2017 420:ISSN 124:3GPP 62:log 811:doi 772:doi 721:doi 663:PMC 653:doi 412:doi 314:log 185:log 19:In 850:: 817:. 807:59 805:. 801:. 778:. 770:. 758:61 756:. 752:. 727:. 715:. 671:. 661:. 651:. 643:. 631:23 629:. 625:. 537:. 444:. 418:. 410:. 398:55 396:. 392:. 380:^ 113:5G 87:. 50:. 23:, 825:. 813:: 786:. 774:: 764:: 737:. 723:: 700:. 694:: 679:. 655:: 647:: 637:: 606:. 600:: 586:. 569:. 519:. 497:. 477:. 471:: 456:. 426:. 414:: 404:: 351:S 329:) 326:N 318:2 310:N 305:3 300:| 293:S 287:| 283:( 280:O 260:N 240:d 220:k 200:) 197:N 189:2 181:N 178:d 175:k 172:( 169:O 81:) 79:n 75:n 73:( 71:O 66:) 64:n 60:n 58:( 56:O
