Multilinear subspace learning

Approach to dimensionality reduction

(Figure: a video or an image sequence represented as a third-order tensor of column x row x time for multilinear subspace learning.)

Multilinear subspace learning is an approach for disentangling the causal factors of data formation and performing dimensionality reduction. The dimensionality reduction can be performed on a data tensor that contains a collection of observations that have been vectorized, or on observations that are treated as matrices and concatenated into a data tensor. Examples of data tensors whose observations are vectorized, or whose matrix-valued observations are concatenated into a data tensor, include images (2D/3D), video sequences (3D/4D), and hyperspectral cubes (3D/4D).

The mapping from a high-dimensional vector space to a set of lower-dimensional vector spaces is a multilinear projection. When observations are retained in the same organizational structure as matrices or higher-order tensors, their representations are computed by performing linear projections into the column space, row space and fiber space.

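As a toy illustration (not from the source), a second-order (matrix) observation can be reduced with one small projection on its column space and one on its row space, instead of being vectorized first; all array names and sizes below are illustrative assumptions. A minimal sketch in Python with NumPy:

    import numpy as np

    X = np.random.rand(100, 80)      # one observation kept as a matrix, not vectorized
    U_col = np.random.rand(10, 100)  # illustrative projection of the column space (100 -> 10)
    U_row = np.random.rand(8, 80)    # illustrative projection of the row space (80 -> 8)

    Y = U_col @ X @ U_row.T          # 10 x 8 multilinear (two-mode) representation
    print(Y.shape)                   # a vectorized treatment would instead project one 8000-dimensional vector
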
Multilinear subspace learning algorithms are higher-order generalizations of linear subspace learning methods such as principal component analysis (PCA), independent component analysis (ICA), linear discriminant analysis (LDA) and canonical correlation analysis (CCA).

Background

Multilinear methods may be causal in nature and perform causal inference, or they may be simple regression methods from which no causal conclusions are drawn.

Linear subspace learning algorithms are traditional dimensionality reduction techniques that are well suited for datasets that are the result of varying a single causal factor. Unfortunately, they often become inadequate when dealing with datasets that are the result of multiple causal factors.

Multilinear subspace learning can be applied to observations whose measurements were vectorized and organized into a data tensor for causally aware dimensionality reduction. These methods may also be employed in reducing horizontal and vertical redundancies irrespective of the causal factors when the observations are treated as a "matrix" (i.e. a collection of independent column/row observations) and concatenated into a tensor.

Algorithms

Multilinear principal component analysis

Historically, multilinear principal component analysis has been referred to as "M-mode PCA", a terminology which was coined by Peter Kroonenberg. In 2005, Vasilescu and Terzopoulos introduced the Multilinear PCA terminology as a way to better differentiate between multilinear tensor decompositions that computed 2nd order statistics associated with each data tensor mode, and subsequent work on Multilinear Independent Component Analysis that computed higher order statistics for each tensor mode. MPCA is an extension of PCA.

Multilinear independent component analysis

Multilinear independent component analysis is an extension of ICA.

Multilinear linear discriminant analysis

Multilinear extension of LDA:
TTP-based: Discriminant Analysis with Tensor Representation (DATER)
TTP-based: General tensor discriminant analysis (GTDA)
TVP-based: Uncorrelated Multilinear Discriminant Analysis (UMLDA)

Multilinear canonical correlation analysis

Multilinear extension of CCA:
TTP-based: Tensor Canonical Correlation Analysis (TCCA)
TVP-based: Multilinear Canonical Correlation Analysis (MCCA)
TVP-based: Bayesian Multilinear Canonical Correlation Analysis (BMTF)

Tensor-to-tensor projection (TTP)

A TTP is a direct projection of a high-dimensional tensor to a low-dimensional tensor of the same order, using N projection matrices for an Nth-order tensor. It can be performed in N steps, with each step performing a tensor-matrix multiplication (product). The N steps are exchangeable. This projection is an extension of the higher-order singular value decomposition (HOSVD) to subspace learning. Hence, its origin is traced back to the Tucker decomposition in the 1960s.

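A minimal sketch of a TTP in Python with NumPy, under the assumption of a third-order data tensor; the projection matrices U1, U2, U3 and their sizes are illustrative, not from the source. Each step is one tensor-matrix (mode-n) product, and the steps can be applied in any order.

    import numpy as np

    def mode_n_product(tensor, matrix, mode):
        # Multiply `tensor` along axis `mode` by `matrix` (mode-n product).
        t = np.moveaxis(tensor, mode, 0)
        unfolded = t.reshape(t.shape[0], -1)               # mode-n unfolding
        projected = matrix @ unfolded                      # (new_dim x old_dim) @ (old_dim x rest)
        new_shape = (matrix.shape[0],) + t.shape[1:]
        return np.moveaxis(projected.reshape(new_shape), 0, mode)

    def tensor_to_tensor_projection(tensor, projection_matrices):
        # One projection matrix per mode; the N steps are exchangeable.
        for mode, U in enumerate(projection_matrices):
            tensor = mode_n_product(tensor, U, mode)
        return tensor

    # Example: project a 30 x 40 x 20 tensor to 5 x 6 x 4.
    X = np.random.rand(30, 40, 20)
    U1, U2, U3 = np.random.rand(5, 30), np.random.rand(6, 40), np.random.rand(4, 20)
    print(tensor_to_tensor_projection(X, [U1, U2, U3]).shape)   # (5, 6, 4)
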
Tensor-to-vector projection (TVP)

A TVP is a direct projection of a high-dimensional tensor to a low-dimensional vector, which is also referred to as a rank-one projection. As TVP projects a tensor to a vector, it can be viewed as multiple projections from a tensor to a scalar. Thus, the TVP of a tensor to a P-dimensional vector consists of P projections from the tensor to a scalar. The projection from a tensor to a scalar is an elementary multilinear projection (EMP). In an EMP, a tensor is projected to a point through N unit projection vectors. It is the projection of a tensor onto a single line (resulting in a scalar), with one projection vector in each mode. Thus, the TVP of a tensor object to a vector in a P-dimensional vector space consists of P EMPs. This projection is an extension of the canonical decomposition, also known as the parallel factors (PARAFAC) decomposition.

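For illustration only, an EMP can be written as successive contractions of the tensor with one unit vector per mode, and a TVP to a P-dimensional vector stacks P such EMPs; the function and variable names below are assumptions, not from the source. A minimal NumPy sketch:

    import numpy as np

    def emp(tensor, unit_vectors):
        # Elementary multilinear projection: contract one unit vector per mode -> scalar.
        y = tensor
        for u in reversed(unit_vectors):                   # contract the trailing mode each time
            y = np.tensordot(y, u, axes=([y.ndim - 1], [0]))
        return float(y)

    def tensor_to_vector_projection(tensor, emp_vector_sets):
        # TVP: P EMPs give a P-dimensional output vector.
        return np.array([emp(tensor, vecs) for vecs in emp_vector_sets])

    X = np.random.rand(30, 40, 20)
    # Two EMPs (P = 2), each with one unit-norm vector per mode.
    vecs1 = [np.ones(d) / np.sqrt(d) for d in X.shape]
    vecs2 = [v / np.linalg.norm(v) for v in (np.random.rand(d) for d in X.shape)]
    print(tensor_to_vector_projection(X, [vecs1, vecs2]))  # 2-dimensional output vector
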
Typical approach in MSL

There are N sets of parameters to be solved, one in each mode. The solution to one set often depends on the other sets (except when N=1, the linear case). Therefore, a suboptimal iterative procedure is followed:

1. Initialization of the projections in each mode.
2. For each mode, fix the projections in all the other modes and solve for the projection in the current mode.
3. Do the mode-wise optimization for a few iterations or until convergence.

This approach originated from the alternating least squares method for multi-way data analysis.

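As an illustrative sketch only (assuming an MPCA-style variance objective, fixed output ranks chosen by hand, and omitting mean-centring of the samples for brevity), the alternating, mode-wise procedure can look as follows in Python with NumPy; all function and variable names are assumptions.

    import numpy as np

    def mode_n_product(tensor, matrix, mode):
        t = np.moveaxis(tensor, mode, 0)
        out = matrix @ t.reshape(t.shape[0], -1)
        return np.moveaxis(out.reshape((matrix.shape[0],) + t.shape[1:]), 0, mode)

    def alternating_msl(samples, ranks, n_iters=5):
        # Alternating mode-wise optimization: fix all but one mode, solve that mode, repeat.
        N = samples[0].ndim
        # Step 1: initialize each mode's projection (truncated identity here).
        Us = [np.eye(r, d) for r, d in zip(ranks, samples[0].shape)]
        for _ in range(n_iters):                           # Step 3: a few iterations / until convergence
            for n in range(N):                             # Step 2: solve mode n with the others fixed
                scatter = np.zeros((samples[0].shape[n],) * 2)
                for X in samples:
                    Y = X
                    for m in range(N):
                        if m != n:
                            Y = mode_n_product(Y, Us[m], m)    # project all modes except n
                    Yn = np.moveaxis(Y, n, 0).reshape(Y.shape[n], -1)
                    scatter += Yn @ Yn.T                        # mode-n scatter of projected samples
                eigvals, eigvecs = np.linalg.eigh(scatter)      # eigenvalues in ascending order
                Us[n] = eigvecs[:, ::-1][:, :ranks[n]].T        # leading eigenvectors as projection rows
        return Us

    samples = [np.random.rand(10, 12, 8) for _ in range(20)]
    print([U.shape for U in alternating_msl(samples, ranks=(3, 4, 2))])  # [(3, 10), (4, 12), (2, 8)]
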
Code

MATLAB Tensor Toolbox by Sandia National Laboratories.
The MPCA algorithm written in Matlab (MPCA+LDA included).
The UMPCA algorithm written in Matlab (data included).
The UMLDA algorithm written in Matlab (data included).

Tensor data sets

3D gait data (third-order tensors): 128x88x20 (21.2M); 64x44x20 (9.9M); 32x22x10 (3.2M).

See also

CP decomposition
Dimension reduction
Multilinear algebra
Multilinear principal component analysis
Tensor
Tensor decomposition
Tensor software
Tucker decomposition

References

1. M. A. O. Vasilescu, D. Terzopoulos (2003). "Multilinear Subspace Analysis of Image Ensembles". Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR'03), Madison, WI, June 2003.
2. M. A. O. Vasilescu, D. Terzopoulos (2002). "Multilinear Analysis of Image Ensembles: TensorFaces". Proc. 7th European Conference on Computer Vision (ECCV'02), Copenhagen, Denmark, May 2002.
3. M. A. O. Vasilescu (2002). "Human Motion Signatures: Analysis, Synthesis, Recognition". Proceedings of International Conference on Pattern Recognition (ICPR 2002), Vol. 3, Quebec City, Canada, Aug 2002, 456–460.
4. Vasilescu, M.A.O.; Terzopoulos, D. (2007). "Multilinear Projection for Appearance-Based Recognition in the Tensor Framework". IEEE 11th International Conference on Computer Vision. pp. 1–8. doi:10.1109/ICCV.2007.4409067.
5. Lu, Haiping; Plataniotis, K.N.; Venetsanopoulos, A.N. (2013). Multilinear Subspace Learning: Dimensionality Reduction of Multidimensional Data. Chapman & Hall/CRC Press Machine Learning and Pattern Recognition Series. Taylor and Francis. ISBN 978-1-4398572-4-3.
6. Lu, Haiping; Plataniotis, K.N.; Venetsanopoulos, A.N. (2011). "A Survey of Multilinear Subspace Learning for Tensor Data" (PDF). Pattern Recognition. 44 (7): 1540–1551. Bibcode:2011PatRe..44.1540L. doi:10.1016/j.patcog.2011.01.004.
7. X. He, D. Cai, P. Niyogi, "Tensor subspace analysis", in: Advances in Neural Information Processing Systems 18 (NIPS), 2005.
8. "Future Directions in Tensor-Based Computation and Modeling" (PDF). May 2009.
9. M. A. O. Vasilescu, D. Terzopoulos (2005). "Multilinear Independent Component Analysis". Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA, June 2005, vol. 1, 547–553.
10. M. A. O. Vasilescu, D. Terzopoulos (2004). "TensorTextures: Multilinear Image-Based Rendering". Proc. ACM SIGGRAPH 2004 Conference, Los Angeles, CA, August 2004, in Computer Graphics Proceedings, Annual Conference Series, 2004, 336–342.
11. P. M. Kroonenberg and J. de Leeuw, "Principal component analysis of three-mode data by means of alternating least squares algorithms", Psychometrika, 45 (1980), pp. 69–97.
12. H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "MPCA: Multilinear principal component analysis of tensor objects," IEEE Trans. Neural Netw., vol. 19, no. 1, pp. 18–39, January 2008.
13. S. Yan, D. Xu, Q. Yang, L. Zhang, X. Tang, and H.-J. Zhang, "Discriminant analysis with tensor representation," in Proc. IEEE Conference on Computer Vision and Pattern Recognition, vol. I, June 2005, pp. 526–532.
14. D. Tao, X. Li, X. Wu, and S. J. Maybank, "General tensor discriminant analysis and gabor features for gait recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 10, pp. 1700–1715, October 2007.
15. H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Uncorrelated multilinear discriminant analysis with regularization and aggregation for tensor object recognition," IEEE Trans. Neural Netw., vol. 20, no. 1, pp. 103–123, January 2009.
16. T.-K. Kim and R. Cipolla, "Canonical correlation analysis of video volume tensors for action categorization and detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 8, pp. 1415–1428, 2009.
17. H. Lu, "Learning Canonical Correlations of Paired Tensor Sets via Tensor-to-Vector Projection," Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI 2013), Beijing, China, August 3–9, 2013.
18. Khan, Suleiman A.; Kaski, Samuel (2014-09-15). "Bayesian Multi-view Tensor Factorization". In Calders, Toon; Esposito, Floriana; Hüllermeier, Eyke; Meo, Rosa (eds.). Machine Learning and Knowledge Discovery in Databases. Lecture Notes in Computer Science. Vol. 8724. Springer Berlin Heidelberg. pp. 656–671. doi:10.1007/978-3-662-44848-9_42. ISBN 9783662448472.
19. L. D. Lathauwer, B. D. Moor, J. Vandewalle, "A multilinear singular value decomposition", SIAM Journal of Matrix Analysis and Applications, vol. 21, no. 4, pp. 1253–1278, 2000.
20. L. D. Lathauwer, B. D. Moor, J. Vandewalle, "On the best rank-1 and rank-(R1, R2, ..., RN) approximation of higher-order tensors", SIAM Journal of Matrix Analysis and Applications, 21 (4) (2000) 1324–1342.
21. Ledyard R Tucker (September 1966). "Some mathematical notes on three-mode factor analysis". Psychometrika. 31 (3): 279–311. doi:10.1007/BF02289464. PMID 5221127. S2CID 44301099.
22. J. D. Carroll & J. Chang (1970). "Analysis of individual differences in multidimensional scaling via an n-way generalization of 'Eckart–Young' decomposition". Psychometrika. 35 (3): 283–319. doi:10.1007/BF02310791. S2CID 50364581.
23. R. A. Harshman, "Foundations of the PARAFAC procedure: Models and conditions for an 'explanatory' multi-modal factor analysis". UCLA Working Papers in Phonetics, 16, pp. 1–84, 1970. Archived 2004-10-10 at the Wayback Machine.