Modality (human–computer interaction)

Not to be confused with Mode (user interface).

In the context of human–computer interaction, a modality is the classification of a single independent channel of input/output between a computer and a human. Such channels may differ based on sensory nature (e.g., visual vs. auditory) or on other significant differences in processing (e.g., text vs. image). A system is designated unimodal if it has only one modality implemented, and multimodal if it has more than one. When multiple modalities are available for some tasks or aspects of a task, the system is said to have overlapping modalities; when multiple modalities are available for all tasks or aspects of a task, the system is said to have redundant modalities. Multiple modalities can be used in combination to provide complementary methods that may be redundant but convey information more effectively. Modalities can be generally defined in two forms: computer–human and human–computer modalities.
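As a rough illustration of the overlapping/redundant distinction (a sketch, not part of the original article; the task names and the classify_system helper are hypothetical), the following Python snippet classifies a system from a mapping of tasks to the modalities that can perform them:

```python
# Illustrative sketch only: the task names and classify_system helper are hypothetical,
# chosen to mirror the definitions of unimodal, overlapping, and redundant given above.

def classify_system(task_modalities: dict[str, set[str]]) -> str:
    """Classify a system from a mapping of task -> modalities that can perform it."""
    all_modalities = set().union(*task_modalities.values())
    if len(all_modalities) <= 1:
        return "unimodal"
    if all(len(mods) > 1 for mods in task_modalities.values()):
        return "multimodal (redundant modalities)"    # every task has more than one modality
    if any(len(mods) > 1 for mods in task_modalities.values()):
        return "multimodal (overlapping modalities)"  # only some tasks have more than one
    return "multimodal"

print(classify_system({"enter text": {"keyboard", "speech recognition"},
                       "confirm": {"touchscreen"}}))
# -> multimodal (overlapping modalities)
print(classify_system({"enter text": {"keyboard", "speech recognition"},
                       "confirm": {"touchscreen", "speech recognition"}}))
# -> multimodal (redundant modalities)
```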
Computer–Human modalities

Computers utilize a wide range of technologies to communicate and send information to humans:

Common modalities
Vision – computer graphics, typically through a screen
Audition – various audio outputs
Tactition – vibrations or other movement

Uncommon modalities
Gustation (taste)
Olfaction (smell)
Thermoception (heat)
Nociception (pain)
Equilibrioception (balance)

Any human sense can be used as a computer-to-human modality. However, the modalities of seeing and hearing are the most commonly employed, since they are capable of transmitting information at a higher speed than other modalities: 250 to 300 and 150 to 160 words per minute, respectively. Though not commonly implemented as a computer–human modality, tactition can achieve an average of 125 wpm through the use of a refreshable Braille display. Other more common forms of tactition are smartphone and game controller vibrations.

Human–computer modalities

Computers can be equipped with various types of input devices and sensors to allow them to receive information from humans. Common input devices are often interchangeable if they have a standardized method of communication with the computer and afford practical adjustments to the user. Certain modalities can provide a richer interaction depending on the context, and having options for implementation allows for more robust systems.

Simple modalities
Keyboard
Pointing device
Touchscreen

Complex modalities
Computer vision
Speech recognition
Motion
Orientation

With the increasing popularity of smartphones, the general public are becoming more comfortable with the more complex modalities. Motion and orientation are commonly used in smartphone mapping applications. Speech recognition is widely used with virtual assistant applications. Computer vision is now common in camera applications used to scan documents and QR codes.
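The claim that input devices are interchangeable when they share a standardized method of communication can be made concrete with a small sketch (illustrative only; the InputEvent type and handler below are hypothetical, not from the article): different input modalities are normalized to one event format, so the application logic does not depend on which device produced the input.

```python
# Minimal sketch (not from the article): several input modalities are normalized to a
# single, standardized event type, so the handler does not care which device produced it.
from dataclasses import dataclass

@dataclass
class InputEvent:
    modality: str   # e.g. "keyboard", "touchscreen", "speech recognition"
    action: str     # normalized action understood by the application
    payload: str = ""

def handle(event: InputEvent) -> None:
    # Application logic depends only on the standardized event, not on the device.
    if event.action == "enter_text":
        print(f"text entered via {event.modality}: {event.payload}")
    elif event.action == "submit":
        print(f"form submitted via {event.modality}")

# The same action can arrive through interchangeable modalities.
handle(InputEvent("keyboard", "enter_text", "hello"))
handle(InputEvent("speech recognition", "enter_text", "hello"))
handle(InputEvent("touchscreen", "submit"))
```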
Using multiple modalities

Having multiple modalities in a system gives more affordance to users and can contribute to a more robust system. Having more also allows for greater accessibility for users who work more effectively with certain modalities. Multiple modalities can be used as backup when certain forms of communication are not possible. This is especially true in the case of redundant modalities, in which two or more modalities are used to communicate the same information. Certain combinations of modalities can add to the expression of a computer–human or human–computer interaction, because each modality may be more effective at expressing one form or aspect of information than others.
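As a sketch of redundant modalities serving as a backup (illustrative only; show_on_screen and speak are hypothetical stand-ins for a display update and a text-to-speech call), the same notification is delivered through both a visual and an audio channel, and the visual channel alone still conveys the information when audio output is unavailable:

```python
# Sketch of redundant output modalities used as a backup: the same message is sent
# through every available channel, so losing one channel does not lose the information.
def show_on_screen(message: str) -> None:
    print(f"[screen] {message}")          # visual modality

def speak(message: str) -> None:
    print(f"[audio] {message}")           # hypothetical stand-in for text-to-speech

def notify(message: str, audio_available: bool = True) -> None:
    show_on_screen(message)
    if audio_available:
        speak(message)                    # redundant channel carrying the same information
    # If audio is unavailable, the visual modality alone still conveys the message.

notify("Battery low")
notify("Battery low", audio_available=False)
```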
There are six types of cooperation between modalities, which help define how a combination or fusion of modalities works together to convey information more effectively:

Equivalence: information is presented in multiple ways and can be interpreted as the same information
Specialization: a specific kind of information is always processed through the same modality
Redundancy: multiple modalities process the same information
Complementarity: multiple modalities take separate information and merge it
Transfer: a modality produces information that another modality consumes
Concurrency: multiple modalities take in separate information that is not merged

Complementary-redundant systems are those which have multiple sensors that form one understanding or dataset, and the more effectively the information can be combined without duplicating data, the more effectively the modalities cooperate. Having multiple modalities for communication is common, particularly in smartphones, and often their implementations work together towards the same goal, for example gyroscopes and accelerometers working together to track movement.
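One common way gyroscope and accelerometer readings are combined in practice is a complementary filter, which blends the gyroscope's fast but drifting angle estimate with the accelerometer's noisy but drift-free gravity reference. The sketch below is illustrative only; the blending coefficient and the sample readings are assumptions, not values from the article:

```python
import math

def complementary_filter(angle_prev: float, gyro_rate: float,
                         accel_x: float, accel_z: float,
                         dt: float, alpha: float = 0.98) -> float:
    """Fuse gyroscope and accelerometer readings into one pitch estimate (radians).

    The gyroscope angle (integrated rate) tracks fast motion but drifts over time;
    the accelerometer angle (gravity direction) is noisy but does not drift.
    """
    gyro_angle = angle_prev + gyro_rate * dt      # short-term estimate from the gyroscope
    accel_angle = math.atan2(accel_x, accel_z)    # long-term gravity reference
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle

# Toy readings (made up for illustration): the device is held still, tilted about 0.1 rad.
angle = 0.0
for _ in range(100):
    angle = complementary_filter(angle, gyro_rate=0.0,
                                 accel_x=math.sin(0.1), accel_z=math.cos(0.1), dt=0.01)
print(f"estimated pitch: {angle:.3f} rad")  # converges toward the 0.1 rad tilt
```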
See also

Multimodal interaction – Form of human-machine interaction using multiple modes of input/output
Multimodal learning – Machine learning methods using multiple input modalities
Multisensory integration – Study of senses and nervous system
User interface – Means by which a user interacts with and controls a machine

References

Karray, Fakhreddine; Alemzadeh, Milad; Saleh, Jamil Abou; Arab, Mo Nours (March 2008). "Human-Computer Interaction: Overview on State of the Art". International Journal on Smart Sensing and Intelligent Systems. 1 (1): 137–159. doi:10.21307/ijssis-2017-283. Archived from the original (PDF) on April 30, 2015. Retrieved April 21, 2015.
Jing Yu Koh; Salakhutdinov, Ruslan; Fried, Daniel (2023). "Grounding Language Models to Images for Multimodal Inputs and Outputs". arXiv:2301.13823.
Palanque, Philippe; Paterno, Fabio (2001). Interactive Systems. Design, Specification, and Verification. Springer Science & Business Media. p. 43. ISBN 9783540416630.
Ziefle, M. (December 1998). "Effects of display resolution on visual performance". Human Factors. 40 (4): 554–68. doi:10.1518/001872098779649355. PMID 9974229.
"Braille". ACB. American Council of the Blind. Retrieved 21 April 2015.
Bainbridge, William (2004). Berkshire Encyclopedia of Human-Computer Interaction. Berkshire Publishing Group LLC. p. 483. ISBN 9780974309125.
Grifoni, Patrizia (2009). Multimodal Human Computer Interaction and Pervasive Services. IGI Global. p. 37. ISBN 9781605663876.
Williams, J. R. (1998). "Guidelines for the use of multimedia in instruction". Proceedings of the Human Factors and Ergonomics Society 42nd Annual Meeting, 1447–1451.
