VL/HCC 2023
Mon 2 - Fri 6 October 2023 Washington, DC, United States

Tue 3 Oct

Displayed time zone: Eastern Time (US & Canada)

09:00 - 12:30 (3h30m) Tutorial: 3D livecoding in the browser with Nodysseus (Tutorials)

Graphical Modelling of Users’ Interpretations of Visual Displays

Instructors: Peter C-H. Cheng, Rachael Fernandez, University of Sussex; Mateja Jamnik, Daniel Raggi, University of Cambridge

Course duration: afternoon, 3 October (half day)

Course outline: The design of visual displays (including graphical user interfaces, information visualizations, and visual languages in general) substantially affects how easily users can understand and exploit task domain content. In cognitive terms, the process of understanding a display builds a user’s internal mental representation, an interpretation. The specific structure of the interpretation, in turn, determines what content is salient to the user, what inferences they are likely to draw, and how they organize their task sub-goals. It is therefore valuable for researchers and designers of visual languages to anticipate the interpretive structures that users of displays are likely to construct mentally. Computational cognitive architectures (e.g., ACT-R, SOAR) can be used to model users’ mental representations, but they are specialist tools with steep learning curves and are laborious to use.

This tutorial will provide participants with a theoretically sound and practical approach to the modeling of users’ interpretations of visual displays. The approach encompasses:

  • A theory of the interpretation of visual representations that is grounded in established ideas from cognitive science – Representational Interpretive Structure Theory (RIST) [1].
  • A straightforward graphical language/notation for building models of interpretations, which embeds and operationalizes RIST to enable the construction of theoretically consistent models – RIS-Notation (RISN) [2].
  • An easy-to-master web-browser editor with rich modelling tools for the ready construction of RISN models as networks of hierarchical graphical components – RIS-Editor (RISE) [3].
  • A growing set of interpretive structures, idioms, which are intermediate-level components commonly found across diverse notations and graphical displays [3].
  • A set of guidelines for analyzing visual displays and modelling users’ interpretations of them in RISN.

The tutorial is organized around graduated modelling exercises. Participants will learn how to build models in RISN using RISE. A presentation of the underlying theory, RIST, will be given so that participants understand the motivation and cognitive constraints behind the graphical modelling notation, which will aid their analysis of interpretations. We will explore the role of interpretations in analyzing the efficacy and design of visual languages. For example, how dramatically does a user’s interpretation of a graphical display vary with their experience of the target domain, their familiarity with the type of display, or even the specific task goal they bring to the display? What are the common idioms that allow users to readily interpret new displays they have never seen before? How can RISN models be used to assess the relative efficacy of alternative displays of the same domain content?

Participants: The tutorial will give researchers and designers a usable approach for examining the cognitive impact of visual displays. No prerequisite knowledge of cognitive science or specific analysis tools is required. Ideally, participants should bring their own laptop or tablet computer to benefit fully from the exercises.

The tutorial team: The lead instructor is Peter Cheng, a Professor of Cognitive Science at the University of Sussex, UK, where he heads the Representational Systems Lab in the Department of Informatics. The other instructors are members of the rep2rep (Automating Representation Choice for AI Tools) project, a collaboration with Professor Mateja Jamnik’s group in the Department of Computer Science and Technology at the University of Cambridge, UK.

https://www.sussex.ac.uk/research/labs/representational-systems-lab

https://sites.google.com/site/myrep2rep/

[1] Cheng, P. C.-H. (2020). A sketch of a theory and modelling notation for elucidating the structure of representations. In A. Pietarinen, P. Chapman, L. Bosveld-de Smet, V. Giardino, J. Corter, & S. Linker (Eds.), Diagrammatic Representation and Inference. Diagrams 2020. Lecture Notes in Computer Science, vol. 12169. Cham: Springer.

[2] Cheng, P. C.-H., Stockdill, A., Garcia Garcia, G., Raggi, D., & Jamnik, M. (2022). Representational Interpretive Structure: Theory and Notation. In V. Giardino, S. Linker, R. Burns, F. Bellucci, J.-M. Boucheix, & P. Viana (Eds.), Diagrammatic Representation and Inference (pp. 54-69). Cham: Springer International Publishing.

[3] Stockdill, A., Garcia Garcia, G., Cheng, P. C.-H., Raggi, D., & Jamnik, M. (2022). Cognitive modeling of interpretations of representations. In J. Macbeth, L. Gilpin, & M. T. Cox (Eds.), Proceedings of the Tenth Annual Conference on Advances in Cognitive Systems (pp. 36-56). Dayton, OH: Wright State University.

3D livecoding in the browser with Nodysseus

Instructor: Ulysses Popple, independent

Course duration: morning, 3 October (half day)

Nodysseus is a visual language and editor for programming in JavaScript that targets both traditional computers and mobile devices. This tutorial is an introduction to Nodysseus that covers adding 3D objects to a basic scene with three.js, creating reusable HTML tools to manipulate those objects, and collaborating with other participants through the tool. By the end of the tutorial, participants will feel comfortable iterating on programming ideas from their mobile devices and collaborating on those ideas with other users.

In the first section of the tutorial, we’ll make a scene using three.js. We’ll cover important basics like what a Nodysseus graph is, how nodes function within the graph, and how a simple program is structured. Participants who finish quickly can experiment with using JavaScript to manipulate objects in the scene, or dive into other three.js nodes and functionality.
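
For orientation, here is a rough sketch in plain JavaScript of the kind of three.js scene the first section builds; in the tutorial the same steps are assembled as nodes in a Nodysseus graph, so the actual structure will differ.

    // Minimal three.js scene sketch: a spinning cube (illustrative only).
    import * as THREE from 'three';

    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(
      75, window.innerWidth / window.innerHeight, 0.1, 1000);
    camera.position.z = 3;

    const renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

    // One 3D object: a unit cube with a normal-shaded material.
    const cube = new THREE.Mesh(
      new THREE.BoxGeometry(1, 1, 1),
      new THREE.MeshNormalMaterial());
    scene.add(cube);

    // Render loop: rotate the cube slightly each frame.
    renderer.setAnimationLoop(() => {
      cube.rotation.y += 0.01;
      renderer.render(scene, camera);
    });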

The second section will focus on creating small tools with HTML that can be used to manipulate the objects in the three.js scene. These include sliders for position, simple checkboxes to enable and disable functionality, and other small compositions of basic HTML elements.
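
As a plain-JavaScript sketch of what such a tool amounts to (the names here are illustrative, not taken from the tutorial), a position slider can be a range input wired to the cube from the scene above:

    // Hypothetical HTML tool: a range slider that moves `cube`
    // (the mesh from the scene sketch above) along the x axis.
    const slider = document.createElement('input');
    slider.type = 'range';
    slider.min = '-5';
    slider.max = '5';
    slider.step = '0.1';
    slider.value = '0';
    slider.addEventListener('input', () => {
      cube.position.x = parseFloat(slider.value);
    });
    document.body.appendChild(slider);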

For the remainder of the tutorial, participants will have the opportunity to collaborate with each other on mini-projects. Together, small groups can divide up tasks and work on graphs for other people in the team to use, or they can collaborate on a single graph in real time.

About the author: Ulysses Popple is an avid programmer across disciplines and stacks, but finds himself most at home in JavaScript and node-based programming languages. After freelancing for many years on interactive installations and apps (using a mix of C++, JavaScript, Unity, and TouchDesigner), he took a break from programming to learn visual effects with Houdini. Delving deep into the node-based tools that power the visual effects and video game industries inspired him to write his own. Today he works at Framestore Labs on their smart signage products, which can be seen in a wide range of places including office buildings, malls, museums, and Times Square.

Call for Tutorials

The 2023 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC) invites proposals for tutorials to be held in conjunction with the symposium. Tutorials let conference attendees expand their knowledge: they can introduce researchers to emerging areas or new technologies, or provide an overview of the state of the art in an existing research area. Tutorials should be on topics related to the conference, such as (but not limited to) end-user programming, visual programming, domain-specific languages, software visualization, and CS education.

Tutorial organizers must submit a proposal package, which may be either accepted or rejected. The preferred format for tutorials is a half-day or full-day standalone session; however, we are also happy to consider alternative or experimental topics and formats. If the tutorial is accepted, both the symposium organizers and the tutorial organizers will publicize the event to help ensure that it draws a sufficient number of attendees. Please note that a tutorial may be cancelled due to low registration if the number of participants (including the organizers) is fewer than fifteen.

Please submit proposals via email by June 30, 2023. Decision notifications will be sent by Friday, July 7, 2023 at the latest.

Instructions for Tutorial Proposals

Prospective tutorial instructors must submit a tutorial proposal package, which will be reviewed and either accepted or rejected. If the tutorial is accepted, both the conference organizers and the tutorial instructors will publicize it to help ensure sufficient attendance. The tutorial package must contain:

  1. A course abstract of at most 500 words that lists
    • title
    • instructor(s) name and affiliation
    • course duration
    • a description of the benefits that attendees will receive from this course, the features of the course, and background on the instructor(s)
    Feel free to use bulleted lists in the abstract as needed. This abstract will be used to advertise the tutorial if it is accepted.
  2. A course description of 1–4 pages. This should contain
    • proposed duration of the tutorial (half day or full day, though shorter tutorials could also be proposed)
    • learning objectives
    • justification: Why will this tutorial be of interest to the VL/HCC community?
    • content: Describe in detail the material that will be covered.
    • presentation format and schedule: Describe in detail the format of the presentation and how it will be organized.
    • tutorial history: Describe the history of the tutorial, if any.

Questions? Use the VL/HCC Tutorials contact form.