Viewing Autonomic Computing through the Lens of Embodied Artificial Intelligence: A Self-Debate (Keynote)
A topic of central interest to the autonomic computing community is how to manage computing or other self-adaptive systems in the face of changing circumstances and events. In years past, I had advocated an approach in which high-level goals are expressed in the form of utility functions, and optimization and/or feedback control techniques are used in conjunction with system models to adjust resources and tuning parameters to maximize utility. After outlining and illustrating the general concepts behind this idea, I will attack it as being fundamentally flawed. Then, I will introduce my recent work on embodied Artificial Intelligence agents, which are somewhat like Alexa, Siri, and other voice-driven assistants with which we are familiar in the consumer space, with two major differences. First, they are designed to operate in the business realm, where they assist humans with data analysis and decision making. Second, they interact with people multi-modally, using speech in conjunction with non-verbal modalities like pointing and facial expression. My self-debate will then resolve into a happy union of the two ideas, in which I will illustrate how I believe embodied AI agents can solve the fundamental flaw of utility-based autonomic computing. I will conclude by revealing that, at the right level of abstraction, the two fields have much in common, including their software architecture, and much to gain from one another.
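To make the utility-based approach mentioned above concrete, here is a small illustrative sketch, not taken from the talk: all model forms, rates, and names are hypothetical. It allocates a fixed pool of servers between two applications so as to maximize total utility, where each application's utility is a function of its predicted response time under a simple queueing-style system model; the exhaustive search stands in for the optimization or feedback-control loop.

    # Illustrative sketch of utility-driven resource allocation (hypothetical example).

    # Hypothetical system model: predicted response time (s) of an application
    # given its server share, using a simple M/M/1-style approximation.
    def response_time(servers, arrival_rate, service_rate):
        capacity = servers * service_rate
        if capacity <= arrival_rate:
            return float("inf")      # saturated: response time grows without bound
        return 1.0 / (capacity - arrival_rate)

    # Hypothetical utility function encoding a high-level goal such as
    # "keep response time below 0.5 s", with utility falling off as it degrades.
    def utility(resp_time, target=0.5):
        return max(0.0, 1.0 - resp_time / target)

    APPS = {"web": (40.0, 10.0), "batch": (15.0, 10.0)}   # (arrival rate, service rate)
    TOTAL_SERVERS = 10

    # Search over allocations for the one that maximizes total utility.
    best = max(
        ((a, TOTAL_SERVERS - a) for a in range(1, TOTAL_SERVERS)),
        key=lambda alloc: sum(
            utility(response_time(servers, *APPS[name]))
            for servers, name in zip(alloc, APPS)
        ),
    )
    print("best allocation (web, batch):", best)

In a real autonomic manager, the hand-written model and brute-force search would be replaced by learned or analytic system models and by an optimizer or controller running continuously as conditions change.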
Tue 18 May (times shown in the Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna time zone)
16:00 - 17:00 | Keynote 1 | SEAMS 2021 at SEAMS Room | Chair(s): Ingrid Nunes, Universidade Federal do Rio Grande do Sul (UFRGS), Brazil
16:00 (60m) | Keynote: Viewing Autonomic Computing through the Lens of Embodied Artificial Intelligence: A Self-Debate | SEAMS 2021