Crowdsourcing has proved to be one of the key technologies that can change the design of our society. In the near future, we may have more AI “workers” than human workers, working alongside them to solve a variety of problems. The JST CREST CyborgCrowd project started in 2016 to extend crowdsourcing technologies to a hybrid workforce consisting of human and AI-powered machine workers. The project has two goals: (1) to extend the traditional workforce to include a diverse set of workers, including AI workers and people who were not considered part of the workforce in traditional organizations, and (2) to automate the task assignment process and dynamically optimize assignments, considering not only employers’ but also workers’ benefits. This talk will present our modeling and optimization techniques, developed to leverage the power of this diverse set of workers in a flexible way.
Atsuyuki Morishima is a professor at the University of Tsukuba, Japan. His research interests include computational division of labor, data-centric human-machine computation, data integration, and digital libraries. He is currently an associate editor of the VLDB Journal and a steering committee member of ICADL, DASFAA, and ACM/IEEE JCDL. He is the leader of the JST CREST CyborgCrowd project and the representative of the Crowd4U initiative. He has been involved in many real-world crowdsourcing projects on digital libraries, digital archives, natural disaster response, and smart cities. He has received international and domestic awards, including second runner-up for the ACM SIGMOD 2001 best paper award, the 2004 IPSJ best paper award, the CAiSE 2015 best paper award, and the 2018 Emerald Literati Award. In the past, he served as chair of SIG-DBS of the Information Processing Society of Japan and as a steering committee member of the ACM SIGMOD Japan Chapter. He served as a track chair at ACM CIKM 2008 and as program co-chair of ICADL 2016 and JCDL 2019.
Scaling the Development and Automating the Validation of Autonomous Systems towards Safe Deployment in Real World
Autonomy requires the prime system functionality to run flawlessly, and yet its real-world deployment demands safety mechanisms and mitigation strategies that run in an equally perfect fashion.
Hence, there is a substantial and increasingly complex engineering effort between demonstrating a few features of an autonomous system and deploying such systems safely and at scale in the real world.
This implies the need to invent new automated solutions for continuously improving the development, verification, and validation processes for deployable autonomous systems.
In this talk, I discuss the use of replay, simulation, and scalable computing infrastructure for building safe autonomous vehicles, based on research within the OpenSCENARIO 2.0 community and collaboration with car manufacturers.
I also present a set of technical solutions for discovering the unknown unknowns while developing autonomous vehicles. These solutions, when embedded in proper computing infrastructure workflows, constitute a scalable end-to-end validation methodology for domain-specific autonomy. The goal is to enable the deployment of commercial autonomous vehicles on the road according to the strictest safety standards applicable around the globe.
Doctor of Engineering Sciences in CS and EE; NVIDIA Global Head of Verification and Validation
Dr. Justyna Zander works on scaling and automating autonomous vehicle simulation and validation efforts at NVIDIA to enable safe fleet deployment. Previously, she was with Intel, MathWorks, the White House, the German car industry, and the Fraunhofer Institutes. She earned two BSc degrees, an MSc, and a PhD in Germany, and did a postdoc at Harvard University.
Dr. Zander holds three citizenships (Polish, German, and American) and speaks five languages. She has filed for ~30 patents, coauthored ~40 publications and 3 books, been cited over 2,000 times, and reviewed ~500 academic papers on model-based testing and embedded systems design.
She has been recognized internationally with numerous awards (IEEE, European Union, NIST, SWE, SAE, Falling Walls, etc.). For over 20 years she has served as a technical committee member for more than 50 journals and conferences. She advises government strategy and research roadmaps, has been invited by the NSF, the EU Commission, and national councils, and is a member of multiple conference steering committees.
She was listed on Business Insider’s annual list of the Most Powerful Women Engineers of 2018, and in 2017 she won the SWE Leader Award as international recognition for breakthrough work in computer science and engineering.
Software can do wrong, as prominently witnessed by Cambridge Analytica and defeat devices in the automotive industry. But what is wrong, and who is responsible? As software continues to permeate our daily lives, this question, also at the core of the ongoing debate about the regulation of AI in Europe, becomes increasingly relevant – for engineers, for companies, for educators, for regulators, and for society as a whole. One prominent approach today is the formulation of codes of conduct. Unfortunately, a common perception is that these catalogs of values fail to provide useful guidance to engineers. This is because the respective values are often in conflict with each other (for instance, privacy vs. transparency), and because software and software engineering are fundamentally context-specific, which makes the existence of a universally applicable set of values very unlikely.
As a consequence, ethical considerations need to be embedded into software development activities in a project-specific manner, and not just for AI-based systems. Because agile development methodologies fundamentally rest on the concepts of short-term planning, empowerment, incrementality, and learning, they turn out to be particularly well suited for embedding ethical deliberations into the development process. In this talk, we will first argue why this is the case. In a second step, we present our schema for ethical deliberation in agile processes, the result of a long-standing cooperation between software engineers and (business) ethicists at the Bavarian Research Institute for Digital Transformation (https://www.bidt.digital/). We will close by discussing its applicability in industry and education.
Alexander Pretschner is a full professor of software and systems engineering at TUM; the founding director of bidt, the interdisciplinary Bavarian research institute for digital transformation; and a scientific director of fortiss, Bavaria’s research and transfer institute for software-intensive systems. His research interests include all aspects of software engineering, currently focusing on testing, security, and accountability. Prior appointments include full professor at the Karlsruhe Institute of Technology, group manager at Fraunhofer IESE and adjunct associate professor in Kaiserslautern, and senior researcher at ETH Zurich. He holds a PhD from TUM and MS degrees in CS from RWTH Aachen University and the University of Kansas.
Wed 13 Oct
23:59 - 01:30
|Scaling the Development and Automating the Validation of Autonomous Systems towards Safe Deployment in Real World|
Thu 14 Oct
10:00 - 11:30
|Computational Division of Labor: Imagine All the People and AI in the Crowd Working Happily|
Fri 15 Oct
18:00 - 19:30
|Software can do Wrong: On Ethics in Agile Software Engineering|