ACM-SIGCHI supported Summer School on Intelligent User Interfaces for Cultural Heritage

May 18th - May 22nd, 2020, Haifa, Israel

Summer school program

Talks


Nicu Sebe - University of Trento

Title: Image and Video Generation: A Deep Learning Approach

Abstract: Image and video generation is an emerging topic in many research communities, with a significant impact on many application domains. This presentation will cover several deep learning frameworks for image animation and video generation. Given an input image with a target object and a driving video sequence depicting a moving object, the approach is to generate a video in which the target object is animated according to the driving sequence. This is achieved through a deep architecture that decouples appearance and motion information. The framework consists of three main modules: (i) a Keypoint Detector, trained in an unsupervised manner to extract object keypoints; (ii) a Dense Motion prediction network that generates dense heatmaps from sparse keypoints, in order to better encode motion information; and (iii) a Motion Transfer Network, which uses the motion heatmaps and appearance information extracted from the input image to synthesize the output frames. We demonstrate the effectiveness of our method on several benchmark datasets, spanning a wide variety of object appearances, and show that our approach outperforms state-of-the-art image animation and video generation methods. Examples of animating art portraits will be provided.
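
As a rough illustration of the three-module decoupling described above, here is a minimal PyTorch-style sketch; the layer sizes, module internals, and 64x64 resolution are illustrative assumptions, not the actual architecture presented in the talk.

    import torch
    import torch.nn as nn

    class KeypointDetector(nn.Module):
        """(i) Predicts K sparse keypoints per frame (no keypoint labels needed)."""
        def __init__(self, k=10):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, k, 3, stride=2, padding=1))

        def forward(self, frame):                     # frame: (B, 3, H, W)
            heat = self.net(frame)                    # (B, K, h, w)
            b, k, h, w = heat.shape
            p = heat.flatten(2).softmax(-1).view(b, k, h, w)
            ys = torch.linspace(-1, 1, h)
            xs = torch.linspace(-1, 1, w)
            # soft-argmax turns each heatmap into one (x, y) coordinate
            return torch.stack([(p.sum(2) * xs).sum(2),
                                (p.sum(3) * ys).sum(2)], dim=-1)   # (B, K, 2)

    class DenseMotionNet(nn.Module):
        """(ii) Expands sparse keypoint displacements into a dense motion field."""
        def __init__(self, k=10, size=64):
            super().__init__()
            self.size = size
            self.net = nn.Sequential(
                nn.Conv2d(k, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 2, 3, padding=1))       # 2-channel flow field

        def forward(self, kp_src, kp_drv):            # both: (B, K, 2)
            disp = (kp_drv - kp_src).norm(dim=-1)     # crude per-keypoint magnitude
            maps = disp[..., None, None].expand(-1, -1, self.size, self.size)
            return self.net(maps)                     # (B, 2, size, size)

    class MotionTransferNet(nn.Module):
        """(iii) Warps source appearance with the dense motion, then refines."""
        def __init__(self):
            super().__init__()
            self.refine = nn.Conv2d(3, 3, 3, padding=1)

        def forward(self, src, flow):                 # flow: (B, 2, H, W)
            b, _, h, w = flow.shape
            ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                                    torch.linspace(-1, 1, w), indexing="ij")
            identity = torch.stack([xs, ys], dim=-1).expand(b, -1, -1, -1)
            grid = identity + flow.permute(0, 2, 3, 1)   # backward-warp coordinates
            warped = nn.functional.grid_sample(src, grid, align_corners=False)
            return self.refine(warped)

    # Usage: animate a source portrait with one driving frame (64x64 here).
    src, drv = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
    kd, dm, mt = KeypointDetector(), DenseMotionNet(), MotionTransferNet()
    out = mt(src, dm(kd(src), kd(drv)))               # (1, 3, 64, 64)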

Aggeliki Antoniou - University of Peloponnese, Greece

Title: Cultural Informatics: Possibilities and Challenges

Paul Mulholland - Open University, UK

Title: Interactive Narratives for Cultural Heritage

Abstract: Museums not only provide visitors with access to their collections but also use narrative techniques to offer interpretations of the museum artefacts and help the visitor to construct interpretations for themselves. Within the museum, narratives can be realised not only as written texts but also as physical structures that the visitor reads as they navigate the museum space. Technology, such as digital museum guides, can augment the physical narrative with additional resources that can be personalised according to visitor location and interests. However, care has to be taken to align the content and suggestions of the museum guide with the physical pull of museum space. Museum guides can use different modalities such as audio and augmented reality to better complement the physical museum experience. Technology can also be used to collect and share stories from visitors as well as present stories authored by the museum. Narrative museum technologies can be developed with different aims and evaluated in different ways. This can encompass educational measures (e.g. increased understanding of a historical domain), HCI measures (e.g. usability and aesthetics of the experience) as well as broader effects of the experience, such as emotional response and feelings of empathy toward others. This talk will consider the nature of narrative, how narrative principles are applied in museums, ways in which technology can be used to enhance the narrative experience, effects the technology can have on the visitors, and how those effects can be evaluated.


Rita Cucchiara - University of Modena, Italy

Title: Visual and text embedding for understanding cultural heritage

Oliviero Stock - FBK/irst, Italy

Title: IUI 4 CH & CH 4 IUI

Tsvi Kuflik - University of Haifa

Title: The museum as a living lab

Alan Wecker - University of Haifa

Title: Personality in Cultural Heritage User Experiences

Rossana Damiano - University of Turin, Italy

Title: Connecting Cultural Heritage representation to exploration: from Linked Data to smart environments

Sorin Hermon - The Cyprus Institute, Cyprus

Title: Assuring reliability, accuracy and FAIR-ness of Cultural Heritage data for intelligent user interfaces in smart environments

Fabio Remondino - FBK, Italy

Title: 3D point cloud generation and classification: solutions and challenges

Abstract: The talk will present current solutions (hardware and algorithms) for deriving dense point clouds, in particular for heritage scenarios. These include 3D imaging sensors, photogrammetric solutions, mobile mapping systems, handheld or static scanners, UAV/drone platforms, etc. The generated point clouds normally feature geometric and radiometric information, together with some information related to the acquisition (e.g. normals, point quality, etc.). Such point clouds need to be enriched with semantic information in order to be better exploited and made more useful to non-experts. For this purpose, state-of-the-art machine/deep learning methods will be shown and discussed.
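
To make the idea of per-point geometric information concrete, here is a small NumPy sketch of common covariance-based features (linearity, planarity, sphericity) computed over local neighbourhoods; the neighbourhood size and feature definitions are standard choices in the literature, not the specific pipeline from the talk.

    import numpy as np

    def covariance_features(points, k=20):
        """points: (N, 3) -> (N, 3) array of [linearity, planarity, sphericity]."""
        feats = np.zeros((len(points), 3))
        for i, p in enumerate(points):
            # brute-force k-nearest neighbours; a KD-tree would be used at scale
            idx = np.argsort(np.linalg.norm(points - p, axis=1))[:k]
            cov = np.cov(points[idx].T)
            l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]   # l1 >= l2 >= l3
            feats[i] = [(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1]
        return feats

    cloud = np.random.rand(1000, 3)           # stand-in for a scanned structure
    features = covariance_features(cloud)     # ready to feed a classifier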

Eleonora Grilli - FBK, Italy

Title: Demo on 3D heritage classification with machine learning methods

Abstract: The demo will showcase how heritage point clouds can be semantically segmented (i.e. classified) using machine learning methods. Having clearly defined the needs and the necessary classes, specific prediction algorithms will be trained on a small portion of the data in order to transfer the learned information to the rest of the dataset. Generalization to other, unseen data will also be shown. Pros and cons with respect to deep learning approaches will also be reported.
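
A minimal scikit-learn sketch of this workflow, assuming per-point features such as those in the previous sketch; the feature dimensionality, class set, and training fraction are placeholders, not the demo's actual settings.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # per-point features, e.g. geometry plus radiometry (hypothetical here)
    X = np.random.rand(10_000, 5)
    y = np.random.randint(0, 4, 10_000)       # e.g. wall / column / roof / ground

    # annotate only a small slice of the cloud, as in the demo
    n_labelled = 500
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[:n_labelled], y[:n_labelled])

    # transfer the learned classes to the remaining, unlabelled points
    predicted = clf.predict(X[n_labelled:])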

Massimo Zancanaro - University of Trento, Italy

Title: End-User Programming for Cultural Heritage appreciation: experiences and opportunities

Abstract: End-User Programming (or End-User Development, EUD) is an approach to the design of digital artifacts that allows end-users, who are not primarily interested in software per se, to modify, extend and evolve those artifacts to better fit their needs. EUD aims at fostering appropriation by empowering users to take control of digital artifacts. There have been several experiences of EUD aimed at involving curators and even visitors in personalizing digital artifacts for cultural heritage appreciation. In this talk, I will present some previous research and try to outline the road ahead.
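
EUD tools in this space are often built around trigger-action rules that a curator can compose without writing code. Here is a toy Python sketch of such a rule engine; the context fields, triggers, and actions are hypothetical examples, not a system from the talk.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Rule:
        trigger: Callable[[dict], bool]     # predicate over the visit context
        action: Callable[[dict], None]      # what the exhibit should do

    # rules a curator might compose in a visual EUD editor
    rules = [
        Rule(trigger=lambda ctx: ctx["room"] == "frescoes" and ctx["age"] < 12,
             action=lambda ctx: print("Play the children's audio story")),
        Rule(trigger=lambda ctx: ctx["dwell_seconds"] > 60,
             action=lambda ctx: print("Offer in-depth curator commentary")),
    ]

    def on_context_update(ctx):
        # fire every rule whose trigger matches the current visit context
        for rule in rules:
            if rule.trigger(ctx):
                rule.action(ctx)

    on_context_update({"room": "frescoes", "age": 9, "dwell_seconds": 75})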

Ilan Shimshoni - University of Haifa, Israel

Title: Solving archeological puzzles

Abstract: This talk focuses on the re-assembly of an archaeological artifact, given images of its fragments. This problem can be considered a special, challenging case of puzzle solving. The restricted case of re-assembling a natural image from square pieces has been investigated extensively and was shown to be a difficult problem in its own right. Likewise, the case of matching "clean" 2D polygons/splines based solely on their geometric properties has been studied. But what if these ideal conditions do not hold? This is the problem addressed in the talk. Three unique characteristics of archaeological fragments make puzzle solving extremely difficult: (1) the fragments are of general shape; (2) they are abraded, especially at the boundaries (where the strongest cues for matching should exist); and (3) the domain of valid transformations between the pieces is continuous. The key contribution of this work is a fully-automatic and general algorithm that addresses puzzle solving in this intriguing domain. We show that our approach manages to correctly reassemble dozens of broken artifacts and frescoes.
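
To give a feel for the pairwise-matching core of such a solver, here is a toy NumPy sketch that scores how well two fragment boundaries align under a continuous rigid transform; the coarse angle grid and plain nearest-point distance stand in for the far more robust, abrasion-aware matching the talk describes.

    import numpy as np

    def align_score(boundary_a, boundary_b, n_angles=72):
        """Return the best (score, angle) aligning boundary_b to boundary_a."""
        best = (np.inf, 0.0)
        for theta in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
            c, s = np.cos(theta), np.sin(theta)
            rotated = boundary_b @ np.array([[c, -s], [s, c]]).T
            # translate so centroids coincide, then measure the residual gap
            shifted = rotated - rotated.mean(0) + boundary_a.mean(0)
            dists = np.linalg.norm(shifted[:, None] - boundary_a[None], axis=2)
            score = dists.min(axis=1).mean()    # mean nearest-neighbour distance
            if score < best[0]:
                best = (score, theta)
        return best

    a = np.random.rand(200, 2)                  # sampled boundary of fragment A
    b = np.random.rand(200, 2)                  # sampled boundary of fragment B
    score, angle = align_score(a, b)            # lower score = better match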

Joel Lanir - University of Haifa, Israel

Title: Context-aware computing in the museum environment

Abstract: Context-aware systems and applications have become common in our daily lives. These programs and services react to the environment and adapt their behavior to anticipate users’ needs. In order to understand the user’s context, different sensors, placed in the environment, on the user’s devices, or on the users themselves, are used, creating a ubiquitous computing environment. The museum is a perfect place to explore context-awareness, as visitors are mobile, wish to receive information about their surroundings, and are willing to experiment with novel technologies. In this talk, I will outline several studies that we conducted in a smart museum environment that examine various context-aware issues.
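
A small sketch of the basic context-awareness loop: estimate the visitor's location from sensor readings (here, Bluetooth-beacon signal strength), then adapt the presented content to location and declared interests. The beacon IDs, exhibits, and catalogue are hypothetical, not from the studies described.

    def nearest_exhibit(rssi_readings, beacon_to_exhibit):
        """rssi_readings: {beacon_id: dBm}; stronger (less negative) = closer."""
        beacon = max(rssi_readings, key=rssi_readings.get)
        return beacon_to_exhibit[beacon]

    def select_content(exhibit, interests, catalogue):
        """Prefer content matching the visitor's declared interests."""
        options = catalogue[exhibit]
        matches = [c for c in options if c["topic"] in interests]
        return matches[0] if matches else options[0]

    beacon_to_exhibit = {"b1": "Roman mosaics", "b2": "Bronze figurines"}
    catalogue = {
        "Roman mosaics": [{"topic": "art", "text": "Mosaic techniques..."},
                          {"topic": "history", "text": "Life in Roman Haifa..."}],
        "Bronze figurines": [{"topic": "history", "text": "Trade routes..."}],
    }

    exhibit = nearest_exhibit({"b1": -48, "b2": -71}, beacon_to_exhibit)
    print(select_content(exhibit, {"history"}, catalogue)["text"])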

Aaron Quigley - University of St Andrews, Scotland

Title: Ubiquitous user interfaces

Abstract: UbiComp, or Ubiquitous Computing, is a model of computing in which computation is everywhere and computer functions are integrated into everything. It can be built into the basic objects, environments, and activities of our everyday lives in such a way that no one will notice its presence (Weiser, 1999). Such a model of computation will “weave itself into the fabric of our lives, until it is indistinguishable from it” (Weiser, 1999). Indeed, everyday objects will be places for sensing, input, and processing, along with user output (Greenfield, 2006). Within such a model of computing, we need to remind ourselves that it is the user interface that represents the point of contact between a computer system and a human, both in terms of input to the system and output from the system.

There are many facets of UbiComp, from low-level sensor technologies in the environment, through the collection, management, and processing of context data, to the middleware required to enable the dynamic composition of the devices and services envisaged. These hardware, software, systems, and services act as the computational edifice around which we need to build our Ubiquitous User Interface (UUI). The ability to provide natural inputs and outputs from a system, allowing it to remain in the periphery, is hence the central challenge in UUI design and the focus of this talk.

Today, displays and devices are all around us, on and around our body, fixed and mobile, bleeding into the very fabric of our day-to-day lives. Displays come in many forms, such as smartwatches, head-mounted displays, tablets, and fixed, mobile, ambient and public displays. However, we know more about the displays connected to our devices than they know about us. Displays, and the devices they are connected to, are largely ignorant of the context in which they sit, including physiological, environmental and computational state. They do not know about the physiological differences between people, the environments they are being used in, or whether they are being used by one person or many.

This talk considers our display environments as an ecosystem and asks how we might model, measure, predict and adapt how people can use and create UUIs in a myriad of settings. With modeling, we seek to represent the physiological differences between people and use the models to adapt and personalise designs and user interfaces. With measurement and prediction, we seek to employ various computer vision and depth sensing techniques to better understand how displays are used. And with adaptation, we aim to explore subtle techniques and means to support the diverging input and output fidelities of display devices. Our UbiComp user interface is complex and constantly changing, and affords us an ever-changing computational and contextual edifice. As part of this, the display elements need to be understood as an adaptive display ecosystem, rather than simply pixels, if we are to realise a Ubiquitous User Interface.
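
As one concrete flavour of the adaptation described above, here is a toy Python sketch that scales a display's output fidelity to the viewer's distance, as might be estimated by a depth sensor; the modes and thresholds are arbitrary illustrative choices, not results from the talk.

    def adapt_display(viewer_distance_m):
        """Map an estimated viewer distance to an output mode for the display."""
        if viewer_distance_m < 0.5:
            return {"mode": "touch", "detail": "full text and controls"}
        if viewer_distance_m < 2.0:
            return {"mode": "glance", "detail": "large headline and image"}
        return {"mode": "ambient", "detail": "colour or icon only"}

    # the same display serves touch, glance, and ambient viewers differently
    for d in (0.3, 1.2, 4.0):
        print(d, adapt_display(d))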