PhD calls 2021



New PhD positions (with scholarship) are available in the Humanoid Sensing and Perception group at the iCub Facility, Istituto Italiano di Tecnologia.

The positions are available through the PhD course in Bioengineering and Robotics, curriculum on Advanced and Humanoid Robotics.

Theme titles (numbering refers to the complete list of themes offered by the Advanced and Humanoid Robotics curriculum in the online call):
  • Visuo-haptic integration for object manipulation and perception (#25)
  • Learning body-schema for self-perception and tool use (#26)
  • Distributed AI in sensor networks and robotics platforms (#27)
  • Data-efficient object detection and segmentation learning for robotics (#28)


Applications must be submitted online by June 15, 2021, at 12 PM CET.

Detailed instructions for applications are reported here.

Prospective candidates are invited to get in touch with Lorenzo Natale (name.surname@iit.it) for further details.

Detailed descriptions of the themes are reported below.


Visuo-haptic integration for object manipulation and perception


Description: Object manipulation is a fundamental capability for robots and as such has been extensively studied in robotics. In recent research, several data-driven techniques based on deep learning have been proposed, demonstrating remarkable performance, especially in pick-and-place scenarios with robotic grippers (e.g. [1]). The majority of such approaches, however, rely on visual feedback alone to estimate and evaluate grasp candidates, and propose open-loop strategies that do not allow corrective actions to be performed after the initial grasp pose is computed. In humans, on the other hand, grasping and object manipulation are largely influenced by the tactile and haptic feedback that originates during the interaction between the fingers and the object.

In this project we seek to explore how tactile feedback can be used to complement vision during object manipulation. We will start from the state of the art in robot grasping to provide the robot with an initial estimate of the object shape and of candidate grasp poses, and implement active perception strategies that allow the robot to interact with the objects both to i) refine its knowledge of the object's position, pose and shape, and ii) simplify the task by pushing apart or repositioning objects that are in contact. We will consider an initial scenario in which objects are presented in isolation, and then move to a more challenging, cluttered scenario that involves groups of objects in a heap. For this work we will use the iCub humanoid robot, the Panda arm from Franka Emika, and the tactile sensors described in [2-3].
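To make the intended control flow concrete, the sketch below illustrates in simplified Python how tactile feedback could close the loop around a vision-based grasp proposal. It is only a sketch: the Robot interface, the propose_grasps function and the thresholds are hypothetical placeholders written for this call, not an existing API.

    # Minimal sketch (hypothetical interfaces, not an existing API) of a
    # closed-loop grasp in which tactile feedback refines a vision-based
    # grasp candidate before and after the fingers close.

    import numpy as np

    def tactile_grasp_loop(robot, propose_grasps, max_attempts=5,
                           contact_threshold=0.1, step=0.005):
        """Try vision-based grasp candidates; use tactile readings to
        adjust the hand pose and to verify grasp stability."""
        rgb, depth = robot.get_camera_images()       # visual observation
        candidates = propose_grasps(rgb, depth)      # e.g. a 6-DOF grasp network [1]
        for pose in candidates[:max_attempts]:       # pose assumed a numpy array
            robot.move_hand_to(pose)                 # pre-grasp approach
            contacts = robot.read_tactile()          # per-taxel pressure [2-3]
            if np.max(contacts) > contact_threshold:
                # unexpected early contact: back off along the estimated
                # contact normal instead of aborting the grasp outright
                pose = pose + step * robot.contact_normal(contacts)
                robot.move_hand_to(pose)
            robot.close_fingers()
            if robot.grasp_is_stable():              # tactile stability check
                return pose                          # successful grasp pose
            robot.open_fingers()                     # try the next candidate
        return None                                  # all candidates failed

The point of the sketch is the structure: perception does not stop once a grasp pose is computed, and every contact event can trigger a corrective action.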

Requirements: The ideal candidate would have a degree in Computer Science, Engineering or related disciplines, with a background in Computer Vision and Machine Learning. They would also be highly motivated to work on robotic platforms and have computer programming skills.

References:
[1] Mousavian, A., Eppner, C., Fox, D., 6-DOF GraspNet: Variational Grasp Generation for Object Manipulation, ICCV 2019.

[2] Jamali, N., Maggiali, M., Giovannini, F., Metta, G., and Natale, L., A New Design of a Fingertip for the iCub Hand, in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, Hamburg, Germany, 2015, pp. 1799-1805.

[3] Holgado, A. C., Piga, N., Pradhono Tomo, T., Vezzani, G., Schmitz, A., Natale, L., and Sugano, S., Magnetic 3-axis Soft and Sensitive Fingertip Sensors Integration for the iCub Humanoid Robot, in Proc. IEEE-RAS International Conference on Humanoid Robotics, Toronto, Canada, 2019, pp. 1-8.

Contacts: Lorenzo Natale (name.surname@iit.it)


Learning body-schema for self-perception and tool use


Description: As humans we are constantly aware of how our body is positioned in space. The representation that the brain maintains of the body, of individual body parts and of their reciprocal position in space is often referred to as the body schema [1]: it integrates information from different sensory systems, in particular vision, touch and proprioception. Such a representation is constantly updated, to keep it effective despite the changes to our body that happen, for example, during development. The body schema is important for the control of action and for developing the sense of agency. An interesting property of the body schema is that it can naturally extend to incorporate tools, which is probably responsible for our ability to seamlessly interact with the environment when using them.

A large amount of research in robotics has been devoted to the problem of learning the robot's kinematics or dynamics. With some exceptions, much less attention has been devoted to the problem of learning the visual appearance of the robot [2-3], and to developing methods that allow robots to autonomously acquire such a representation, keep it updated during operation, and quickly extend it to incorporate tools.

This project seeks to explore the development of a body schema in a humanoid robot. We will study how to develop a representation of the robot's body that incorporates visual and tactile information as well as proprioceptive information from the motor encoders. Importantly, we will develop methods that allow the robot to autonomously acquire and update such a representation. Finally, we will demonstrate the importance of such a representation for the control of action and for tool incorporation.
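As a purely illustrative example of one ingredient of such a representation, the sketch below (assuming PyTorch; all names are ours, not project code) trains a forward model that maps joint encoder values to the predicted image position of the hand. The training pairs could be collected autonomously by moving the arm while a visual cue, e.g. motion, localizes the hand in the camera image.

    # Minimal sketch: a forward model from proprioception (joint encoders)
    # to the hand's predicted pixel position. Hypothetical example, not
    # the project's actual method.

    import torch
    import torch.nn as nn

    class ForwardModel(nn.Module):
        def __init__(self, n_joints=7):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_joints, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, 2),          # predicted (u, v) pixel position
            )

        def forward(self, q):
            return self.net(q)

    def train_step(model, optimizer, q, uv_observed):
        """One self-supervised update: encoders in, observed hand pixel out."""
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(q), uv_observed)
        loss.backward()
        optimizer.step()
        return loss.item()

When the robot holds a tool, the same model could be fine-tuned online to predict the tool tip instead of the hand, which is one simple way in which a learned body schema can extend to tools.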

Requirements: The ideal candidate would have a degree in Computer Science, Engineering or related disciplines, with a background in Computer Vision and Machine Learning. They would also be highly motivated to work on robotic platforms and have computer programming skills.

References:
[1] Schillaci, G., Hafner, V., and Lara, B., Exploration Behaviors, Body Representations, and Simulation Processes for the Development of Cognition in Artificial Agents, Frontiers in Robotics and AI, 2016.

[2] Ciliberto, C., Smeraldi, F., Natale, L., and Metta, G., Online Multiple Instance Learning Applied to Hand Detection in a Humanoid Robot, in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, 2011, pp. 1526-1532.

[3] Yang, B., Jayaraman, D., Berseth, G., Efros, A., Levine, S., Morphology-Agnostic Visual Robotic Control, IEEE Robotics and Automation Letters, 5(2), 2020.

Contacts: Lorenzo Natale (name.surname@iit.it)


Distributed AI in sensor networks and robotics platforms


Description: The integration of smart building technology and robotics has great potential. Smart buildings equipped with robots achieve a much higher level of autonomy, because robots can physically interact with the environment and with humans; in addition, robots can actively inspect the scene to gather additional information when needed. Robots, on the other hand, gain access to a larger set of sensors and to more computing power than what is available on-board. In this setting robots can monitor the environment from different perspectives, and adapt their behaviour depending on the situation. This research theme will develop AI approaches for egocentric and allocentric scene understanding, leveraging information from robotic platforms (egocentric) and camera networks (allocentric) deployed in indoor environments. Topics of research are active self-localization, dynamic scene analysis, and attention mechanisms using egocentric and allocentric data. The research will be implemented on the hardware already available at IIT, which includes a distributed sensor network with 30 cameras and an R1 robot. The target is to deploy AI assistive systems that can interact with and support humans in several high-level tasks.
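As an illustration of what combining the two viewpoints might look like in code, the sketch below (hypothetical data types and calibration, written for this call rather than taken from an existing system) projects detections from the robot's camera and from fixed, calibrated cameras into a shared world frame and merges those that refer to the same physical object.

    # Illustrative sketch: fuse egocentric (robot) and allocentric (fixed
    # camera network) detections in a common world frame. The Detection
    # type, calibration matrices and matching radius are assumptions.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Detection:
        label: str
        position: np.ndarray          # 3D position; frame given by context

    def to_world(det, T_world_cam):
        """Transform a camera-frame detection into the world frame
        using a 4x4 homogeneous calibration matrix."""
        p = np.append(det.position, 1.0)             # homogeneous coordinates
        return Detection(det.label, (T_world_cam @ p)[:3])

    def fuse(ego_dets, allo_dets, calib, T_world_robot, radius=0.3):
        """Merge detections with the same label closer than `radius`."""
        world = [to_world(d, T_world_robot) for d in ego_dets]
        for cam_id, dets in allo_dets.items():       # one list per fixed camera
            world += [to_world(d, calib[cam_id]) for d in dets]
        fused = []
        for d in world:
            if not any(f.label == d.label and
                       np.linalg.norm(f.position - d.position) < radius
                       for f in fused):
                fused.append(d)
        return fused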

Requirements: The ideal candidate would have a degree in Computer Science, Engineering or related disciplines, with a background in Computer Vision and Machine Learning. They would also be highly motivated to work on robotic platforms and have computer programming skills.

Contacts: Lorenzo Natale and Alessio Del Bue (name.surname@iit.it)


Data-efficient object detection and segmentation learning for robotics


Description: Reliable perception and fast adaptation to new conditions are priority skills for robots that operate in ever-changing environments. State-of-the-art Deep Learning based solutions achieve accurate results on core Computer Vision tasks; however, they typically need to be trained on large, carefully annotated datasets. This hampers their adoption in applied domains, like robotics, since (i) publicly available, general-purpose datasets cannot be used to train application-specific vision systems and (ii) manually annotating a sufficient set of images is not a viable choice for systems that require online adaptation. On the other hand, robots can actively explore the environment and are equipped with multiple sensors, especially RGB-D cameras, that can collect plenty of (unlabeled) data.

In this project, we seek to study a weakly-supervised learning framework [1] and its application to robotics. Specifically, we will consider the core tasks of visual object detection and segmentation, starting from the results in [2]. We will draw from standard solutions in the computer vision literature, based on Active Learning and Semi-supervised Learning [1], and develop ad hoc algorithms for the considered robotic scenario. We will first consider a robot actively exploring the environment, e.g. observing objects on a table-top or inside a room. At a later stage of the project, we will increase the interactive capabilities of the robot, allowing it to push, pull and grasp the objects of interest in order to acquire different and diverse views. Moreover, data augmentation and synthetic data generation techniques [3] will be considered to enhance data efficiency. For this work, we will use the two humanoid platforms iCub and R1, as well as the Panda arm from Franka Emika.

This work will be carried out and validated primarily on the R1 humanoid robotic platform.
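For concreteness, the sketch below shows one possible shape of the active/semi-supervised loop discussed above, in simplified Python. Here `train`, `predict` and `annotate` are hypothetical stand-ins for the detector and for the (weak) supervision channel, and the confidence-based heuristics are only examples, not the project's actual algorithm.

    # Minimal sketch of an active + semi-supervised detection loop.
    # `train`, `predict`, `annotate` are hypothetical stand-ins; images
    # are assumed hashable (e.g. file paths), and each prediction is a
    # list of boxes with a `score` attribute.

    def active_learning_loop(labeled, unlabeled, train, predict, annotate,
                             rounds=5, query_size=20, pseudo_threshold=0.9):
        model = train(labeled)
        for _ in range(rounds):
            preds = {im: predict(model, im) for im in unlabeled}
            # rank images by the confidence of their least certain detection
            ranked = sorted(unlabeled,
                            key=lambda im: min((b.score for b in preds[im]),
                                               default=0.0))
            queries = ranked[:query_size]                 # most uncertain images
            labeled += [(im, annotate(im)) for im in queries]
            unlabeled = [im for im in unlabeled if im not in queries]
            # confident predictions become pseudo-labels for this round only
            pseudo = [(im, preds[im]) for im in unlabeled if preds[im]
                      and min(b.score for b in preds[im]) > pseudo_threshold]
            model = train(labeled + pseudo)
        return model

The design choice the sketch highlights is that annotation effort is spent where the model is least confident, while confident predictions are reused for free as pseudo-labels.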

Requirements: The ideal candidate would have a degree in Computer Science, Engineering or related disciplines, with a background in Computer Vision and Machine Learning. They would also be highly motivated to work on robotic platforms and have computer programming skills.

References:
[1] Zhou, Z.-H., A Brief Introduction to Weakly Supervised Learning, National Science Review, 5(1), 2018.

[2] Maiettini, E., Pasquale, G., Tikhanoff, V., Rosasco, L., and Natale, L., A Weakly Supervised Strategy for Learning Object Detection on a Humanoid Robot, in Proc. IEEE-RAS International Conference on Humanoid Robots, Toronto, Canada, 2019.

[3] Xie, C., Xiang, Y., Mousavian, A., and Fox, D., Unseen Object Instance Segmentation for Robotic Environments, IEEE Transactions on Robotics, 2021.

Contacts: Lorenzo Natale and Elisa Maiettini (name.surname@iit.it)