PhD calls 2018



New PhD positions (with scholarship) are available in the Humanoid Sensing and Perception group at the iCub Facility, Istituto Italiano di Tecnologia.

The positions are available through the PhD course of Bioengineering and Robotics, curriculum on Advanced and Humanoid Robotics. Prospective candidates are invited to get in touch with Lorenzo Natale (name.surname@iit.it) for further details.

The official call can be found online: https://www.iit.it/phd-school/phd-school-genoal/

Pay particular attention to the ADMISSION GUIDE, which contains detailed instructions and important suggestions on how to apply, including the recommended template for the research project. Applications must be submitted through the University of Genova using its online application service.

Theme titles (see below for details):
  • Multimodal Object Exploration and Grasping
  • Perception and Machine Learning for Manipulation
  • Vision for Walking Robots


Deadline for applications: June 12, 2018, at 12:00 noon (Italian time).


Multimodal Object Exploration and Grasping


Description: To plan a successful grasp, a robot must have an accurate estimate of the object's pose and shape. Precise information on the object's pose may be unavailable due to noise in the sensory system or occlusions. When dealing with novel objects, the problem becomes harder still, because the robot cannot rely on precise 3D models of the objects. For these reasons, grasping objects that are unknown, or whose pose is uncertain, is still an open problem in robotics. This project aims to design algorithms for object exploration, tracking and modelling that integrate visual and tactile information. The idea is to exploit vision to derive an initial estimate of the object's pose and shape, and to refine this estimate using tactile information acquired while touching the object. The main challenges are to: i) implement control strategies to explore the object, ii) devise algorithms for fusing measurements from the visual and tactile sensors, and iii) develop techniques for modelling the object. The project will be carried out on the iCub robot, using its stereo vision system and the tactile sensors in its hands. Validation will be carried out on grasping tasks.
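
As a toy illustration of the visual-tactile fusion idea (not the method of the references below), the following Python sketch refines a noisy visual estimate of an object's position using simulated tactile contacts. It assumes, purely for illustration, a spherical object of known radius and a particle-style Bayesian update; all names, noise levels and parameters are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative assumption: a spherical object of known radius, whose
    # 3D centre is the only unknown. Vision provides a noisy initial
    # estimate; tactile contacts refine it via importance weighting.
    RADIUS = 0.04                                  # object radius [m]
    TRUE_CENTRE = np.array([0.30, 0.10, 0.05])     # ground truth [m]

    # Visual prior: a cloud of pose hypotheses around the visual estimate.
    visual_estimate = TRUE_CENTRE + rng.normal(0.0, 0.01, 3)
    particles = visual_estimate + rng.normal(0.0, 0.01, (500, 3))
    weights = np.full(len(particles), 1.0 / len(particles))

    def tactile_update(particles, weights, contact, sigma=0.002):
        """Re-weight each pose hypothesis by how well the measured
        contact point lies on the object surface it implies."""
        err = np.abs(np.linalg.norm(particles - contact, axis=1) - RADIUS)
        weights = weights * np.exp(-0.5 * (err / sigma) ** 2)
        return weights / weights.sum()

    # Simulate a few tactile contacts on the true object surface.
    for _ in range(10):
        d = rng.normal(size=3)
        contact = TRUE_CENTRE + RADIUS * d / np.linalg.norm(d)
        contact += rng.normal(0.0, 0.001, 3)       # tactile sensor noise
        weights = tactile_update(particles, weights, contact)

    fused = weights @ particles                    # posterior mean
    print("vision-only error:", np.linalg.norm(visual_estimate - TRUE_CENTRE))
    print("after touch      :", np.linalg.norm(fused - TRUE_CENTRE))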

Requirements: The ideal candidate has a degree in Computer Science, Engineering or a related discipline, and a background in control theory, Bayesian filtering, and/or computer vision and machine learning. Candidates should also be highly motivated to work on a robotic platform and have good computer programming skills.

Experiments will be done on the R1 and iCub humanoid robots.

References:
Vezzani, G., Pattacini, U., Pasquale, G., and Natale, L., Improving Superquadric Modeling and Grasping with Prior on Object Shapes, in Proc. IEEE International Conference on Robotics and Automation, 2018.

Vezzani, G., Pattacini, U., Battistelli, G., Chisci, L., and Natale, L., Memory Unscented Particle Filter for 6-DOF Tactile Localization, in IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1139-1155, 2017.

Jamali, N., Ciliberto, C., Rosasco, L., and Natale, L., Active Perception: Building Objects' Models Using Tactile Exploration, in Proc. IEEE-RAS International Conference on Humanoid Robots, Cancun, Mexico, 2016.

Contacts: Lorenzo Natale and Ugo Pattacini (name.surname@iit.it)


Perception and Machine Learning for Manipulation


Description: Machine learning, and in particular deep learning, has been applied with remarkable success to visual problems such as pedestrian detection, object retrieval, recognition and segmentation. In the robotics community there has been growing interest in applying machine learning and data-driven approaches to object manipulation and grasping tasks. Adopting data-driven approaches in robotics is challenging: acquiring training examples is expensive, requiring hours or days of experiments and appropriate exploratory actions. Moreover, deep-learning models are typically trained off-line, which does not allow the robot to adapt quickly when faced with a novel situation.

This project falls squarely at the intersection of machine learning and robotics. The goal is to exploit machine learning to advance the ability of robots to interact with the environment by grasping and manipulating objects. The focus is on strategies that make learning autonomous, and on incremental machine learning techniques that allow the robot to adapt dynamically to novel situations (e.g. novel objects or changes in the scene).

Possible topics include:
  • Perception of affordances, from object detection to detection of object parts;
  • Scene segmentation;
  • Robot self-perception for visual control of manipulation;
  • Data-driven approaches to object grasping.
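
As a toy illustration of the incremental adaptation mentioned above (a deliberately simple stand-in, not the method of the references below), the following Python sketch keeps a running feature mean per object class, so a new example, or an entirely new object, can be added in constant time without retraining. The feature dimension, class names and data are hypothetical; on a real robot the features would come from a pretrained deep network.

    import numpy as np

    class IncrementalNCM:
        """Nearest-class-mean classifier with constant-time updates:
        each labelled example updates one running class mean, so new
        objects can be added without retraining from scratch."""

        def __init__(self, dim):
            self.dim = dim
            self.means = {}    # label -> running mean feature vector
            self.counts = {}   # label -> number of examples seen

        def update(self, feature, label):
            if label not in self.means:
                self.means[label] = np.zeros(self.dim)
                self.counts[label] = 0
            self.counts[label] += 1
            # Incremental mean: O(dim) per example, independent of
            # how many examples have been seen so far.
            self.means[label] += (feature - self.means[label]) / self.counts[label]

        def predict(self, feature):
            labels = list(self.means)
            dists = [np.linalg.norm(feature - self.means[l]) for l in labels]
            return labels[int(np.argmin(dists))]

    # Toy usage with synthetic 128-D "features" for two object classes.
    rng = np.random.default_rng(0)
    clf = IncrementalNCM(dim=128)
    for label, offset in [("mug", 1.0), ("ball", -1.0)]:
        for _ in range(20):
            clf.update(offset + rng.normal(0.0, 0.3, 128), label)
    print(clf.predict(np.full(128, 0.9)))          # -> mug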


This PhD project will be carried out within the Humanoid Sensing and Perception group and the Laboratory for Computational and Statistical Learning. Experiments will be done on the R1 and iCub humanoid robots.

Requirements: The ideal candidate has a degree in Computer Science or Engineering (or equivalent) and a background in Machine Learning and Robotics, possibly also in Computer Vision. Candidates should also be highly motivated to work on a robotic platform and have strong computer programming skills.

References:
Maiettini, E., Pasquale, G., Rosasco, L., and Natale, L., Interactive Data Collection for Deep Learning Object Detectors on Humanoid Robots, in Proc. IEEE-RAS International Conference on Humanoid Robots, Birmingham, UK, 2017.

Pasquale, G., Ciliberto, C., Odone, F., Rosasco, L., and Natale, L., Teaching iCub to recognize objects using deep Convolutional Neural Networks, in Proc. 4th Workshop on Machine Learning for Interactive Systems, 2015.

Camoriano, R., Pasquale, G., Ciliberto, C., Natale, L., Rosasco, L., and Metta, G., Incremental Robot Learning of New Objects with Fixed Update Time, in Proc. IEEE International Conference on Robotics and Automation, Singapore, 2017, pp. 3207-3214.

Contacts: Lorenzo Natale and Lorenzo Rosasco (name.surname@iit.it)


Vision for Walking Robots


Description: The ability to acquire high-resolution depth information makes it possible to reconstruct the three-dimensional geometry of the environment in detail. Detailed shape information is fundamental for tasks that require complex interaction between the robot and the environment, like balancing and walking, with and without hand support. Local shape information can in fact be used to segment the scene and to plan foot and hand placement to stabilize the robot. This is a challenging task, because it requires observing both the geometry and the visual appearance of the surrounding surfaces in relation to the body of the robot. To perform such tasks, features need to be extracted from the data so that different regions can be compared and matched. Depending on the complexity of the scene, these features can be extracted from the depth data alone, or need to be augmented with features extracted from images. The aim of this PhD is to study the general problem of scene understanding by combining three-dimensional depth observations with visual appearance from images. The goal is to leverage deep models to extract visual descriptors and perform classification. We consider locomotion tasks in scenarios that involve whole-body control.
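
As a minimal sketch of one ingredient mentioned above (extracting local shape from depth data alone), the following Python code estimates per-pixel surface normals from a depth image and masks near-horizontal patches as candidate support regions. The pinhole intrinsics, the camera-frame "up" direction and the thresholds are illustrative assumptions; a real system would use calibrated intrinsics and the robot's kinematics.

    import numpy as np

    def surface_normals(depth, fx=500.0, fy=500.0):
        """Approximate per-pixel surface normals from a depth image in
        metres, treating depth as a height field (assumed pinhole
        intrinsics fx, fy in pixels)."""
        dz_dv, dz_du = np.gradient(depth)          # row, column gradients
        n = np.dstack([-dz_du * fx / depth,        # metric x-gradient
                       -dz_dv * fy / depth,        # metric y-gradient
                       np.ones_like(depth)])
        return n / np.linalg.norm(n, axis=2, keepdims=True)

    def support_mask(depth, up=(0.0, -1.0, 0.0), min_cos=0.9):
        """Mask of pixels whose normal is close to the gravity-aligned
        'up' direction: candidate flat patches for foot (or hand)
        placement. 'up' assumes a camera frame with y pointing down."""
        cos = surface_normals(depth) @ np.asarray(up)
        return cos > min_cos

    # Toy usage: a synthetic ramp-like depth image of a floor receding
    # from the camera (depth grows with the image row).
    depth = np.tile(np.linspace(1.0, 3.0, 120)[:, None], (1, 160))
    mask = support_mask(depth)
    print("candidate support pixels:", int(mask.sum()), "/", mask.size)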

This project will be carried out in collaboration between the Humanoid Sensing and Perception and the Dynamic Interaction and Control groups. Experiments will be done on the iCub humanoid robot.

Requirements: The ideal candidate has a degree in Computer Science, Engineering or a related discipline, with a background in Computer Vision and Machine Learning. Candidates should also be highly motivated to work on a robotic platform and have good computer programming skills.

Contacts: Lorenzo Natale and Daniele Pucci (name.surname@iit.it)