PhD calls 2017



New PhD positions (with scholarship) are available in the Humanoid Sensing and Perception group at the iCub Facility, Istituto Italiano di Tecnologia.

The positions are available through the PhD course of Bioengineering and Robotics, curriculum on Advanced and Humanoid Robotics. Prospective candidates are invited to get in touch with Lorenzo Natale (name.surname@iit.it) for further details.

The official call can be found online: https://www.iit.it/phd-school/

Pay particular attention to the tips and tricks section, which contains detailed instructions on how to apply. Applications must be filed through the University of Genova using the online service at this link.

Theme titles (see below for details):
  • Multimodal perception of objects
  • Multimodal object exploration and grasping
  • Scene analysis using deep learning
  • Sensing humans: enhancing social abilities of the iCub platform
  • Autonomous learning of objects using multimodal, event-driven cues


Deadline for application: June 13, 2017 at 12.00 noon (Italian time).


Multimodal perception of objects


Description: Conventionally, robots rely on vision to perceive and identify objects. Although computer vision has recently made remarkable progress, touch and, more generally, haptic information can still provide complementary cues: some material and object properties are simply not accessible or are difficult to estimate from vision (such as object weight or roughness), or are hidden by occlusions. During active object manipulation, multi-modal information is available to the robot and can be used for learning; during recognition, however, only partial information may be available (typically vision). This project will investigate how to integrate visual and haptic information for object discrimination. We will initially consider the case in which multi-modal information is available during both training and recognition. In a second stage of the project we will investigate how learning with multi-modal cues can help object discrimination when only partial information (vision or touch) is available. The project will be carried out on the iCub robot, whose sensory system includes cameras for vision, tactile sensors, position sensors and force/torque sensors. In the initial part of the project we will build a dataset for the experiments by acquiring multi-modal data while the robot grasps a set of objects in various ways. We will then investigate feature-extraction methods (for vision, in particular, features from Deep Convolutional Neural Networks) and machine learning methods (e.g. multi-view learning, learning with privileged information) for object discrimination using individual and combined features.
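
As a sketch of the kind of multi-modal fusion considered here (illustrative only, not the project's actual pipeline): hypothetical precomputed visual features (e.g. activations from a DCNN layer) and tactile features are concatenated and fed to a linear classifier, while the same classifier trained on vision alone gives the "partial information" baseline. All array shapes, dimensions and the choice of classifier are placeholder assumptions.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_samples, n_objects = 200, 10

    # Hypothetical precomputed features: one row per grasp trial.
    visual_feats = rng.normal(size=(n_samples, 4096))   # e.g. DCNN fully-connected layer activations
    tactile_feats = rng.normal(size=(n_samples, 192))   # e.g. taxel readings from the hand
    labels = rng.integers(0, n_objects, size=n_samples)

    # Early fusion: concatenate modalities and train a single linear classifier.
    fused = np.hstack([visual_feats, tactile_feats])
    scores = cross_val_score(LinearSVC(max_iter=5000), fused, labels, cv=5)
    print("fused-modality accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))

    # For comparison, a vision-only classifier (the "partial information" case).
    scores_v = cross_val_score(LinearSVC(max_iter=5000), visual_feats, labels, cv=5)
    print("vision-only accuracy:    %.2f +/- %.2f" % (scores_v.mean(), scores_v.std()))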

Requirements: The ideal candidate would have a degree in Computer Science, Engineering or related disciplines, a background in control theory and machine learning, high motivation to work on a robotic platform and computer programming skills.

References:
Higy, B., Ciliberto, C., Rosasco, L., and Natale, L., Combining Sensory Modalities and Exploratory Procedures to Improve Haptic Object Recognition in Robotics, in IEEE-RAS International Conference on Humanoid Robots, Cancun, Mexico, 2016

Pasquale, G., Ciliberto, C., Rosasco, L., and Natale, L., Object Identification from Few Examples by Improving the Invariance of a Deep Convolutional Neural Network, in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, Daejeon, Korea, 2016, pp. 4904-4911

Contacts: Lorenzo Natale and Lorenzo Rosasco (name.surname@iit.it)


Multimodal object exploration and grasping


Description: Object shape is fundamental for planning grasping. Precise object models, however, may not be available, because of missing prior models, noise in the sensory system or simply occlusions. Such models can nevertheless be inferred by actively exploring objects and extracting information from the sensory system. In this project we will investigate techniques for object modelling and object exploration using multi-modal cues. The main idea is that vision can provide an initial guess to guide the exploration, and that this guess can subsequently be refined using tactile information. The project will develop novel techniques for automatic object modelling from multi-modal sensory information, with the goal of advancing the state of the art in automatic object modelling and, consequently, in grasping and manipulation of unknown objects. The project will be carried out on the iCub robot, using the robot's stereo system and the tactile sensors in the hand. We will use depth information about the objects to build an initial estimate of the object's shape (e.g. using superquadrics); we will then implement an active strategy that, based on this initial guess, provides the robot with a sequence of points to be explored with the tactile system. This information will then be used to improve the model of the object and recover an accurate shape. Validation will be carried out on grasping tasks.
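
As an illustration of the first modelling step (a minimal sketch under simplifying assumptions, not the project's method): an axis-aligned, origin-centred superquadric is fitted to a partial point cloud, such as points back-projected from the stereo depth map, by minimising the classic inside-outside cost. Pose estimation, tactile refinement and grasp planning are omitted, and all names and numbers are placeholders.

    import numpy as np
    from scipy.optimize import minimize

    def inside_outside(points, a1, a2, a3, e1, e2):
        # Superquadric inside-outside function F; F == 1 on the surface.
        x, y, z = np.abs(points).T
        term_xy = (x / a1) ** (2.0 / e2) + (y / a2) ** (2.0 / e2)
        return term_xy ** (e2 / e1) + (z / a3) ** (2.0 / e1)

    def fit_cost(params, points):
        a1, a2, a3, e1, e2 = params
        F = inside_outside(points, a1, a2, a3, e1, e2)
        # Penalise deviation of F^e1 from 1, weighted by the square root of the
        # volume term so that the optimiser favours compact fits.
        return np.sum((np.sqrt(a1 * a2 * a3) * (F ** e1 - 1.0)) ** 2)

    def fit_superquadric(points):
        x0 = np.concatenate([np.abs(points).max(axis=0), [1.0, 1.0]])  # rough initial guess
        bounds = [(1e-3, None)] * 3 + [(0.1, 2.0)] * 2
        res = minimize(fit_cost, x0, args=(points,), bounds=bounds, method="L-BFGS-B")
        return res.x  # (a1, a2, a3, e1, e2)

    # Toy usage: noisy points on an ellipsoid-like surface stand in for stereo data.
    rng = np.random.default_rng(0)
    dirs = rng.normal(size=(500, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    cloud = dirs * np.array([0.04, 0.03, 0.08]) + rng.normal(scale=0.002, size=(500, 3))
    print(fit_superquadric(cloud))

In the project, this visual fit would only serve as the initial guess, to be refined with contact points gathered during tactile exploration.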

Requirements: The ideal candidate would have a degree in Computer Science, Engineering or related disciplines, a background in control theory and machine learning, high motivation to work on a robotic platform and computer programming skills.

References:
Jamali, N., Ciliberto, C., Rosasco, L., and Natale, L., Active Perception: Building Objects' Models Using Tactile Exploration, in IEEE-RAS International Conference on Humanoid Robots, Cancun, Mexico, 2016

Vezzani, G., Pattacini, U., and Natale, L., A Grasping Approach Based on Superquadric Models, in IEEE International Conference on Robotics and Automation, 2017

Björkman, M., Bekiroglu, Y., Högman, V., and Kragic, D., Enhancing Visual Perception of Shape through Tactile Glances, in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2013, pp. 3180-3186

Contacts: Lorenzo Natale and Ugo Pattacini (name.surname@iit.it)


Scene analysis using deep learning


Description: Machine learning, and in particular deep learning methods, have been applied with remarkable success to visual problems such as pedestrian detection, object retrieval, recognition and segmentation. One of the difficulties with these techniques is that training requires a large amount of data, and it is not straightforward to adopt them when training samples are acquired online and autonomously by a robot. One solution is to adopt pre-trained deep convolutional neural networks (DCNNs) for image representation and to train simpler classifiers on top, either in batch or incrementally. Following this approach, DCNNs have been integrated into the iCub visual system, leading to a remarkable increase in object recognition performance. However, scene analysis in realistic settings is still challenging due to scale and lighting variability and to clutter. The goal of this project is to further investigate and improve the iCub recognition and visual segmentation capabilities. To this aim we will investigate techniques for pixel-based semantic segmentation using DCNNs, and object detection that combines top-down and bottom-up cues for image segmentation.
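
A minimal sketch of the representation-plus-simple-classifier idea described above (assuming PyTorch/torchvision and scikit-learn are available; the network choice, label set and helper names are hypothetical, and this is not the iCub software itself): a pre-trained DCNN is used as a fixed image representation, and a simple incremental classifier is trained on top so that new objects can be learned online as labelled views arrive.

    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from sklearn.linear_model import SGDClassifier

    # Pre-trained network with the final classification layer removed,
    # used only as a fixed feature extractor.
    backbone = models.resnet18(pretrained=True)
    backbone.fc = torch.nn.Identity()
    backbone.eval()

    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def embed(pil_image):
        # Return a 512-dimensional feature vector for one image.
        with torch.no_grad():
            return backbone(preprocess(pil_image).unsqueeze(0)).squeeze(0).numpy()

    # Incremental learning: a linear classifier updated one small batch at a time.
    classifier = SGDClassifier()
    all_object_ids = list(range(10))  # hypothetical set of object labels

    def update(images, labels):
        feats = [embed(img) for img in images]
        classifier.partial_fit(feats, labels, classes=all_object_ids)

    def recognise(image):
        return classifier.predict([embed(image)])[0]

Keeping the representation fixed and updating only a linear classifier is what makes online learning of new objects tractable; pixel-level segmentation and cluttered scenes, which this project targets, require going beyond such image-level classification.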

Requirements: This PhD project will be carried out within the Humanoid Sensing and Perception lab (iCub Facility) and the Laboratory for Computational and Statistical Learning. The ideal candidate should have a degree in Computer Science or Engineering (or equivalent), a background in Machine Learning, Robotics and possibly Computer Vision, high motivation to work on a robotic platform and strong computer programming skills.

References:
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg and Li Fei-Fei, ImageNet Large Scale Visual Recognition Challenge, arXiv:1409.0575, 2014

Jon Long, Evan Shelhamer, Trevor Darrell, Fully Convolutional Networks for Semantic Segmentation, CVPR 2015

Pasquale, G., Ciliberto, C., Odone, F., Rosasco, L., and Natale, L., Teaching iCub to recognize objects using deep Convolutional Neural Networks, in Proc. 4th Workshop on Machine Learning for Interactive Systems, 2015

Contacts: Lorenzo Natale and Lorenzo Rosasco (name.surname@iit.it)


Sensing humans: enhancing social abilities of the iCub platform


Description: There is general consensus that robots in the future will work in close interaction with humans. This requires that robots be endowed with the ability to detect humans and interact with them. However, treating humans as simple animated entities is not enough: meaningful human-robot interaction entails the ability to interpret social cues. The aim of this project is to endow the iCub with a fundamental layer of capabilities for detecting humans, their posture and their social behaviour, for example the ability to detect whether a person is attempting to interact with the robot and to react accordingly. This requires a new set of computational tools, based on Computer Vision and Machine Learning, for detecting people at close distance. This "face-to-face" scenario requires developing novel algorithms that cope with situations in which large areas of the body are occluded or only partially visible.
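
As a rough baseline for the "detect people at close distance" requirement (a sketch assuming OpenCV; not the project's method): an off-the-shelf frontal-face detector is run on a camera frame, and the detection size is used as a crude proxy for proximity. The occluded, partially visible bodies typical of the face-to-face scenario are exactly where such off-the-shelf detectors fail, which is what the project would address.

    import cv2

    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_close_faces(frame_bgr, min_fraction=0.25):
        # Return faces whose width exceeds a fraction of the image width,
        # i.e. faces that are likely to belong to nearby people.
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        return [tuple(f) for f in faces if f[2] > min_fraction * frame_bgr.shape[1]]

    # Toy usage with one frame grabbed from the default webcam.
    capture = cv2.VideoCapture(0)
    ok, frame = capture.read()
    if ok:
        print(detect_close_faces(frame))  # list of (x, y, w, h) boxes
    capture.release()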

Requirements: This PhD project will be carried out within the Humanoid Sensing and Perception lab (iCub Facility) and the Visual Geometry and Modelling Lab (PAVIS department). The ideal candidate should have a degree in Computer Science or Engineering (or equivalent), a background in Computer Vision and/or Machine Learning, high motivation to work on a robotic platform and strong computer programming skills.

Contacts: Lorenzo Natale and Alessio Del Bue (name.surname@iit.it)


Autonomous learning of objects using multimodal, event-driven cues


Description: To effectively interact with the environment and adapt to different contexts and goals, robots need to be able to autonomously explore and learn about objects. To this aim, we need machine learning strategies that allow the robot to plan exploratory actions and to take advantage of information from multiple sensory modalities. In particular, we will consider learning from haptic (touch, force and proprioception), auditory and visual cues (extracted from event-based as well as conventional frame-based cameras), obtained during exploratory actions, to investigate how different features contribute to object discrimination.
In the first part of the project we will implement behaviours that allow the robot to interact with objects through manipulation, using predefined explorative procedures (such as touching, power grasping, squeezing and contour following). Events in any sensory channel will trigger the acquisition of features from all the available sensors. In the second part of the project, these features will be used to train machine learning algorithms for object recognition. A further objective is to investigate to what extent features from different sensory channels contribute to object discrimination. The outcome of the project will be: a set of explorative procedures that allow a humanoid robot to interact with objects and extract features from multiple sensory channels; signal processing algorithms that detect relevant events during object exploration and trigger feature extraction; and a set of features that allow objects to be discriminated using multiple sensory modalities.
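
As an illustration of the event-triggered acquisition described above (a minimal sketch with hypothetical sensor interfaces and thresholds, not iCub/YARP code): each sensory channel is buffered continuously, and a simple contact event on the tactile channel triggers a synchronised snapshot of all channels, stored as one labelled sample for the later recognition stage.

    import time
    from collections import deque

    WINDOW = 50              # samples kept per channel
    CONTACT_THRESHOLD = 0.2  # hypothetical normalised tactile activation

    buffers = {name: deque(maxlen=WINDOW) for name in
               ("tactile", "force_torque", "audio", "events", "frames")}

    def contact_event(tactile_sample):
        # Very simple event detector: any taxel above threshold.
        return max(tactile_sample) > CONTACT_THRESHOLD

    def snapshot(label):
        # Freeze the current window of every channel into one training sample.
        return {"label": label,
                "features": {name: list(buf) for name, buf in buffers.items()}}

    def exploration_loop(read_sensors, label, duration_s=5.0):
        # Run one explorative procedure (e.g. squeezing) and collect samples.
        samples, t0 = [], time.time()
        while time.time() - t0 < duration_s:
            readings = read_sensors()        # dict: channel name -> latest sample
            for name, value in readings.items():
                buffers[name].append(value)
            if contact_event(readings["tactile"]):
                samples.append(snapshot(label))
        return samples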

Requirements: degree in Computer Science or Engineering (or equivalent) and background in Computer Vision and/or Machine Learning. High motivation to work on a robotic platform and good programming skills.

References:
Benosman, R., Clercq, C., Lagorce, X., Ieng, S.-H., and Bartolozzi, C., Event-Based Visual Flow, IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 2, pp. 407-417, Feb. 2014, doi: 10.1109/TNNLS.2013.2273537


Ciliberto, C., Smeraldi, F., Natale, L., Metta, G., Online Multiple Instance Learning Applied to Hand Detection in a Humanoid Robot, IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, California, September 25-30, 2011.

Ciliberto, C., Fanello, S.R., Santoro, M., Natale, L., Metta, G., and Rosasco, L., On the Impact of Learning Hierarchical Representations for Visual Recognition in Robotics, in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2013, pp. 3759-3764.

Contacts: Lorenzo Natale and Chiara Bartolozzi (name.surname@iit.it)