PhD calls 2016


New PhD positions (with scholarship) are available in the Humanoid Sensing and Perception group at the iCub Facility, Istituto Italiano di Tecnologia.

The positions are available through the PhD course of Bioengineering and Robotics, curriculum on Advanced and Humanoid Robotics. Prospective candidates are invited to get in touch with Lorenzo Natale (name.surname@iit.it) for further details.

The official call can be found online: https://www.iit.it/phd-school/

Pay particular attention to the tips and tricks section, which contains detailed instructions on how to apply. Applications must be filed through the University of Genova using its online application service.

Deadline for application: June 10, 2016 at 12.00 noon (Italian time).

Sensing humans: enhancing social abilities of the iCub platform


Description: there is general consensus that robots will, in the future, work in close interaction with humans. This requires that robots be endowed with the ability to detect humans and interact with them. However, treating humans as simple animated entities is not enough: meaningful human-robot interaction entails the ability to interpret social cues. The aim of this project is to endow the iCub with a fundamental layer of capabilities for detecting humans, their posture and their social behaviour; one example is the ability to detect whether a person is attempting to interact with the robot and to react accordingly. This requires a new set of computational tools, based on Computer Vision and Machine Learning, for detecting people at close range. This "face-to-face" scenario calls for novel algorithms that cope with situations in which large areas of the body are occluded or only partially visible.
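To make the starting point concrete, the sketch below (not part of the official call) shows the kind of off-the-shelf building block the project would move beyond: a standard OpenCV face detector, with the apparent face size used as a crude proxy for interaction distance. The cascade file and the size threshold are illustrative assumptions; the project targets precisely the close-range, partially occluded cases where such detectors break down.

    # A minimal sketch, assuming opencv-python is installed. The cascade file
    # and the min_face_fraction threshold are illustrative assumptions.
    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def nearby_faces(frame, min_face_fraction=0.15):
        """Return detected faces whose width exceeds a fraction of the image
        width, a rough cue that the person is close enough to interact."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5)
        img_width = frame.shape[1]
        return [(x, y, w, h) for (x, y, w, h) in faces
                if w / float(img_width) > min_face_fraction]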

Requirements: This PhD project will be carried out within the Humanoid Sensing and Perception lab (iCub Facility) and the Visual Geometry and Modelling Lab (PAVIS department). The ideal candidate should have a degree in Computer Science or Engineering (or equivalent) and a background in Computer Vision and/or Machine Learning. The candidate should also be highly motivated to work on a robotic platform and have strong computer programming skills.

Contacts: Lorenzo Natale and Alessio Del Bue (name.surname@iit.it)


Scene analysis using deep learning


Description: machine learning, and in particular deep learning, has been applied with remarkable success to visual problems such as pedestrian detection, object retrieval, recognition and segmentation. One of the difficulties with these techniques is that training requires a large amount of data, and it is not straightforward to adopt them when training samples are acquired online and autonomously by a robot. One solution is to adopt pre-trained deep convolutional neural networks (DCNNs) for image representation and to use simpler classifiers on top, either in batch or incrementally. Following this approach, DCNNs have been integrated in the iCub visual system, leading to a remarkable increase in object recognition performance. However, scene analysis in realistic settings is still challenging due to variability in scale and illumination, and due to clutter. The goal of this project is to further investigate and improve the iCub's recognition and visual segmentation capabilities. To this aim we will investigate techniques for pixel-based semantic segmentation using DCNNs, and object detection mixing top-down and bottom-up cues for image segmentation.
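As a rough illustration of the approach described above (pre-trained DCNN features plus a simpler classifier), the sketch below uses a pre-trained network as a frozen feature extractor and trains a linear classifier on top, which can be retrained cheaply as the robot acquires new samples. It is a minimal sketch under current tooling (PyTorch/torchvision and scikit-learn), not the implementation used on the iCub; the choice of backbone and classifier is an illustrative assumption.

    # A minimal sketch: frozen pre-trained CNN features + a simple linear
    # classifier. Backbone and classifier choices are illustrative assumptions.
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from sklearn.svm import LinearSVC

    backbone = models.resnet18(pretrained=True)
    backbone.fc = torch.nn.Identity()   # drop the ImageNet classification head
    backbone.eval()

    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406],
                    std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def features(pil_images):
        """Map a list of PIL images to fixed-length feature vectors."""
        batch = torch.stack([preprocess(im) for im in pil_images])
        return backbone(batch).numpy()

    def fit_object_classifier(pil_images, labels):
        """Train a linear classifier on the frozen CNN representation."""
        return LinearSVC().fit(features(pil_images), labels)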

Requirements: This PhD project will be carried out within the Humanoid Sensing and Perception lab (iCub Facility) and the Laboratory for Computational and Statistical Learning. The ideal candidate should have a degree in Computer Science or Engineering (or equivalent) and a background in Machine Learning, Robotics and possibly Computer Vision. The candidate should also be highly motivated to work on a robotic platform and have strong computer programming skills.

References:
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg and Li Fei-Fei, ImageNet Large Scale Visual Recognition Challenge, arXiv:1409.0575, 2014.

Jon Long, Evan Shelhamer, Trevor Darrell, Fully Convolutional Networks for Semantic Segmentation, CVPR 2015.

Pasquale, G., Ciliberto, C., Odone, F., Rosasco, L., and Natale, L., Teaching iCub to recognize objects using deep Convolutional Neural Networks, in Proc. 4th Workshop on Machine Learning for Interactive Systems, 2015.

Contacts: Lorenzo Natale and Lorenzo Rosasco (name.surname@iit.it)


Implicit learning


Description: machine learning, and in particular deep learning, has been applied with remarkable success to visual problems such as pedestrian detection, object retrieval, recognition and segmentation. One of the difficulties with these techniques is that training requires a large amount of labelled data, and it is not straightforward to adopt them when training samples are acquired online and autonomously by the robot. Critical issues are how to obtain large amounts of training samples and how to perform object segmentation and labelling. The key idea is to develop weakly supervised frameworks, in which learning can exploit forms of implicit labelling. In previous work we proposed to exploit the coherence between perceived motion and the robot's own motion to autonomously learn a visual detector of the hand. The goal of this project is to investigate algorithms for learning to recognize objects by exploiting implicit supervision, focusing in particular on strategies that allow the robot to extract training samples autonomously, starting from motion and disparity cues.
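By way of illustration only, the toy sketch below captures the flavour of motion-based implicit labelling: pixels whose image motion consistently co-occurs with the robot's own movement accumulate evidence of belonging to the robot's body (e.g. the hand), yielding weak labels without human annotation. This is a deliberately simplified stand-in for the approach of Ciliberto et al. (see references); the optical-flow threshold and the binary self-motion signal are assumptions.

    # A toy sketch of motion-coherence weak labelling, assuming opencv-python
    # and numpy. Thresholds and the self-motion flags are assumptions.
    import numpy as np
    import cv2

    def motion_coherence_score(frames_gray, robot_moving, flow_thresh=1.0):
        """Accumulate, per pixel, how often image motion agrees with the
        robot's own movement; high scores are weak positive labels for the
        robot's hand, low scores for the static background."""
        score = np.zeros(frames_gray[0].shape, dtype=float)
        steps = zip(frames_gray, frames_gray[1:], robot_moving[1:])
        for prev, curr, moving in steps:
            flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            has_motion = np.linalg.norm(flow, axis=2) > flow_thresh
            # agreement: pixel moves when the robot moves, is still otherwise
            score += np.where(moving, has_motion, ~has_motion)
        return score / max(len(frames_gray) - 1, 1)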

Requirements: This PhD project will be carried out within the Humanoid Sensing and Perception lab (iCub Facility) and the Laboratory for Computational and Statistical Learning. The ideal candidate should have a degree in Computer Science or Engineering (or equivalent) and a background in Machine Learning, Robotics and possibly Computer Vision. The candidate should also be highly motivated to work on a robotic platform and have strong computer programming skills.

References:
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg and Li Fei-Fei, ImageNet Large Scale Visual Recognition Challenge, arXiv:1409.0575, 2014.

Wang, X., Gupta, A., Unsupervised Learning of Visual Representations using Videos, arXiv:1505.00687v2, 2015.

Ciliberto, C., Smeraldi, F., Natale, L., Metta, G., Online Multiple Instance Learning Applied to Hand Detection in a Humanoid Robot, IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, California, September 25-30, 2011.

Contacts: Lorenzo Natale and Lorenzo Rosasco (name.surname@iit.it)


Learning to recognize objects using multimodal cues


Description: robots can actively sense the environment using not only vision but also haptic information. One of the problems to be addressed in this case is how to control the robot so as to explore the environment and extract relevant information (so-called exploratory procedures). Conventionally, learning to recognize objects has been addressed primarily using vision. However, physical properties of objects are more directly perceived through other sensory modalities. For this reason, recent work has started to investigate how to discriminate objects using other sensory channels, such as touch, force and proprioception. The goals of this project are i) to implement control strategies for object exploration, investigating to what extent different exploratory strategies contribute to object discrimination, ii) to implement learning algorithms that allow the robot to discriminate objects using haptic information and, finally, iii) to investigate how haptic information can be integrated with vision to build a rich model of the objects for better discrimination.
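As a rough illustration of goal ii), the sketch below collapses the tactile recording from one exploratory squeeze into simple hand-crafted statistics and trains a standard classifier on them. It is a minimal sketch using numpy and scikit-learn, with hand-crafted features standing in for the learned representations of, e.g., Madry et al. (see references); the (T x n_taxels) data layout is an assumption.

    # A minimal sketch assuming numpy and scikit-learn; the data layout and
    # the feature choices are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def tactile_features(pressure_series):
        """Collapse one (T x n_taxels) tactile recording into a fixed-length
        descriptor: per-taxel mean, peak and mean temporal variation."""
        x = np.asarray(pressure_series, dtype=float)
        return np.concatenate([x.mean(axis=0), x.max(axis=0),
                               np.abs(np.diff(x, axis=0)).mean(axis=0)])

    def fit_haptic_classifier(recordings, labels):
        """recordings: list of (T x n_taxels) arrays, one per squeeze."""
        X = np.stack([tactile_features(r) for r in recordings])
        return RandomForestClassifier(n_estimators=100).fit(X, labels)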
Requirements: This PhD project will be carried out within the Humanoid Sensing and Perception lab (iCub Facility) and the Laboratory for Computational and Statistical Learning. The ideal candidate should have a degree in Computer Science or Engineering (or equivalent) and a background in Machine Learning, Robotics and possibly Computer Vision. The candidate should also be highly motivated to work on a robotic platform and have strong computer programming skills.

References:

Pasquale, G., Ciliberto, C., Odone, F., Rosasco, L., and Natale, L., Teaching iCub to recognize objects using deep Convolutional Neural Networks, in Proc. 4th Workshop on Machine Learning for Interactive Systems, 2015.

Liarokapis, M.V., Çalli, B., Spiers, A.J., Dollar, A.M., Unplanned, model-free, single grasp object classification with underactuated hands and force sensors, IROS, 2015.

Madry, M., Bo, L., Kragic, D. and Fox, D., ST-HMP: Unsupervised Spatio-Temporal feature learning for tactile data, ICRA 2014.

Contacts: Lorenzo Natale and Lorenzo Rosasco (name.surname@iit.it)


Model-driven software development in robotics


Description: humanoid robots are evolving at a rapid pace, thanks to impressive progress in mechatronics and in the algorithms supporting cognitive capabilities for perception, control and planning. Proper integration of such capabilities requires not only an adequate software infrastructure but also the adoption of sound software engineering methodologies. Research on software engineering for robotics has primarily focused on component-based approaches and middleware technologies (ROS, YARP and OROCOS, to mention just a few). Model-driven engineering is widely adopted to design complex systems in other fields but has received comparatively little attention in robotics, even though the adoption of model-driven approaches to software development leads to increased quality and code reuse. The goal of this project is to survey existing techniques for modelling distributed component-based robotic software systems and to develop a model-driven engineering toolkit for the iCub system. The new toolkit should support the system engineer in designing, configuring and analyzing relevant properties of control applications for the iCub system.
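For contrast, the sketch below is a minimal hand-written YARP component (using the YARP Python bindings): port names, wiring and the processing loop are all coded by hand. Precisely this boilerplate, namely component interfaces, port names and connection topology, is what a model-driven toolkit would capture in explicit models and generate or verify automatically. The port names here are illustrative assumptions.

    # A minimal hand-written YARP component, assuming the YARP Python
    # bindings are installed; port names are illustrative assumptions.
    import yarp

    yarp.Network.init()

    in_port = yarp.BufferedPortBottle()
    out_port = yarp.BufferedPortBottle()
    in_port.open("/demo/in")     # port names and their connections are the
    out_port.open("/demo/out")   # kind of information a model would capture

    while True:
        bottle = in_port.read()  # blocking read on the input port
        if bottle is None:
            break
        reply = out_port.prepare()
        reply.clear()
        reply.addString("seen:")
        reply.addString(bottle.toString())
        out_port.write()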

Requirements: This PhD project will be carried out within the Humanoid Sensing and Perception lab (iCub Facility), in collaboration with the Robotics Laboratory of the University of Bergamo. The ideal candidate should have a degree in Computer Science or Engineering (or equivalent) with a background in Software Engineering and, possibly, Robotics. The candidate should also be highly motivated to work on a robotic platform and have strong computer programming skills.

References:

Brugali, D. Model-driven Software Engineering in Robotics, IEEE Robotics and Automation Magazine, 22(3): 155-166, 2015.

Christian Schlegel, Andreas Steck, Alex Lotz. Robotic Software Systems: From Code-Driven to Model-Driven Software Development. In Ashish Dutta, editor, Robotic Systems - Applications, Control and Programming. Pages 473-502. InTech, ISBN 978-953-307-941-7, 2012.

Fitzpatrick, P., Metta, G., and Natale, L., Towards Long-Lived Robot Genes, Robotics and Autonomous Systems, Volume 56, Issue 1, pp. 29-45, Elsevier 2008.

Contacts: Lorenzo Natale (name.surname@iit.it) and Davide Brugali (name.surname@unibg.it)