PhD calls 2019



New PhD positions (with scholarship) are available in the Humanoid Sensing and Perception group at the iCub Facility, Istituto Italiano di Tecnologia.

The positions are available through the PhD course in Bioengineering and Robotics, curriculum in Advanced and Humanoid Robotics. Prospective candidates are invited to get in touch with Lorenzo Natale (name.surname@iit.it) for further details.

The official call can be found online: https://www.iit.it/phd-school/phd-school-genoal/

Pay particular attention to the ADMISSION GUIDE, which contains detailed instructions and important suggestions on how to apply, including the recommended template for the research project. Applications must be submitted through the University of Genova using the online service it provides (see the ADMISSION GUIDE).

Theme titles (see below for details):
  • Perception and Machine Learning for Manipulation
  • Multimodal object perception using vision and touch
  • Automated Planning under Uncertainties for Autonomous Robots
  • Active Touch and Behaviour


Deadline for application: June 12, 2019.


Perception and Machine Learning for Manipulation


Description: Machine learning, and in particular deep learning methods, have been applied with remarkable success to visual problems such as pedestrian detection, object retrieval, recognition and segmentation. In the robotics community, there has been growing interest in applying machine learning and data-driven approaches to object manipulation and grasping tasks. Adopting data-driven approaches in robotics is challenging: acquiring training examples is expensive, requiring hours or days of experiments and appropriate explorative actions, and deep-learning models are typically trained off-line, which does not allow the robot to adapt quickly when faced with a novel situation.

This project falls at the intersection of machine learning and robotics. The goal is to exploit machine learning to advance the capability of robots to interact with the environment by grasping and manipulating objects. The focus is on strategies that make learning autonomous, and on incremental machine learning techniques that allow the robot to adapt dynamically to novel situations (e.g. novel objects, changes in the scene).

Possible topics include:
  • Perception of affordances, from object detection to detection of object parts;
  • Scene segmentation;
  • Robot self-perception for visual control of manipulation;
  • Data-driven approaches to object grasping.
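
As a toy illustration of the kind of incremental learning we have in mind (a minimal sketch of our own, not the method of the references below), the following nearest-class-mean classifier absorbs new examples, and entirely new object classes, at a fixed per-example cost; on the robot, the feature vectors would typically be embeddings from a pretrained deep network:

    import numpy as np

    class NearestClassMean:
        """Toy incremental classifier: one running mean feature vector per class.
        Each update costs O(d), so new examples -- and entirely new classes --
        can be added online without retraining from scratch."""

        def __init__(self):
            self.means = {}   # class label -> running mean of its features
            self.counts = {}  # class label -> number of examples seen so far

        def update(self, feature, label):
            feature = np.asarray(feature, dtype=float)
            if label not in self.means:
                self.means[label] = feature.copy()
                self.counts[label] = 1
            else:
                self.counts[label] += 1
                # running-mean update: mean += (x - mean) / n
                self.means[label] += (feature - self.means[label]) / self.counts[label]

        def predict(self, feature):
            # assign the label of the nearest class mean
            return min(self.means,
                       key=lambda l: np.linalg.norm(feature - self.means[l]))

    # usage, with random features standing in for deep-network embeddings
    clf = NearestClassMean()
    clf.update(np.random.rand(128), "mug")
    clf.update(np.random.rand(128), "ball")
    print(clf.predict(np.random.rand(128)))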

This PhD project will be carried out within the Humanoid Sensing and Perception laboratory and the Laboratory for Computational and Statistical Learning. Experiments will be done on the R1 and iCub humanoid robots.

Requirements: The ideal candidate should have a degree in Computer Science or Engineering (or equivalent) and a background in Machine Learning, Robotics and possibly Computer Vision. He/she should also be highly motivated to work on a robotic platform and have strong computer programming skills.

References:
Maiettini, E., Pasquale, G., Rosasco, L., and Natale, L., Interactive Data Collection for Deep Learning Object Detectors on Humanoid Robots, in Proc. IEEE-RAS International Conference on Humanoid Robots, Birmingham, UK, 2017.

Pasquale, G., Ciliberto, C., Odone, F., Rosasco, L., and Natale, L., Teaching iCub to recognize objects using deep Convolutional Neural Networks, in Proc. 4th Workshop on Machine Learning for Interactive Systems, 2015.

Camoriano, R., Pasquale, G., Ciliberto, C., Natale, L., Rosasco, L., and Metta, G., Incremental Robot Learning of New Objects with Fixed Update Time, in Proc. IEEE International Conference on Robotics and Automation, Singapore, 2017, pp. 3207-3214.

Contacts: Lorenzo Natale and Lorenzo Rosasco (name.surname@iit.it)


Multimodal object perception using vision and touch


Description: Touch and, more generally, haptic information has recently been explored to overcome the limitations of conventional vision-based interaction in robotics (Higy et al. 2016). This is because some material and object properties (e.g. object weight, roughness), as well as objects hidden by occlusion, cannot be estimated with vision sensors. Most approaches in the literature, however, use tactile information in isolation and implement controlled strategies to explore objects and derive their properties.

The goals of this project are to implement autonomous exploration strategies driven by visual feedback (Vezzani et al. 2018) and investigate the integration of visual and haptic information for object discrimination. In the initial stage, we will consider the cases where multi-modal information is available during training and recognition. In the second stage, we will investigate how learning with multi-modal cues can help with object discrimination when only partial information is available (vision or touch). This research will be carried out on the iCub humanoid robot, which is equipped with vision cameras, tactile sensors (Jamali et al. 2015), position sensors and force/torque sensors. We will investigate methods for feature extraction considering features from Deep Convolutional Neural Networks (Pasquale et al. 2019) and machine learning methods (e.g. multi-view learning, learning with privileged information) for object discrimination using individual and combined features.
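
As a minimal illustration of feature-level fusion (a toy sketch on synthetic data, not the multi-view or privileged-information methods the project will actually investigate), the following assumes precomputed visual and tactile feature vectors and trains a single classifier on their concatenation:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # synthetic stand-ins for precomputed features: a visual embedding
    # (e.g. from a deep network) and a tactile descriptor per sample
    rng = np.random.default_rng(0)
    n, d_vis, d_tac = 200, 64, 16
    X_vis = rng.normal(size=(n, d_vis))
    X_tac = rng.normal(size=(n, d_tac))
    y = rng.integers(0, 5, size=n)  # 5 object classes

    # feature-level ("early") fusion: concatenate modalities, train one model
    X = np.hstack([X_vis, X_tac])
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    # if one modality is missing at test time, zero-imputing it is a crude
    # baseline; the project would study principled alternatives instead
    x_vision_only = np.hstack([X_vis[0], np.zeros(d_tac)])
    print(clf.predict(x_vision_only[None, :]))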

This PhD project will be carried out within the Humanoid Sensing and Perception laboratory and the Laboratory for Computational and Statistical Learning. Experiments will be done on the R1 and iCub humanoid robots.

Requirements: The ideal candidate should have a degree in Computer Science or Engineering (or equivalent) and a background in Machine Learning, Robotics and possibly Computer Vision. He/she should also be highly motivated to work on a robotic platform and have strong computer programming skills.

References:
Vezzani, G., Pattacini, U., Pasquale, G., and Natale, L., Improving Superquadric Modeling and Grasping with Prior on Object Shapes, in Proc. IEEE International Conference on Robotics and Automation, Brisbane, Australia, 2018.

Pasquale, G., Ciliberto, C., Odone, F., Rosasco, L., and Natale, L., Are we done with object recognition? The iCub robot’s perspective, Robotics and Autonomous Systems, vol. 112, pp. 260-281, 2019.

Higy, B., Ciliberto, C., Rosasco, L., and Natale, L., Combining Sensory Modalities and Exploratory Procedures to Improve Haptic Object Recognition in Robotics, in Proc. IEEE-RAS International Conference on Humanoid Robots, Cancun, Mexico, 2016, pp. 117-124.

Jamali, N., Maggiali, M., Giovannini, F., Metta, G., and Natale, L., A New Design of a Fingertip for the iCub Hand, in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, Hamburg, Germany, 2015, pp. 1799-1805.

Contacts: Lorenzo Natale and Lorenzo Rosasco (name.surname@iit.it)


Automated Planning under Uncertainties for Autonomous Robots


Description: As the robotics community develops more sophisticated perception, navigation, and manipulation methods, we would like to employ them in an automated planning framework to let robots carry out complex tasks autonomously. The complexity of such tasks derives from very long time horizons and from fundamental uncertainty about the current state of the robot and the outcome of its actions. In general, improving the robot's perception is insufficient to remove all of these uncertainties (Kaelbling et al.). The robot must explicitly take actions to gain information: look in a drawer, remove an occluding object, or ask someone a question.

In automated planning, the importance of action execution has been largely underestimated (Ghallab et al.), and insufficient attention has been given to the fundamental uncertainties above. Consequently, and in contrast with successful AI fields such as machine learning, the adoption of automated planning techniques in robotics applications has remained relatively limited despite their large potential.

The goal of this project is to develop an automated planning framework that takes these uncertainties into account to carry out complex tasks such as "find a lost key" or "bring me a soda". This research will be carried out on the R1 humanoid robot, which is equipped with vision cameras, tactile sensors, and position sensors. We will also investigate the use of different task representation methods to improve the planning framework.
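
As a deliberately simplified illustration of the belief-space reasoning involved (a toy sketch, not the planning framework to be developed), the following maintains a discrete belief over candidate key locations and updates it with Bayes' rule after each noisy "look" action:

    import numpy as np

    # belief: probability distribution over where the lost key might be
    locations = ["drawer", "table", "shelf", "floor"]
    belief = np.full(len(locations), 1.0 / len(locations))

    P_SEE_IF_PRESENT = 0.8   # chance of detecting the key when it is there
    P_SEE_IF_ABSENT = 0.05   # false-positive rate of the detector

    def look(belief, i, saw_key):
        """Bayes update of the belief after looking at location i."""
        at_i = np.arange(len(belief)) == i
        likelihood = np.where(
            at_i,
            P_SEE_IF_PRESENT if saw_key else 1.0 - P_SEE_IF_PRESENT,
            P_SEE_IF_ABSENT if saw_key else 1.0 - P_SEE_IF_ABSENT,
        )
        posterior = likelihood * belief
        return posterior / posterior.sum()

    # greedy information gathering: look where the key is most likely;
    # each negative observation shifts probability to the other locations
    for _ in range(3):
        i = int(np.argmax(belief))
        belief = look(belief, i, saw_key=False)
        print("looked in", locations[i], "-> belief", np.round(belief, 2))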

Experiments will be done on the R1 and iCub humanoid robots.

Requirements: The ideal candidate should have a degree in Computer Science or Engineering (or equivalent) and a background in Robotics and AI Planning. He/she should also be highly motivated to work on a robotic platform and have strong computer programming skills.

References:
Kaelbling, L.P. and Lozano-Pérez, T., Integrated task and motion planning in belief space, The International Journal of Robotics Research, vol. 32, no. 9-10, pp. 1194-1227, 2013.

Ghallab, M., Nau, D., and Traverso, P., The actor's view of automated planning and acting: A position paper, Artificial Intelligence, vol. 208, pp. 1-7, 2014.

Contacts: Lorenzo Natale and Michele Colledanchise (name.surname@iit.it)


Active Touch and Behaviour


Description: This research theme is part of a Marie Skłodowska-Curie European Training Network on the development of neuromorphic tactile sensing for prosthetic and robotic applications (www.neutouch.eu). The goal of the research is the implementation of spiking tactile neural encoding on a humanoid robot and the development of decoding strategies for perception and behaviour generation. The robot will be a testbed to reproduce biological touch experiments (in collaboration with SISSA) on the perception of multiple stimulus properties arising from active object exploration, including light pressure, vibration, texture, lateral motion, and stretch.

The task will be active exploration and modelling of an object using tactile and visual information. Visual information will be used to form an initial guess of the object shape; tactile information will be used to refine and complement it, using features that characterize the local curvature of the object and areas that are not visible (due to occlusion). This project will use the algorithms developed within research themes 5 and 6 of the network to classify local features from the object, and machine learning models (e.g. Gaussian Processes) to model the object surface and provide an accurate shape. In the final part of the project, we will validate the surface reconstruction method in the context of object grasping, using state-of-the-art techniques that rely on object models. For comparative evaluation, we will use a dataset of objects for which accurate models are available (the Yale-CMU-Berkeley Object Set, a dataset for object manipulation benchmarking).
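
As a minimal illustration of Gaussian-Process surface modelling from sparse tactile contacts (a toy sketch assuming a simple height-map representation, not the full 3D pipeline of the project), note how the GP posterior variance also tells the robot where to touch next:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # sparse contacts on a toy dome-shaped object, as a height map z = f(x, y)
    rng = np.random.default_rng(0)
    contacts_xy = rng.uniform(-1, 1, size=(15, 2))        # contact locations
    contacts_z = np.exp(-np.sum(contacts_xy**2, axis=1))  # measured heights

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-3)
    gp.fit(contacts_xy, contacts_z)

    # the GP mean reconstructs the surface; the posterior std flags regions
    # the robot has not yet explored, suggesting the next contact point
    grid = np.stack(np.meshgrid(np.linspace(-1, 1, 20),
                                np.linspace(-1, 1, 20)), axis=-1).reshape(-1, 2)
    mean, std = gp.predict(grid, return_std=True)
    print("touch next at", grid[np.argmax(std)])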

This PhD project will be carried out within the Event-Driven Perception for Robotics and the Humanoid Sensing and Perception laboratories. Experiments will be done on the R1 and iCub humanoid robots.

Requirements: The ideal candidate should have a degree in Computer Science or Engineering (or equivalent) and a background in Machine Learning, Robotics and possibly Computer Vision. He/she should also be highly motivated to work on a robotic platform and have strong computer programming skills.

References:
Pasquale, G., Ciliberto, C., Odone, F., Rosasco, L., and Natale, L., Are we done with object recognition? The iCub robot’s perspective, Robotics and Autonomous Systems, vol. 112, pp. 260-281, 2019.

Higy, B., Ciliberto, C., Rosasco, L., and Natale, L., Combining Sensory Modalities and Exploratory Procedures to Improve Haptic Object Recognition in Robotics, in Proc. IEEE-RAS International Conference on Humanoid Robots, Cancun, Mexico, 2016, pp. 117-124.

C. Bartolozzi, P. M. Ros, F. Diotalevi, N. Jamali, L. Natale, M. Crepaldi, and D. Demarchi. Event-driven encoding of off-the-shelf tactile sensors for compression and latency optimisation for robotic skin. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 166–173, Sept 2017.

Contacts: Chiara Bartolozzi and Lorenzo Natale (name.surname@iit.it)