Diego Resende Faria, Ph.D.
   Lecturer in Computer Science
   School of Engineering and Applied Science
   Aston University, Birmingham, UK

 

 Research topics   


 Probabilistic Human Daily Activity Recognition towards Robot-assisted Living                                                                              

   Objectives
    - Develop a probabilistic framework for human-robot interaction suitable for robot-assisted living;
    - Develop an autonomous system that uses machine learning techniques to exploit different sources of sensory information;
    - Extract human actions from body motion patterns;
    - Develop decision making that gives the robot proactive initiative to prompt and support a human in an indoor environment.

    

Achievements: A novel framework for human Activity Recognition (AR) was developed, centered on a probabilistic ensemble of classifiers called the Dynamic Bayesian Mixture Model (DBMM). The DBMM combines the confidence beliefs of multiple base classifiers into a single posterior, weighting each classifier's likelihood according to an uncertainty measure. Discriminative spatio-temporal features are extracted from the human skeleton obtained from RGB-D data. The approach was assessed on well-known human daily activity datasets (CAD-60: Cornell Activity Dataset; UTKinect: University of Texas; MSR-Action3D and MSR-DailyActivity: Microsoft Research) and on a mobile robot for assisted living, achieving overall accuracies greater than 90%. A real-time AR application, including detection of risk situations, was implemented in ROS (Robot Operating System) for robot-assisted living.
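As an illustration of the fusion rule described above, the sketch below combines the posteriors of several base classifiers using weights derived from an uncertainty measure, with the previous time step's posterior acting as a dynamic prior. The inverse-entropy weighting and all numbers are illustrative assumptions, not the exact scheme from the papers.

```python
# Minimal DBMM-style fusion sketch (illustrative; not the authors' code).
import numpy as np

def entropy_weights(posteriors):
    """Weight each base classifier by the inverse entropy of its output,
    so more confident classifiers get larger weights (assumed scheme)."""
    H = np.array([-(p * np.log(p + 1e-12)).sum() for p in posteriors])
    w = 1.0 / (H + 1e-12)
    return w / w.sum()                      # weights sum to 1

def dbmm_step(posteriors, prev_posterior):
    """One time slice: weighted mixture of base classifiers times the
    previous posterior (the 'dynamic' prior), then normalization."""
    w = entropy_weights(posteriors)
    mixture = sum(wi * p for wi, p in zip(w, posteriors))
    fused = prev_posterior * mixture
    return fused / fused.sum()              # normalization (beta)

# Example: three base classifiers over four activity classes.
p1 = np.array([0.70, 0.10, 0.10, 0.10])
p2 = np.array([0.40, 0.30, 0.20, 0.10])
p3 = np.array([0.55, 0.25, 0.10, 0.10])
prior = np.full(4, 0.25)                    # uniform prior at t = 0
print(dbmm_step([p1, p2, p3], prior))
```

Using the previous posterior as the prior is what makes the mixture "dynamic": consistent evidence accumulates across time slices.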


An overview of the DBMM approach for human daily activity recognition is presented below.
Figure: Dynamic Bayesian Mixture Model approach.


 Future Work
     - Extend the activity recognition framework to recognize multiple activities happening in parallel in the robot's field of view (two or more subjects);
     - Extend the activity recognition framework to social behavior classification.


 Videos / Source Code / Dataset
Video 1: Human daily activity recognition for robot-assisted living (https://youtu.be/FAfLj28_iSM)

Video 2: Activity recognition, anticipating the human trajectory to avoid collision (https://youtu.be/xVQtIAXjsZw)

Dataset 

Source Code 

 


 Publications (PDFs are available on the Publications page, accessible from the main menu)
  • D. R. Faria, C. Premebida, U. Nunes, "Dynamic Bayesian Mixture Model: Probabilistic Classification for Human Daily Activity Recognition", submitted to a journal (under review).
  • D. R. Faria, M. Vieira, C. Premebida, U. Nunes, "Probabilistic Human Daily Activity Recognition towards Robot-assisted Living", Proceedings of IEEE RO-MAN'15: IEEE International Symposium on Robot and Human Interactive Communication, Kobe, Japan, August 2015.
  • M. Vieira, D. R. Faria, U. Nunes, "Real-time Application for Monitoring Human Daily Activities and Risk Situations in Robot-assisted Living", Proceedings of ROBOT'15: 2nd Iberian Robotics Conference, Lisbon, Portugal, 2015.
  • D. R. Faria, C. Premebida, U. Nunes, "A Probabilistic Approach for Human Everyday Activities Recognition using Body Motion from RGB-D Images", IEEE RO-MAN'14: IEEE International Symposium on Robot and Human Interactive Communication, Edinburgh, Scotland, UK, August 25-29, 2014. * Finalist for the Kazuo Tanie Award (for work focusing on a practical application or one that can be applied to real products).
       MSc Thesis
  • M. Vieira, "Recognition of Daily Activities and Risk Situations towards Robot-assisted Living", M.Sc. Thesis, Universidade de Coimbra, Portugal, 2015. Supervised by Dr Diego R. Faria and Professor Urbano Nunes.

Child-Robot Interaction                                                                                                                                                              
 

 

General Objective: Interdisciplinary work involving experts in robotics and psychology, applied to child-robot interaction (robotherapy) with the aim of facilitating adaptive health-related coping and improving quality-of-life outcomes in pediatric settings.

  • Scientific/Technological goals:
     - Develop a framework for estimating the child's emotional state by combining facial expressions and body language (motion); see the sketch after this list;
     - Develop an autonomous system that uses machine learning techniques to exploit different sources of sensory information;
     - Develop a module for robot reactions according to the child's current emotional state;
     - Develop a Learning-by-Imitation framework based on kinesthetic movements.
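As a first approximation of the emotion-estimation goal above, the hypothetical sketch below fuses the outputs of a facial-expression classifier and a body-motion classifier under a conditional-independence (naive Bayes) assumption. The state set, probabilities and function names are all illustrative, not part of the actual framework.

```python
# Hypothetical fusion of two per-cue emotion classifiers (illustrative).
import numpy as np

STATES = ["happy", "neutral", "anxious"]

def fuse_cues(p_face, p_body, prior=None):
    """Product-rule fusion of two posteriors over STATES, assuming the
    cues are conditionally independent given the emotional state."""
    prior = np.ones(len(STATES)) / len(STATES) if prior is None else prior
    fused = prior * np.asarray(p_face) * np.asarray(p_body)
    return fused / fused.sum()

p_face = [0.6, 0.3, 0.1]   # e.g. output of a facial-expression classifier
p_body = [0.5, 0.2, 0.3]   # e.g. output of a body-motion classifier
for state, p in zip(STATES, fuse_cues(p_face, p_body)):
    print(f"{state}: {p:.2f}")
```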
  
  

Achievements: We have started programming the humanoid robot NAO (Aldebaran Robotics) to endow it with the skills needed to interact with children. Initially, we prepared a script to control the robot's actions and reactions online during the child-robot interaction via tele-operation: an expert in robotics selects appropriate reactions given the child's input and feedback (speech and gestures), while the robot still makes some autonomous decisions in response to that feedback. Experiments with six children (boys and girls between 5 and 8 years old) were carried out, covering different types of interaction: verbal, gestural and physical. The experiments were monitored by a psychologist to assess the children's performance and reactions, and the acceptance of the robot by the children and their parents during the CRI. Questionnaires were administered to the parents to quantify these parameters.

CRI: verbal communication, gestural interaction and imitation (playing games, talking and doing stretching).

 

  •     Psychology side:
            - Categorize the emotional reactions of the child and parents;
            - Assess the child's and parents' acceptance of the interaction with the robot;
            - Analyze the degree of concordance of emotional responses between parents and child.

  Emotional Reactions to Child-Robot Interaction: An Exploratory Study (carried out by Dr Carlos Carona, psychologist, CINEICC, Portugal)

 Background: Child-Robot Interaction (CRI) has been conceptualized as an intervention context with a number of potential applications in school (e.g., modeling learning processes, development of perceived self-efficacy), therapy (e.g., facilitation of coping skills) and health-promotion settings (e.g., training of health-related skills). Current empirical evidence suggests that the use of robots in a therapeutic context is likely to enhance the outcomes of cognitive-behavioral interventions, for example. However, the use of robots to promote child development and adaptation raises a number of ethical and pragmatic questions; moreover, mothers acting as their child's primary caregivers are the most important attachment figures for modeling children's emotional and behavioral reactions to stressful or strange situations. Empirical evidence therefore remains scarce as regards the psychometric assessment of mothers' and their children's emotional reactions to CRI.
 
 Objectives:
 This exploratory study aimed to examine mothers' and their children's subjective emotional reactions to a structured ludic experience of CRI: first, by identifying the most frequent emotional responses experienced immediately after CRI; and then, by analyzing the differences between mothers' and their children's emotional reactions.
 
 Method:
The Emotional Assessment Scale (EAS) – a visual-analogue instrument (intensity of emotional responses coded from 0 to 100) designed for assessing the emotional reactions of Surprise, Fear, Anger, Guilt, Anxiety, Sadness, Disgust and Happiness – was administered to a convenience sample of mothers and one of their children (aged between 5 and 8 years; n = 12), immediately after a structured ludic CRI experience (i.e., introductory dialogue, a figure-recognition game, dance and physical exercises). The scale was administered to mothers in both self-report (i.e., assessing their own emotional reactions) and proxy-report (i.e., assessing their children's emotional reactions) formats. Cronbach's alphas were computed to assess the instrument's reliability in this study's samples, and the Wilcoxon signed-rank nonparametric test was used to detect differences between the emotional reactivity experienced by mothers and that reported to be experienced by their children.
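The two analyses named above (Cronbach's alpha and the Wilcoxon signed-rank test) can be reproduced with standard tools. The sketch below uses made-up ratings purely to show the computations; it does not use the study's data.

```python
# Reliability and paired-comparison analyses as named in the Method.
# All ratings below are synthetic, for demonstration only.
import numpy as np
from scipy.stats import wilcoxon

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of ratings, e.g. EAS scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
child_eas = rng.integers(0, 101, size=(12, 8))   # 12 children x 8 emotions
print("alpha:", round(cronbach_alpha(child_eas), 2))

# Paired mother/child comparison for one emotion (e.g. Fear).
child_fear = rng.integers(0, 101, size=12)
mother_fear = rng.integers(0, 101, size=12)
stat, p = wilcoxon(child_fear, mother_fear)
print(f"Wilcoxon: W = {stat}, p = {p:.3f}")
```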
 
 Preliminary Results:
 The obtained sample included children of both genders (50% girls, 50% boys), with a mean age of 6.7 years (SD = 1.2). For the global construct of "emotional reactivity", excellent and acceptable levels of internal consistency were observed for the samples of children (α = .87) and their mothers (α = .68), respectively. The most frequent emotional reactions (≥ 60% of reported intensity) experienced by children and their mothers were Surprise (M = 65.3, SD = 17.4 / M = 64.6, SD = 18.9) and Happiness (M = 74.8, SD = 10.3 / M = 85.4, SD = 3.3). With the exception of Anxiety (M = 29.2, SD = 25.9 / M = 8.9, SD = 12.5), the remaining emotions of Fear, Anger, Guilt, Sadness and Disgust showed a mean reported intensity below 20% for both children and their mothers. There were no significant differences between mothers and their children in the levels of Surprise, Anger, Sadness, Disgust and Happiness; however, children were reported to experience more Fear (Z = -2.02, p = .04) and Anxiety (Z = -2.20, p = .03), and less Guilt (Z = -2.02, p = .04), than their mothers.
EAS: Intensity of Emotional Reactivity during CRI

 Discussion: These preliminary results suggest that CRI may be a context where positive emotions, such as happiness and surprise, tend to be elicited in both children and their mothers. The observed differences in the experience of Anxiety and Fear may reflect the distinctive adaptive function that these emotions assume when children, at this developmental stage, face new, unknown situations.

Work still in progress.

Grasping and Dexterous Manipulation                                                                                                                                          

 

How can we endow an artificial system with appropriate cognitive skills (i.e., advanced perception capabilities) in order to grasp and manipulate everyday objects in the most autonomous and natural way possible?

 To answer this question, this research builds on the fact that humans excel at everyday manipulation tasks and can learn new skills to adapt to different complex environments. Human abilities result from lifelong learning, as well as from observing other skilled humans. To obtain similar dexterity with robotic hands, cognitive capacity is needed to deal with uncertainty. By extracting relevant multi-sensor information from the environment (objects), knowledge from previous grasping tasks can be generalized and applied in different contexts. Following this strategy, my research has shown that learning from human experience is a viable route to robot grasp synthesis for unknown objects. During my Ph.D. research, different subtopics of the grasping area were studied; the interrelation between them is shown in Figure 1. An application was proposed for in-hand exploration of objects, representing the object shape with a probabilistic volumetric map, along with object identification by in-hand exploration using a probabilistic approach (Gaussian Mixture Models for learning, with signatures obtained from Gaussian Mixture Regression). Task modeling (at the trajectory level: 3D movements) and recognition were developed using Bayesian techniques.
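To make the volumetric-map idea concrete, here is a minimal sketch assuming a log-odds occupancy grid updated from surface points measured during in-hand exploration. The grid resolution, sensor model and points are illustrative assumptions, not the implementation from the thesis.

```python
# Log-odds occupancy grid sketch for in-hand shape exploration (illustrative).
import numpy as np

RES, DIM = 0.005, 40                    # 5 mm voxels, 40^3 grid (assumed)
log_odds = np.zeros((DIM, DIM, DIM))    # log-odds 0 => p(occupied) = 0.5
L_OCC = np.log(0.7 / 0.3)               # assumed hit-likelihood ratio

def integrate_points(points):
    """Raise occupancy belief for voxels containing measured surface points."""
    idx = np.floor(points / RES).astype(int)
    idx = idx[(idx >= 0).all(axis=1) & (idx < DIM).all(axis=1)]
    for i, j, k in idx:
        log_odds[i, j, k] += L_OCC

def occupancy_prob():
    """Convert log-odds back to occupancy probabilities."""
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds))

# Example: integrate two fingertip-contact points (in meters).
integrate_points(np.array([[0.010, 0.020, 0.030], [0.012, 0.020, 0.031]]))
print(occupancy_prob().max())
```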

 

Interrelation between grasp topics.

 

Some topics addressed in the past:

  • Grasp Synthesis: Generating proper grasps given an object point cloud

 

   

 

 

 

  • Object In-hand Exploration: Probabilistic Object Volumetric Map (occupancy grid)

 

       

 

 

  • Combining Visual and In-hand Manipulation Data using Occupancy Grid

 

  • Object Segmentation: (i) GMM-based and (ii) simple segmentation based on geometrical properties

  • Representation by Superquadrics towards Estimating Graspable Regions

 

  • Associating Grasp Types and Object Regions based on Human Demonstrations

 

  • Learning and Estimation of Object Graspable Regions using Bayesian Techniques

 

 

 

  • Simulating Grasp Synthesis given an unknown object using a Dexterous Hand (Shadow Hand); see the code sketch after this list:

    • Segmenting Object Point Cloud

    • Shape Representation by Superquadrics (SQ)

    • Estimating Graspable Regions

    • Estimating Possible Grasp Types

    • Using Eigen Grasps to Map Grasp Types to the Robotic Hand 
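The skeleton below mirrors the pipeline listed above purely to show the data flow. Every stage is a deliberately simplified stand-in (a bounding box instead of a superquadric fit, a size-based rule instead of learned grasp types, no eigen-grasp mapping), so it is a sketch, not the actual system.

```python
# Toy grasp-synthesis pipeline: segment -> shape -> region -> grasp type.
import numpy as np

def segment(cloud):
    """Stage 1 stand-in: keep points above an assumed table plane z = 0."""
    return cloud[cloud[:, 2] > 0.0]

def fit_shape(cloud):
    """Stage 2 stand-in: axis-aligned bounding box instead of a superquadric."""
    return cloud.min(axis=0), cloud.max(axis=0)

def graspable_region(shape):
    """Stage 3 stand-in: centre of the top face as the candidate region."""
    lo, hi = shape
    return np.array([(lo[0] + hi[0]) / 2, (lo[1] + hi[1]) / 2, hi[2]])

def grasp_type(shape):
    """Stage 4 stand-in: pick a grasp type from object size (toy rule)."""
    lo, hi = shape
    return "precision pinch" if (hi - lo).max() < 0.06 else "power grasp"

cloud = np.random.default_rng(1).uniform(-0.05, 0.05, size=(500, 3))
obj = segment(cloud)
shape = fit_shape(obj)
print(grasp_type(shape), "at", graspable_region(shape).round(3))
```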

 

 

 

  • Experiments using a Dexterous Robotic Hand (Shadow Hand): Estimating the Grasp Type given the object point cloud

  

 

  • Exploring the object shape to recognize it (using grasp types and object point cloud)

 

 

  • Using Multimodal Data to Extract Relevant Information about Grasping

 

 

  • Segmenting Hand Motions and Recognizing Grasp Trajectories

 

 

 

 

 

 

Publications (PDFs are available on the Publications page, accessible from the main menu)


Semantic Place Recognition in Mobile Robotics                                                                                                            

Work in Collaboration with Dr Cristiano Premebida

General Objectives: Address the problem of semantic place categorization in mobile robotics by considering a time-based probabilistic approach called the Dynamic Bayesian Mixture Model (DBMM), an improved variation of the Dynamic Bayesian Network (DBN). More specifically, multi-class semantic classification is performed by a DBMM composed of a mixture of heterogeneous base classifiers, using geometrical features computed from 2D laser-scanner data, with the sensor mounted on board a mobile robot operating indoors. Besides its ability to combine different probabilistic classifiers, the DBMM approach also incorporates time-based (dynamic) inferences in the form of previous class-conditional probabilities and priors.

Achievements: Extensive experiments were carried out on publicly available benchmark datasets (IDOL: Image Database for rObot Localization, and the COLD Saarbrücken dataset, both acquired with a mobile robot), highlighting the influence of the number of time slices and the effect of additive smoothing on the classification performance of the proposed approach. Reported results, under different scenarios and conditions, show the effectiveness and competitive performance of the DBMM with different numbers of time slices. Classification accuracy was measured with the F-measure (F1 score, the harmonic mean of precision and recall), and the overall performance was greater than 90%.

Illustrative representation of the DBMM approach with time-slices. The posterior depends on the priors P(Ck), the combined probabilities from the base-classifiers and the normalization factor (beta).
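From that caption, the posterior can be reconstructed as follows; this is a hedged interpretation of the figure, not the paper's exact notation:

```latex
% DBMM posterior at time slice t: previous-slice prior times a weighted
% mixture of N base-classifier likelihoods, normalized by beta.
P(C_k^{t} \mid A^{t}) = \beta \, P(C_k^{t-1}) \sum_{i=1}^{N} w_i \, P_i(A^{t} \mid C_k),
\qquad \sum_{i=1}^{N} w_i = 1, \qquad
\beta = \Bigg( \sum_{k} P(C_k^{t-1}) \sum_{i=1}^{N} w_i \, P_i(A^{t} \mid C_k) \Bigg)^{-1}
```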

 

Results show the evolution of the F-measure over values of the additive-smoothing parameter alpha and time slices T = [0, ..., 4] for the four experiments on the IDOL dataset. The curves clearly demonstrate the improvement in DBMM performance when the 'dynamic' part is considered; please see the publications below for the full results.

 

For more details about this research and its results, see the publications listed below.

Publications

  • C. Premebida, D. R. Faria, U. Nunes, "Dynamic Bayesian Network for Semantic Place Classification in Mobile Robotics", Autonomous Robots (AURO), Springer, 2015. (Under review)

  • C. Premebida, D. R. Faria, F. A. de Souza, U. Nunes, "Applying Probabilistic Mixture Models to Semantic Place Classification in Mobile Robotics", Proceedings of IEEE IROS'15: IEEE/RSJ International Conference on Intelligent Robots and Systems, Hamburg, Germany, 2015.


                                       Research Statement: past, current and future research