Our research focuses on
- Multi-modal perception and data fusion
- Reinforcement learning
- Data-driven decision making (3DM)
for autonomous systems and autonomous multi-agent systems.
Multi-modal perception and data fusion
- Perception modeling (lidar-radar-camera combinations)
- AI-based, physical, and hybrid sensor models
- Sensor fault prediction and detection
- Dynamic sensing of the vehicle environment considering detection uncertainties
- Fusion of different sensor sources to increase the reliability of autonomous systems (probabilistic, evidence theory-based, semantics-oriented methods, etc.); a minimal fusion sketch follows this list
- Statistical methods for data association
- Dynamic occupancy grid filtering
- Multiple target tracking (MTT) and extended object tracking
- Multiple detections per object and from multiple targets
- E.g., the Gaussian mixture probability hypothesis density (GM-PHD) tracker or the gamma Gaussian inverse Wishart (GGIW) tracker
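As a simple illustration of probabilistic sensor fusion, the sketch below combines a lidar and a radar position estimate of the same object by inverse-covariance (information-form) weighting. The sensor noise values and measurements are hypothetical placeholders, not our actual sensor models.

```python
import numpy as np

def fuse_gaussian_estimates(means, covariances):
    """Fuse independent Gaussian estimates of the same state by
    inverse-covariance (information-form) weighting."""
    info_matrix = np.zeros_like(covariances[0])
    info_vector = np.zeros_like(means[0])
    for mean, cov in zip(means, covariances):
        cov_inv = np.linalg.inv(cov)
        info_matrix += cov_inv
        info_vector += cov_inv @ mean
    fused_cov = np.linalg.inv(info_matrix)
    fused_mean = fused_cov @ info_vector
    return fused_mean, fused_cov

# Hypothetical 2D position estimates of the same object:
# lidar is precise, radar is noisier but still informative.
lidar_mean = np.array([10.2, 4.1])
lidar_cov = np.diag([0.05, 0.05])
radar_mean = np.array([10.6, 3.8])
radar_cov = np.diag([0.50, 0.50])

fused_mean, fused_cov = fuse_gaussian_estimates(
    [lidar_mean, radar_mean], [lidar_cov, radar_cov])
print("fused position:", fused_mean)
print("fused covariance:\n", fused_cov)  # tighter than either input
```

The fused covariance is smaller than either individual one, which is the basic mechanism by which combining heterogeneous sensors increases reliability.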
Reinforcement learning (RL)
- Bayesian modeling and simulation
- Dynamic programming
- Learning from limited samples / sample efficiency
- Interacting with systems that are partially observable
- Learning from multi-objective or poorly specified reward functions
- Stochastic environments
- Many real-world environments are stochastic, making it difficult for RL agents to learn effectively
- Transfer learning, i.e. transfer of knowledge from one task or environment to another
- RL in path planning of autonomous systems (see the sketch after this list)
- RL for multi-agent systems
- Deep learning implementations (deep RL)
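A minimal, hypothetical example of RL for path planning in a stochastic environment: tabular Q-learning on a small gridworld where the intended move occasionally slips to a random direction. Grid size, slip probability, and rewards are illustrative assumptions, not settings from our projects.

```python
import numpy as np

rng = np.random.default_rng(0)

SIZE = 4                      # 4x4 gridworld, goal in the bottom-right corner
GOAL = (SIZE - 1, SIZE - 1)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
SLIP_PROB = 0.2               # chance the agent slips to a random direction

def step(state, action_idx):
    """Apply an action; with SLIP_PROB the environment picks a random move."""
    if rng.random() < SLIP_PROB:
        action_idx = rng.integers(len(ACTIONS))
    dr, dc = ACTIONS[action_idx]
    r = min(max(state[0] + dr, 0), SIZE - 1)
    c = min(max(state[1] + dc, 0), SIZE - 1)
    next_state = (r, c)
    reward = 10.0 if next_state == GOAL else -1.0   # step cost rewards short paths
    return next_state, reward, next_state == GOAL

Q = np.zeros((SIZE, SIZE, len(ACTIONS)))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(2000):
    state, done = (0, 0), False
    while not done:
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(np.argmax(Q[state]))
        next_state, reward, done = step(state, a)
        # Q-learning temporal-difference update
        target = reward + (0.0 if done else gamma * np.max(Q[next_state]))
        Q[state][a] += alpha * (target - Q[state][a])
        state = next_state

print("greedy action per cell:\n", np.argmax(Q, axis=2))
```

Because transitions are stochastic, the agent must average over many episodes to learn a reliable policy, which is exactly the sample-efficiency challenge noted above.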
Decision making under uncertainty
- (Partially observable) Markov decision processes (MDPs and POMDPs); see the value iteration sketch after this list
- Modeling of uncertainties in perception and signal propagation (model and state uncertainty)
- Simulation of collaborative autonomous multi-agent systems
- Distributed and collaborative decision making
- Data-driven decision making (3DM) and decision support leveraging AI
- Large language models (LLMs)
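To make the MDP side concrete, below is a hedged sketch of value iteration for a tiny, made-up two-state maintenance decision problem; the states, actions, transition probabilities, and rewards are illustrative assumptions only.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP: states {0: "nominal", 1: "degraded"},
# actions {0: "continue", 1: "request maintenance"}.
# P[a][s][s'] = transition probability, R[a][s] = expected immediate reward.
P = np.array([
    [[0.9, 0.1],    # continue: nominal mostly stays nominal
     [0.0, 1.0]],   # continue: degraded stays degraded
    [[1.0, 0.0],    # maintain: nominal stays nominal
     [0.8, 0.2]],   # maintain: degraded is usually repaired
])
R = np.array([
    [ 1.0, -2.0],   # continue: reward in nominal, penalty in degraded
    [-0.5, -1.0],   # maintain: small cost in either state
])

gamma = 0.95
V = np.zeros(2)
for _ in range(1000):
    # Bellman optimality backup: V(s) = max_a [ R(a,s) + gamma * sum_s' P(a,s,s') V(s') ]
    Q = R + gamma * (P @ V)
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=0)
print("optimal values:", V)
print("optimal policy (0=continue, 1=maintain):", policy)
```

In the partially observable case the same backup operates on belief states rather than discrete states, which is where the perception and state uncertainty modeling listed above enters the decision problem.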