Research
I'm interested in computer vision, machine learning, and robotics. My work focuses on scene representations for robotics, along with modular and interpretable policies that use these representations to perform tasks.
Zero-shot Object-Centric Instruction Following: Integrating Foundation Models with Traditional Navigation
Sonia Raychaudhuri, Duy Ta, Katrina Ashton, Angel X. Chang, Jiuguang Wang, Bernadette Bucher
arXiv, 2025
arxiv /
website /
A zero-shot method that grounds natural language instructions in a factor graph, paired with a policy that uses this representation for object-centric Vision-and-Language Navigation
Multimodal LLM Guided Exploration and Active Mapping using Fisher Information
Wen Jiang*, Boshu Lei*, Katrina Ashton, Kostas Daniilidis
arXiv, 2024
arxiv /
A Gaussian Splatting SLAM system with an active exploration strategy that balances mapping and localization accuracy
Equivariant Filter for Feature-Based Homography Estimation for General Camera Motion
Tarek Bouazza, Katrina Ashton, Pieter van Goor, Robert Mahony, Tarek Hamel
Conference on Decision and Control (CDC), 2024
paper /
Homography estimation for arbitrary camera trajectories in the Equivariant Filter framework, exploiting the Lie group structure of SL(3)
Unordered Navigation to Multiple Semantic Targets in Novel Environments
Bernadette Bucher*, Katrina Ashton*, Bo Wu, Karl Schmeckpeper, Nikolai Matni, Georgios Georgakis, Kostas Daniilidis
CVPR Embodied AI Workshop, 2023
paper /
An objective that extends an object-goal navigation method to unordered multi-object navigation
An observer for infinite dimensional 3D surface reconstruction that converges in finite time
Sean G. P. O’Brien, Katrina Ashton, Jochen Trumpf
IFAC-PapersOnLine, 2020
paper /
An observer that provably converges in finite time for reconstructing the dense structure of scenes from visual or depth sensors, demonstrated on real light-field camera data