[CaCL] This Thursday's CaCL

King, David L. king.2138 at buckeyemail.osu.edu
Sun Nov 10 20:31:37 EST 2019


https://arxiv.org/abs/1906.00347
Are You Looking? Grounding to Multiple Modalities in Vision-and-Language Navigation
Ronghang Hu, Daniel Fried, Anna Rohrbach, Dan Klein, Trevor Darrell, Kate Saenko

Vision-and-Language Navigation (VLN) requires grounding instructions, such as "turn right and stop at the door", to routes in a visual environment. The actual grounding can connect language to the environment through multiple modalities, e.g., "stop at the door" might ground into visual objects, while "turn right" might rely only on the geometric structure of a route. We investigate where the natural language empirically grounds under two recent state-of-the-art VLN models. Surprisingly, we discover that visual features may actually hurt these models: models which only use route structure, ablating visual features, outperform their visual counterparts in previously unseen environments on the benchmark Room-to-Room dataset. To better use all the available modalities, we propose to decompose the grounding procedure into a set of expert models with access to different modalities (including object detections) and ensemble them at prediction time, improving the performance of state-of-the-art models on the VLN task.
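To make the ensembling idea concrete for Thursday's discussion, here is a minimal sketch (not the authors' code) of combining "expert" navigation policies that each see a different modality, averaged at prediction time. The class structure, the .modality attribute, and the uniform averaging rule are assumptions for illustration only; the paper's exact combination scheme may differ.

import torch
import torch.nn.functional as F

def ensemble_action_distribution(experts, observations):
    """Average per-expert action distributions for the current step.

    experts      -- list of policies; each returns unnormalized action
                    scores (logits) from the single modality it was trained
                    on (e.g. visual features, route structure, detections)
    observations -- dict mapping modality name -> feature tensor for this step
    """
    probs = []
    for expert in experts:
        # Each expert only sees the modality it specializes in.
        logits = expert(observations[expert.modality])
        probs.append(F.softmax(logits, dim=-1))
    # Uniform mixture over experts; the paper ensembles experts at
    # prediction time, but this equal-weight rule is an assumption here.
    return torch.stack(probs, dim=0).mean(dim=0)

At navigation time the agent would pick the next action from this mixed distribution (e.g. with argmax), so an expert that ignores vision can still dominate when the instruction is purely structural ("turn right").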