In the last few years, the RoboHelper project (supported by NSF award IIS 0905593) has explored the development of robots tailored to the needs of the elderly (Di Eugenio et al., 2010a). We collected the multimodal ELDERLY-AT-HOME corpus, in which an assistant collaborates with an elderly person in performing Activities of Daily Living (ADLs). The project has focused on building a multimodal interface for communication between the elderly person and the robot, since our data collection confirms that, beyond language, gestures and haptic actions (gestures that involve touch) are pervasive in this kind of interaction. The corpus has been annotated with several types of information and will be made available in due course. We have developed a multimodal dialogue manager that performs multimodal reference resolution (Chen & Di Eugenio, 2012), models the fact that these interactions comprise not only dialogue acts but also physical actions, and predicts the next dialogue act on the basis of the preceding multimodal signals (Chen & Di Eugenio, 2013).