Advanced assistive technologies such as robots must go beyond being mere service-on-request devices and be equipped with the ability to coexist in social environments. Many development efforts concentrate on refining robots' capabilities to understand complex social nuances. In this paper, we argue for shifting this focus: rather than aiming for flawless operation through optimized context understanding, strong emphasis should be placed on designing robot systems for intervenability. The concept of intervenability, known primarily from the IT security and privacy domain, describes a systemic property that allows humans to meaningfully step into robotic operation. To contextualize intervenability design relative to other approaches, we position it within a design space that represents development processes along the dimensions of optimizing opportunities and mitigating risks. We show that designing for intervenability can overcome common boundaries, as it enables delegating part of the context-understanding complexity to humans in situ. Examples from the domain of robotic life assistants illustrate the argument.