\chapter{Related Work and State of the Art}
\label{ch:related_work}

\section{Existing Frameworks}

The HRI community has a long history of building custom tools to support WoZ studies. Early efforts focused on providing robust interfaces for technical users. Polonius \cite{Lu2011}, for example, was designed to give robotics engineers a flexible way to create experiments for their collaborators, emphasizing integrated logging to streamline analysis. Similarly, OpenWoZ \cite{Hoffman2016} introduced a cloud-based, runtime-configurable architecture that allowed researchers to modify robot behaviors on the fly. These tools represented significant advances in experimental infrastructure, moving the field away from purely hard-coded scripts. However, they largely targeted users with substantial technical expertise, requiring knowledge of specific programming languages or network protocols to configure and extend.

\section{General vs.\ Domain-Specific Tools}

A recurring tension in the design of HRI tools is the trade-off between specialization and generalizability. Some tools prioritize usability by coupling tightly with specific hardware. WoZ4U \cite{Rietz2021}, for instance, provides an intuitive graphical interface specifically for the Pepper robot, making it accessible to non-technical researchers but unusable with other platforms. Manufacturer-provided software such as Choregraphe \cite{Pot2009} for the NAO robot follows a similar pattern: it offers a powerful visual programming environment but locks the user into a single vendor's ecosystem. Conversely, generic tools such as Ozlab aim to support a wide range of devices but often struggle to remain relevant as hardware evolves \cite{Pettersson2015}. This fragmentation forces labs to switch tools repeatedly or reinvent infrastructure, hindering the accumulation of shared methodological knowledge.

\section{Methodological Critiques}

Beyond software architecture, the methodological rigor of WoZ studies has been the subject of critical review. In a seminal systematic review, Riek \cite{Riek2012} analyzed 54 HRI studies and found widespread inconsistency in how wizard behaviors were controlled and reported. Very few of the surveyed studies reported standardized wizard training or measured wizard error rates, raising concerns about the internal validity of many experiments. This lack of rigor is often exacerbated by the tools themselves: ad hoc or poorly designed interfaces increase the wizard's cognitive load, producing inconsistent timing and behavior that can confound study results.

\section{Research Gaps}

Despite the rich landscape of existing tools, a critical gap remains for a platform that is simultaneously accessible, reproducible, and sustainable. Accessible tools tend to be too platform-specific for broad adoption, while flexible, general-purpose frameworks present a prohibitively high technical barrier. Furthermore, few tools directly address the methodological shortcomings identified by Riek, whether by enforcing standardized protocols or by actively guiding the wizard during execution. HRIStudio aims to fill this void: a web-based, robot-agnostic platform that lowers the barrier to entry for interdisciplinary researchers while embedding methodological best practices directly into the experimental workflow.