% Mirror of https://github.com/soconnor0919/honors-thesis.git (synced 2026-02-04); commit: chapters 1-3 drafted.
\label{ch:intro}
\section{Motivation}
% TODO
To build the social robots of tomorrow, researchers must find ways to convincingly simulate them today. The process of designing and optimizing interactions between human and robot is essential to the field of Human-Robot Interaction (HRI), a discipline dedicated to ensuring these technologies are safe, effective, and accepted by the public. However, current practices for prototyping these interactions are often hindered by complex technical requirements and inconsistent methodologies.
In a typical social robotics interaction, a robot operates autonomously based on pre-programmed behaviors. Because human interaction is inherently unpredictable, pre-programmed autonomy often fails to respond appropriately to subtle social cues, causing the interaction to degrade. To overcome this, researchers employ the Wizard-of-Oz (WoZ) technique, in which a human operator--the ``wizard''--controls the robot's actions in real time, creating the illusion of autonomy. This allows for rapid prototyping and testing of interaction designs before the underlying artificial intelligence has fully matured.
Despite its versatility, WoZ research faces two critical challenges. First, a high technical barrier prevents many non-programmers, such as experts in psychology or sociology, from conducting their own studies without engineering support. Second, the hardware landscape is highly fragmented: researchers frequently build bespoke, ``one-off'' control interfaces for specific robots and specific experiments. These ad-hoc tools are rarely shared, making it difficult for the scientific community to replicate studies or verify findings. This has contributed to a replication crisis in HRI, in which a lack of standardized tooling undermines the reliability of the field's body of knowledge.
\section{HRIStudio Overview}
% TODO
To address these challenges, this thesis presents HRIStudio, a web-based platform designed to manage the entire lifecycle of a WoZ experiment: from interaction design, through live execution, to final analysis.
HRIStudio is built on three core design principles: disciplinary accessibility, scientific reproducibility, and platform sustainability. To achieve accessibility, the platform replaces complex code with a visual, drag-and-drop interface, allowing domain experts to design interaction flows much like creating a storyboard. To ensure reproducibility, HRIStudio enforces a structured experimental workflow that acts as a ``smart co-pilot'' for the wizard, guiding the operator through a standardized script to minimize human error while automatically logging synchronized data streams for analysis. Finally, unlike tools tightly coupled to specific hardware, HRIStudio adopts a robot-agnostic architecture to ensure sustainability, so the platform remains a viable tool for the community even as individual robot platforms become obsolete.
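The robot-agnostic principle can be illustrated with an adapter pattern: experiment steps are expressed against a platform-neutral interface rather than any one robot's API. The following is a minimal TypeScript sketch under assumed names; the interface, class, and function names are hypothetical illustrations, not HRIStudio's actual API.

```typescript
// Hypothetical sketch of a robot-agnostic adapter layer (illustrative
// names only; not HRIStudio's actual API). Each robot platform is
// wrapped behind one small interface, so experiment designs never
// reference hardware-specific calls.
interface RobotAdapter {
  readonly platform: string;
  say(text: string): string;   // returns the command actually dispatched
  move(gesture: string): string;
}

// A mock adapter standing in for a concrete platform integration.
class MockAdapter implements RobotAdapter {
  readonly platform = "mock";
  say(text: string): string {
    return `mock:say(${text})`;
  }
  move(gesture: string): string {
    return `mock:move(${gesture})`;
  }
}

// An experiment step targets the interface, not a specific robot, so the
// same design runs unchanged when a different adapter is plugged in.
function runStep(
  robot: RobotAdapter,
  step: { action: "say" | "move"; arg: string },
): string {
  return step.action === "say" ? robot.say(step.arg) : robot.move(step.arg);
}

const robot = new MockAdapter();
console.log(runStep(robot, { action: "say", arg: "Hello" })); // mock:say(Hello)
```

Under this sketch, supporting a new robot means writing one adapter, while every existing experiment design continues to work unmodified.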
\section{Research Objectives}
% TODO
The primary objective of this work is to demonstrate that a unified, web-based software framework can significantly improve both the accessibility and reproducibility of HRI research. Specifically, this thesis aims to develop a production-ready platform, validate its accessibility for non-programmers, and assess its impact on experimental rigor.
First, this work translates the foundational architecture proposed in prior publications into a stable, full-featured software platform capable of supporting real-world experiments. Second, through a formal user study, we evaluate whether HRIStudio allows participants with no robotics experience to successfully design and execute a robot interaction, comparing their performance against industry-standard software. Finally, we quantify the impact of the platform's guided execution features on the consistency of wizard behavior and the accuracy of data collection.
This work builds upon preliminary concepts reported in two peer-reviewed publications \cite{OConnor2024, OConnor2025}. It extends that research by delivering the complete implementation of the system and a comprehensive empirical evaluation of its efficacy.
\section{Human-Robot Interaction and Wizard-of-Oz}
% TODO
\section{Project Context}
% TODO
HRI is a multidisciplinary field dedicated to understanding, designing, and evaluating robotic systems for use by or with humans. Unlike industrial robotics, where safety often means physical separation, social robotics envisions a future where robots operate in shared spaces, collaborating with people in roles ranging from healthcare assistants and educational tutors to customer service agents.
For these interactions to be effective, robots must exhibit social intelligence. They must recognize and respond to human social cues--such as speech, gaze, and gesture--in a manner that is natural and intuitive. However, developing the artificial intelligence required for fully autonomous social interaction is an immense technical challenge. Perception systems often struggle in noisy environments, and natural language understanding remains an area of active research.
To bridge the gap between current technical limitations and desired interaction capabilities, researchers employ the WoZ technique. In a WoZ experiment, a human operator (the ``wizard'') remotely controls the robot's behaviors, unbeknownst to the study participant. To the participant, the robot appears to act autonomously. This methodology allows researchers to test hypotheses about human responses to robot behaviors without first solving the underlying engineering challenges.
\section{Prior Work}
% TODO
This thesis represents the culmination of a multi-year research effort to address critical infrastructure gaps in the HRI community. The ideas presented here build upon a foundational trajectory established through two peer-reviewed publications.
We first introduced the concept for HRIStudio as a Late-Breaking Report at the 2024 IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) \cite{OConnor2024}. In that work, we identified the lack of accessible tooling as a primary barrier to entry in HRI and proposed the high-level vision of a web-based, collaborative platform. We established the core requirements for the system: disciplinary accessibility, robot agnosticism, and reproducibility.
Following the initial proposal, we published the detailed system architecture and preliminary prototype as a full paper at RO-MAN 2025 \cite{OConnor2025}. That publication validated the technical feasibility of our web-based approach, detailing the communication protocols and data models necessary to support real-time robot control using standard web technologies.
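The kind of data model this involves can be sketched briefly. The TypeScript fragment below shows one plausible shape for a timestamped event envelope that lets independent wizard, robot, and participant streams be merged and ordered during analysis; the field and function names are illustrative assumptions for exposition, not the protocol published in \cite{OConnor2025}.

```typescript
// Hypothetical sketch of a timestamped event envelope for synchronized
// logging over a web transport. Field names are illustrative
// assumptions, not the published HRIStudio data model.
interface TrialEvent {
  trialId: string;
  seq: number;                 // monotonically increasing per-trial counter
  timestampMs: number;         // shared clock, milliseconds
  source: "wizard" | "robot" | "participant";
  type: string;                // e.g. "button_press", "speech_start"
  payload: Record<string, unknown>;
}

// Factory that stamps every event with a sequence number and timestamp,
// so streams logged independently can be merged deterministically.
function makeLogger(trialId: string, clock: () => number) {
  let seq = 0;
  return (
    source: TrialEvent["source"],
    type: string,
    payload: Record<string, unknown>,
  ): TrialEvent => ({
    trialId,
    seq: seq++,
    timestampMs: clock(),
    source,
    type,
    payload,
  });
}

// Usage with a fake clock advancing 100 ms per event:
let t = 0;
const log = makeLogger("trial-01", () => (t += 100));
const a = log("wizard", "button_press", { button: "greet" });
const b = log("robot", "speech_start", { text: "Hello" });
console.log(a.seq, b.seq, b.timestampMs); // 0 1 200
```

Stamping a shared sequence number and clock at write time is what makes the post-hoc alignment of wizard actions and robot responses trivial during analysis.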
While those prior publications established the ``what'' and the ``how'' of HRIStudio, this thesis focuses on the realization and validation of the platform. We extend our previous research in two key ways. First, we move beyond prototypes to deliver a complete, production-ready software platform (v1.0), resolving complex engineering challenges related to stability, latency, and deployment. Second, and crucially, we provide the first rigorous user study of the platform. By comparing HRIStudio against industry-standard tools, this work provides empirical evidence to support our claims of improved accessibility and experimental consistency.
\label{ch:related_work}
\section{Existing Frameworks}
% TODO
The HRI community has a long history of developing custom tools to support WoZ studies. Early efforts focused on providing robust interfaces for technical users. For example, Polonius \cite{Lu2011} was designed to give robotics engineers a flexible way to create experiments for their collaborators, emphasizing integrated logging to streamline analysis. Similarly, OpenWoZ \cite{Hoffman2016} introduced a cloud-based, runtime-configurable architecture that allowed researchers to modify robot behaviors on the fly. These tools represented significant advancements in experimental infrastructure, moving the field away from purely hard-coded scripts. However, they largely targeted users with significant technical expertise, requiring knowledge of specific programming languages or network protocols to configure and extend.
\section{General vs. Domain-Specific Tools}
% TODO
A recurring tension in the design of HRI tools is the trade-off between specialization and generalizability. Some tools prioritize usability by coupling tightly with specific hardware. WoZ4U \cite{Rietz2021}, for instance, provides an intuitive graphical interface specifically for the Pepper robot, making it accessible to non-technical researchers but unusable for other platforms. Manufacturer-provided software like Choregraphe \cite{Pot2009} for the NAO robot follows a similar pattern: it offers a powerful visual programming environment but locks the user into a single vendor's ecosystem. Conversely, generic tools like Ozlab seek to support a wide range of devices but often struggle to maintain relevance as hardware evolves \cite{Pettersson2015}. This fragmentation forces labs to constantly switch tools or reinvent infrastructure, hindering the accumulation of shared methodological knowledge.
\section{Methodological Critiques}
% TODO
Beyond software architecture, the methodological rigor of WoZ studies has been a subject of critical review. In a seminal systematic review, Riek \cite{Riek2012} analyzed 54 HRI studies and uncovered a widespread lack of consistency in how wizard behaviors were controlled and reported. The review noted that very few researchers reported standardized wizard training or measured wizard error rates, raising concerns about the internal validity of many experiments. This lack of rigor is often exacerbated by the tools themselves; when interfaces are ad-hoc or poorly designed, they increase the cognitive load on the wizard, leading to inconsistent timing and behavior that can confound study results.
\section{Research Gaps}
% TODO