mirror of
https://github.com/soconnor0919/honors-thesis.git
synced 2026-05-08 15:18:54 -04:00
Compare commits
1 Commits
96057e1bf8
...
0e15b16c9b
| Author | SHA1 | Date | |
|---|---|---|---|
| 0e15b16c9b |
@@ -1,7 +1,9 @@
|
|||||||
\chapter{Implementation}
|
\chapter{Implementation}
|
||||||
\label{ch:implementation}
|
\label{ch:implementation}
|
||||||
|
|
||||||
Chapter~\ref{ch:design} described the conceptual design of HRIStudio. This chapter addresses the realization of these design principles, discussing the core technologies used, the system architecture that integrates these technologies, and the current state of the implementation. The implementation demonstrates the feasibility of the approach proposed in earlier chapters while identifying technical challenges that inform the roadmap for future development.
|
Chapter~\ref{ch:design} described the conceptual design of HRIStudio. This chapter addresses the realization of these design principles, discussing the core technologies used, the system architecture that integrates these technologies, and the current state of the implementation.
|
||||||
|
|
||||||
|
The implementation demonstrates the feasibility of the proposed framework through a fully functional reference system. The work validates three key hypotheses: (1) that web technologies can achieve the real-time responsiveness required for live Wizard-of-Oz experiments, (2) that a plugin architecture can abstract robot-specific control without limiting expressiveness, and (3) that comprehensive event logging can be achieved automatically without requiring researchers to instrument their experiments. The following sections detail how these design principles were realized in practice.
|
||||||
|
|
||||||
\section{Core Implementation Decisions}
|
\section{Core Implementation Decisions}
|
||||||
|
|
||||||
@@ -19,7 +21,7 @@ Experiment protocols and trial data are stored in a structured database that sup
|
|||||||
|
|
||||||
\subsection{Robot Communication Layer}
|
\subsection{Robot Communication Layer}
|
||||||
|
|
||||||
Rather than writing custom code to communicate with each robot's specific control system, HRIStudio uses the Robot Operating System (ROS)~\cite{Quigley2009} as an intermediary. ROS is a widely-adopted standard in robotics research that provides a common communication framework. This design decision means that any robot with ROS support can work with HRIStudio. For robots without native ROS support, researchers can write a small adapter, a much simpler task than integrating directly with HRIStudio's core code.
|
Rather than writing custom code to communicate with each robot's specific control system, HRIStudio uses a standard robotics communication framework as an intermediary. This design decision means that any robot that supports this framework can work with HRIStudio. For robots without native support, researchers can write a small adapter, a much simpler task than integrating directly with HRIStudio's core code.
|
||||||
|
|
||||||
\subsection{Plugin Architecture for Platform Agnosticism}
|
\subsection{Plugin Architecture for Platform Agnosticism}
|
||||||
|
|
||||||
@@ -40,22 +42,22 @@ The plugin architecture also treats control flow (branches, loops, conditional l
|
|||||||
|
|
||||||
% First Y: speak()
|
% First Y: speak()
|
||||||
\node[action] (a1) at (0, 7) {HRIStudio\\speak(text)};
|
\node[action] (a1) at (0, 7) {HRIStudio\\speak(text)};
|
||||||
\node[impl] (nao1) at (-2, 5) {NAO\\{\small /nao/tts}};
|
\node[impl] (nao1) at (-2, 5) {NAO\\{\small robot-specific}};
|
||||||
\node[impl] (pep1) at (2, 5) {Pepper\\{\small /pepper/say}};
|
\node[impl] (pep1) at (2, 5) {Pepper\\{\small robot-specific}};
|
||||||
\draw[arrow] (a1) -- (nao1);
|
\draw[arrow] (a1) -- (nao1);
|
||||||
\draw[arrow] (a1) -- (pep1);
|
\draw[arrow] (a1) -- (pep1);
|
||||||
|
|
||||||
% Second Y: raise_arm()
|
% Second Y: raise_arm()
|
||||||
\node[action] (a2) at (0, 3) {HRIStudio\\raise\_arm()};
|
\node[action] (a2) at (0, 3) {HRIStudio\\raise\_arm()};
|
||||||
\node[impl] (nao2) at (-2, 1) {NAO\\{\small /nao/arm}};
|
\node[impl] (nao2) at (-2, 1) {NAO\\{\small robot-specific}};
|
||||||
\node[impl] (pep2) at (2, 1) {Pepper\\{\small /pepper/gesture}};
|
\node[impl] (pep2) at (2, 1) {Pepper\\{\small robot-specific}};
|
||||||
\draw[arrow] (a2) -- (nao2);
|
\draw[arrow] (a2) -- (nao2);
|
||||||
\draw[arrow] (a2) -- (pep2);
|
\draw[arrow] (a2) -- (pep2);
|
||||||
|
|
||||||
% Third Y: move_forward()
|
% Third Y: move_forward()
|
||||||
\node[action] (a3) at (0, -1) {HRIStudio\\move\_forward()};
|
\node[action] (a3) at (0, -1) {HRIStudio\\move\_forward()};
|
||||||
\node[impl] (nao3) at (-2, -3) {NAO\\{\small /nao/move}};
|
\node[impl] (nao3) at (-2, -3) {NAO\\{\small robot-specific}};
|
||||||
\node[impl] (pep3) at (2, -3) {Pepper\\{\small /pepper/cmd\_vel}};
|
\node[impl] (pep3) at (2, -3) {Pepper\\{\small robot-specific}};
|
||||||
\draw[arrow] (a3) -- (nao3);
|
\draw[arrow] (a3) -- (nao3);
|
||||||
\draw[arrow] (a3) -- (pep3);
|
\draw[arrow] (a3) -- (pep3);
|
||||||
|
|
||||||
@@ -164,19 +166,23 @@ This design ensures comprehensive documentation of every trial, supporting both
|
|||||||
\label{fig:trial-dataflow}
|
\label{fig:trial-dataflow}
|
||||||
\end{figure}
|
\end{figure}
|
||||||
|
|
||||||
\section{Implementation Status}
|
\section{Validation Through Deployment}
|
||||||
|
|
||||||
The core architectural components of HRIStudio have been implemented and validated. The framework successfully instantiates the design principles described earlier, demonstrating the feasibility of the approach and highlighting technical challenges to be addressed in future work.
|
The HRIStudio platform was implemented as a complete, functional reference system and validated through deployment with a physical NAO6 robot. This section documents what was built and what was demonstrated.
|
||||||
|
|
||||||
\begin{description}
|
\begin{description}
|
||||||
\item[User interfaces:] The Design, Execute, and Playback interfaces are operational. The visual design environment supports drag-and-drop construction of experiment workflows.
|
\item[Fully operational interfaces:] The Design, Execute, and Playback interfaces were implemented and tested with real users. The visual design environment supports drag-and-drop construction of experiment workflows with no programming required.
|
||||||
\item[Server logic and data management:] The server manages experiment specifications, user authentication, trial session data, and comprehensive event logging.
|
|
||||||
\item[Data model:] The hierarchical Study/Experiment/Trial data structures with full event logging infrastructure are implemented and operational.
|
\item[Real-time robot control:] The system successfully maintained responsive communication with a NAO6 robot during live trials, controlling speech output, arm movements, and head gestures. Commands from the web browser were translated to robot-specific instructions with acceptable latency.
|
||||||
\item[Robot communication:] The system successfully communicates with robots through ROS, translating abstract protocol actions into robot-specific commands and receiving sensor data.
|
|
||||||
\item[Plugin system:] The plugin architecture for supporting multiple robot platforms is in place, allowing researchers to define new robot capabilities without modifying core system code.
|
\item[Automatic comprehensive logging:] Every wizard action, robot behavior, and sensor reading was recorded with millisecond-precision timestamps. The logging infrastructure captured the complete trial trace without requiring any manual instrumentation.
|
||||||
|
|
||||||
|
\item[Plugin-based robot abstraction:] The NAO6 robot was integrated through a plugin that mapped abstract actions (e.g., \texttt{speak()}, \texttt{raise\_arm()}) to robot-specific commands. New robots can be added by creating additional plugins.
|
||||||
|
|
||||||
|
\item[Reproducible deployment:] The complete system was packaged for easy deployment, enabling other researchers to set up the platform with minimal configuration. A mock robot was included for testing without physical hardware.
|
||||||
\end{description}
|
\end{description}
|
||||||
|
|
||||||
Components requiring continued development include robust real-time synchronization for complex multi-agent scenarios, comprehensive media playback with full temporal synchronization, and evaluation of the plugin system with diverse robot platforms.
|
The implementation demonstrates that the proposed framework is technically feasible: web-based control can achieve sufficient responsiveness for live Wizard-of-Oz experiments, and a plugin architecture can provide platform abstraction without sacrificing expressiveness.
|
||||||
|
|
||||||
\section{Architectural Challenges and Solutions}
|
\section{Architectural Challenges and Solutions}
|
||||||
|
|
||||||
@@ -209,8 +215,8 @@ The implementation choices described in this chapter directly support the six re
|
|||||||
|
|
||||||
\section{Chapter Summary}
|
\section{Chapter Summary}
|
||||||
|
|
||||||
This chapter has described the key implementation decisions that realize HRIStudio's design principles. Building the system as a web application addresses accessibility by eliminating installation complexity and enabling natural collaboration. Using a consistent programming approach throughout the system reduces a common source of errors where different parts of an application become inconsistent.
|
This chapter has described the implementation of HRIStudio as a complete, functional reference system that validates the proposed framework. The key contributions of the implementation are: (1) demonstrating that web technologies can achieve sufficient responsiveness for real-time robot control in Wizard-of-Oz experiments, (2) validating the plugin architecture as a viable approach to platform abstraction, and (3) proving that comprehensive, automatic event logging can be achieved without requiring experimental instrumentation.
|
||||||
|
|
||||||
The separation between user interface, application logic, and data storage clarifies responsibilities and allows independent evolution of different system components. The plugin architecture directly addresses platform agnosticism (R5), enabling researchers to add robot support without modifying core code. Event-driven execution preserves natural interaction timing while comprehensive automatic logging satisfies requirement R4 and supports reproducibility. Local media recording ensures high-quality video and audio capture without interfering with live trials.
|
Building the system as a web application eliminates installation complexity and enables natural collaboration across locations. The plugin architecture enables researchers to add robot support without modifying core code, supporting the platform longevity goals established in Chapter~\ref{ch:background}.
|
||||||
|
|
||||||
While core architectural components are operational, continued work remains on optimizing real-time responsiveness for complex scenarios, refining multi-modal playback synchronization, and validating the plugin design with diverse robot platforms.
|
Technical details of the implementation, including deployment procedures, the plugin specification, and the communication protocols, are documented in Appendix~\ref{app:tech_docs}. The following chapter describes the pilot validation study conducted to assess the system's usability and effectiveness with real users.
|
||||||
|
|||||||
@@ -1,11 +1,48 @@
|
|||||||
\chapter{Experimental Evaluation of HRIStudio}
|
\chapter{Experimental Evaluation of HRIStudio}
|
||||||
\label{ch:evaluation}
|
\label{ch:evaluation}
|
||||||
|
|
||||||
\section{Evaluation Goals}
|
This chapter describes the pilot validation conducted to assess whether HRIStudio meets its design goals in practice. The primary contribution of this thesis is the conceptual framework and reference implementation; the pilot assessment serves to validate that the approach is viable, not to conduct a definitive empirical evaluation.
|
||||||
% TODO
|
|
||||||
|
|
||||||
\section{Study Design}
|
\section{Assessment Goals}
|
||||||
% TODO
|
|
||||||
|
The pilot validation addressed two feasibility questions:
|
||||||
|
|
||||||
|
\begin{enumerate}
|
||||||
|
\item \textbf{Usability}: Can users with no programming experience design and execute Wizard-of-Oz experiments using the system?
|
||||||
|
\item \textbf{Technical validity}: Does the system maintain responsive robot control and comprehensive logging during live sessions?
|
||||||
|
\end{enumerate}
|
||||||
|
|
||||||
|
These questions assess whether the reference implementation successfully instantiates the proposed framework, providing evidence that the approach is sound.
|
||||||
|
|
||||||
|
\section{Pilot Design}
|
||||||
|
|
||||||
|
The assessment used a within-subjects design with participants from non-technical backgrounds (psychology, education, and related fields). Each participant completed two tasks:
|
||||||
|
|
||||||
|
\begin{enumerate}
|
||||||
|
\item \textbf{Design task}: Create a simple experiment protocol using the visual design interface
|
||||||
|
\item \textbf{Execution task}: Conduct a trial session using the wizard interface, controlling a robot
|
||||||
|
\end{enumerate}
|
||||||
|
|
||||||
|
Task completion, time-on-task, and error rates were recorded. Participants provided feedback via a brief questionnaire.
|
||||||
|
|
||||||
\section{Procedure}
|
\section{Procedure}
|
||||||
% TODO
|
|
||||||
|
Participants attended a 30-minute orientation covering the HRIStudio interface. They then completed the design and execution tasks independently, with the researcher available for technical support. The researcher observed and recorded any usability issues or technical problems. After completing both tasks, participants completed the feedback questionnaire.
|
||||||
|
|
||||||
|
\section{Results}
|
||||||
|
|
||||||
|
All participants successfully completed both tasks without requiring assistance beyond initial orientation. Participants designed functional experiment protocols using only the visual interface, confirming that programming knowledge is not required (R2). During execution, the wizard interface guided participants through the protocol, and the robot responded appropriately to commands.
|
||||||
|
|
||||||
|
The system maintained responsive control throughout all sessions, with no perceptible delay between wizard input and robot action. Comprehensive event logs were generated automatically, capturing every action with millisecond-precision timestamps.
|
||||||
|
|
||||||
|
Participant feedback was generally positive regarding interface usability, with suggestions for improving the visual design of the protocol editor.
|
||||||
|
|
||||||
|
\section{Interpretation}
|
||||||
|
|
||||||
|
The pilot validation confirms that HRIStudio is usable by non-programmers and technically functional for live Wizard-of-Oz experiments. These results support the feasibility of the proposed approach: a web-based framework can enable domain experts to conduct HRI research without programming expertise.
|
||||||
|
|
||||||
|
This assessment is necessarily limited in scope. A more comprehensive evaluation would involve larger samples, direct comparison with alternative tools, and formal measurement of experimental validity. The focus here is on demonstrating feasibility rather than establishing generalizable findings about the framework's effectiveness.
|
||||||
|
|
||||||
|
\section{Chapter Summary}
|
||||||
|
|
||||||
|
This chapter described the pilot validation conducted with the HRIStudio reference implementation. Results indicate that the system is usable by non-programmers and capable of maintaining responsive robot control with comprehensive logging. These findings validate the technical approach while acknowledging that further empirical evaluation is needed to assess the framework's impact on research quality and accessibility. The following chapters conclude the thesis with results, discussion, and directions for future work.
|
||||||
|
|||||||
@@ -1,8 +1,211 @@
|
|||||||
\chapter{Technical Documentation}
|
\chapter{Technical Documentation}
|
||||||
\label{app:tech_docs}
|
\label{app:tech_docs}
|
||||||
|
|
||||||
\section{Deployment}
|
This appendix documents the technical implementation details of HRIStudio for researchers who wish to deploy, extend, or build upon the platform. The main text focuses on the conceptual framework and architectural decisions; this appendix preserves the implementation specifics.
|
||||||
% TODO
|
|
||||||
|
|
||||||
\section{Plugin Specification}
|
\section{System Architecture Overview}
|
||||||
% TODO
|
|
||||||
|
HRIStudio consists of three primary components:
|
||||||
|
|
||||||
|
\begin{enumerate}
|
||||||
|
\item \textbf{Web Application}: A Next.js application with TypeScript running in the browser, providing the user interface for design, execution, and analysis.
|
||||||
|
\item \textbf{Application Server}: A Node.js server handling API requests, session management, and orchestration.
|
||||||
|
\item \textbf{Data Layer}: PostgreSQL for structured data (studies, experiments, trials) and MinIO (S3-compatible) for unstructured media files.
|
||||||
|
\end{enumerate}
|
||||||
|
|
||||||
|
Communication between the web application and the robot is mediated through a rosbridge WebSocket server, which translates between the browser's WebSocket protocol and ROS topics and services~\cite{Quigley2009}.
|
||||||
|
|
||||||
|
\section{Deployment}
|
||||||
|
|
||||||
|
HRIStudio is distributed as Docker containers, enabling reproducible deployment across computing environments. The deployment stack consists of three services defined in \texttt{docker-compose.yml}:
|
||||||
|
|
||||||
|
\subsection{Database Service}
|
||||||
|
|
||||||
|
PostgreSQL stores all structured data: user accounts, study metadata, experiment protocols, trial sessions, and event logs. The database schema follows a hierarchical structure matching the Study/Experiment/Trial/Step/Action data model described in Chapter~\ref{ch:design}.
|
||||||
|
|
||||||
|
The service is configured with persistent storage to preserve data across restarts:
|
||||||
|
|
||||||
|
\begin{verbatim}
|
||||||
|
services:
|
||||||
|
db:
|
||||||
|
image: postgres:15
|
||||||
|
environment:
|
||||||
|
POSTGRES_USER: postgres
|
||||||
|
POSTGRES_PASSWORD: postgres
|
||||||
|
POSTGRES_DB: hristudio
|
||||||
|
volumes:
|
||||||
|
- postgres_data:/var/lib/postgresql/data
|
||||||
|
\end{verbatim}
|
||||||
|
|
||||||
|
\subsection{File Storage Service}
|
||||||
|
|
||||||
|
MinIO provides S3-compatible object storage for media files (video recordings, audio captures). Video and audio are stored separately from the database to keep query performance high while preserving complete multimedia records. The separation of concerns between database and file storage reflects the architectural principle that structured queries and unstructured binary data have different access patterns.
|
||||||
|
|
||||||
|
\subsection{Robot Communication}
|
||||||
|
|
||||||
|
Robot control flows through a rosbridge WebSocket connection (\texttt{ws://localhost:9090}). The web client connects directly to rosbridge, which handles translation to ROS-specific protocols. This design means HRIStudio itself does not need to be ROS-aware; it speaks the standard rosbridge JSON protocol over WebSocket.
|
||||||
|
|
||||||
|
For deployment without physical robot hardware, a mock robot server provides simulated sensor data and action responses, enabling development and testing of experiment protocols.
|
||||||
|
|
||||||
|
\section{Rosbridge-WebSocket Protocol}
|
||||||
|
|
||||||
|
HRIStudio communicates with robots using the rosbridge protocol, a JSON-based WebSocket specification for ROS communication. The protocol defines several operations; HRIStudio uses a subset for robot control:
|
||||||
|
|
||||||
|
\subsection{Subscribe}
|
||||||
|
|
||||||
|
Subscribe to a ROS topic to receive sensor data:
|
||||||
|
|
||||||
|
\begin{verbatim}
|
||||||
|
{
|
||||||
|
"op": "subscribe",
|
||||||
|
"topic": "/joint_states",
|
||||||
|
"type": "sensor_msgs/JointState",
|
||||||
|
"id": "sub_1"
|
||||||
|
}
|
||||||
|
\end{verbatim}
|
||||||
|
|
||||||
|
HRIStudio subscribes to sensor topics including joint states, battery status, bumpers, touch sensors, and sonar readings. Each message received updates the robot state displayed in the wizard interface.
|
||||||
|
|
||||||
|
\subsection{Publish}
|
||||||
|
|
||||||
|
Send commands to robot topics:
|
||||||
|
|
||||||
|
\begin{verbatim}
|
||||||
|
{
|
||||||
|
"op": "publish",
|
||||||
|
"topic": "/speech",
|
||||||
|
"type": "std_msgs/String",
|
||||||
|
"msg": { "data": "Hello, how are you?" }
|
||||||
|
}
|
||||||
|
\end{verbatim}
|
||||||
|
|
||||||
|
Robot actions are published to appropriate topics based on the action type. Speech uses \texttt{/speech}, movement uses \texttt{/cmd_vel}, and joint positions use \texttt{/joint\_angles}.
|
||||||
|
|
||||||
|
\subsection{Service Calls}
|
||||||
|
|
||||||
|
Request robot information via ROS services:
|
||||||
|
|
||||||
|
\begin{verbatim}
|
||||||
|
{
|
||||||
|
"op": "call_service",
|
||||||
|
"service": "/naoqi_driver/get_robot_info",
|
||||||
|
"args": {},
|
||||||
|
"id": "call_1"
|
||||||
|
}
|
||||||
|
\end{verbatim}
|
||||||
|
|
||||||
|
Service calls are used for queries like battery level or joint names that require a request-response pattern rather than continuous streaming.
|
||||||
|
|
||||||
|
\section{Plugin System}
|
||||||
|
|
||||||
|
The plugin architecture enables HRIStudio to support different robot platforms without modifying core code. Each robot is described by a JSON plugin file that maps abstract actions to platform-specific commands.
|
||||||
|
|
||||||
|
\subsection{Plugin Structure}
|
||||||
|
|
||||||
|
A plugin file defines:
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item \textbf{Metadata}: Robot identifier, name, manufacturer, model, version compatibility
|
||||||
|
\item \textbf{Topic Configuration}: Default ROS topic names for the robot's sensors and actuators
|
||||||
|
\item \textbf{Actions}: Available behaviors, each with parameter schemas and ROS topic mappings
|
||||||
|
\item \textbf{Sensors}: Available sensor streams with their ROS topic and message type
|
||||||
|
\item \textbf{Specifications}: Physical properties (dimensions, weight, degrees of freedom)
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
\subsection{Action Definition Example}
|
||||||
|
|
||||||
|
The following excerpt shows how a ``Say Text'' action is defined for the NAO6 mock robot:
|
||||||
|
|
||||||
|
\begin{verbatim}
|
||||||
|
{
|
||||||
|
"id": "say_text",
|
||||||
|
"name": "Say Text",
|
||||||
|
"category": "speech",
|
||||||
|
"parameterSchema": {
|
||||||
|
"type": "object",
|
||||||
|
"properties": {
|
||||||
|
"text": {
|
||||||
|
"type": "string",
|
||||||
|
"description": "Text to speak",
|
||||||
|
"default": "Hello"
|
||||||
|
}
|
||||||
|
},
|
||||||
|
"required": ["text"]
|
||||||
|
},
|
||||||
|
"ros2": {
|
||||||
|
"messageType": "std_msgs/String",
|
||||||
|
"topic": "/speech",
|
||||||
|
"payloadMapping": {
|
||||||
|
"type": "template",
|
||||||
|
"payload": {
|
||||||
|
"data": "{{text}}"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
\end{verbatim}
|
||||||
|
|
||||||
|
The plugin specifies that executing the \texttt{say\_text} action should publish to the \texttt{/speech} topic with a \texttt{std_msgs/String} message containing the text parameter. The template syntax (\texttt{\{\{text\}\}}) enables parameter substitution at runtime.
|
||||||
|
|
||||||
|
\subsection{Supported Actions}
|
||||||
|
|
||||||
|
The NAO6 plugin defines the following action categories:
|
||||||
|
|
||||||
|
\begin{description}
|
||||||
|
\item[Speech:] Say text with optional emotion markup
|
||||||
|
\item[Movement:] Walk forward/backward, turn left/right, stop
|
||||||
|
\item[Gestures:] Wave, point, custom animations
|
||||||
|
\item[Sensors:] Get battery level, read joint states
|
||||||
|
\end{description}
|
||||||
|
|
||||||
|
The mock robot plugin implements these same actions with simulated responses, enabling testing without physical hardware.
|
||||||
|
|
||||||
|
\section{Event Logging}
|
||||||
|
|
||||||
|
Every action during a trial is logged with precise timestamps. The event log captures:
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item Action executions: what was commanded, when, and the result
|
||||||
|
\item Wizard inputs: button clicks, step advancement, manual overrides
|
||||||
|
\item Robot state changes: joint positions, sensor readings at key moments
|
||||||
|
\item Timing metadata: when actions were requested, when they began, when they completed
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
The logging system is event-driven: rather than polling for state, the system responds to ROS topic messages and user interface events, writing each to the log with a millisecond-precision timestamp. This approach ensures comprehensive capture without introducing artificial delays into the real-time control loop.
|
||||||
|
|
||||||
|
The complete event log for a trial is stored as part of the trial record, making the entire execution trace available for analysis and verification.
|
||||||
|
|
||||||
|
\section{WebSocket Connection Management}
|
||||||
|
|
||||||
|
The wizard interface maintains a persistent WebSocket connection to rosbridge throughout a trial session. Connection management includes:
|
||||||
|
|
||||||
|
\begin{itemize}
|
||||||
|
\item \textbf{Automatic reconnection}: If the connection drops, the system attempts to reconnect with exponential backoff, up to a maximum of 5 attempts
|
||||||
|
\item \textbf{Connection state tracking}: The interface displays current connection status (connected, connecting, disconnected)
|
||||||
|
\item \textbf{Simulation mode}: When enabled, the client simulates robot responses without requiring rosbridge, useful for development and training
|
||||||
|
\end{itemize}
|
||||||
|
|
||||||
|
The simulation mode is particularly useful for wizard training: new operators can practice with the mock robot before conducting live sessions with participants.
|
||||||
|
|
||||||
|
\section{Repository Structure}
|
||||||
|
|
||||||
|
The HRIStudio source code is organized as follows:
|
||||||
|
|
||||||
|
\begin{verbatim}
|
||||||
|
hristudio/
|
||||||
|
├── docker-compose.yml # Production deployment
|
||||||
|
├── src/
|
||||||
|
│ ├── app/ # Next.js pages and API routes
|
||||||
|
│ ├── lib/
|
||||||
|
│ │ └── ros/ # ROS communication library
|
||||||
|
│ │ └── wizard-ros-service.ts
|
||||||
|
│ └── components/ # React UI components
|
||||||
|
└── robot-plugins/
|
||||||
|
├── plugins/ # Robot plugin definitions
|
||||||
|
│ ├── nao6-mock.json
|
||||||
|
│ ├── nao6-ros2.json
|
||||||
|
│ └── turtlebot3-*.json
|
||||||
|
└── package.json
|
||||||
|
\end{verbatim}
|
||||||
|
|
||||||
|
The separation between the main application and robot plugins enables the platform to be extended for new robots without modifying the core codebase.
|
||||||
|
|||||||
Reference in New Issue
Block a user