August and Sander:
The GUI was already integrated with the HTTP server running on the Raspberry Pi. Through the REST API it could fetch system state such as temperature, humidity, LED status, and the current mode. It could also switch the system modes (AUTO, ON, OFF, GUARDIAN) and manage the camera by turning it on or off, capturing snapshots, and displaying a live video stream. All of this activity is logged.
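As a sketch of this HTTP side, the snippet below shows how the GUI could poll the Pi's REST API with Python's requests library. The base URL and the endpoint paths are assumptions for illustration; the actual routes are whatever the HTTP server on the Raspberry Pi exposes.

import requests

BASE_URL = "http://raspberrypi.local:8080"  # hypothetical host and port

def fetch_status():
    # Hypothetical endpoint returning JSON such as
    # {"temperature": 22.4, "humidity": 41, "led": "on", "mode": "AUTO"}
    resp = requests.get(f"{BASE_URL}/api/status", timeout=2)
    resp.raise_for_status()
    return resp.json()

def set_mode(mode):
    # Hypothetical endpoint; mode is one of AUTO, ON, OFF, GUARDIAN
    resp = requests.post(f"{BASE_URL}/api/mode", json={"mode": mode}, timeout=2)
    resp.raise_for_status()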
The new development is that the GUI now also functions as a ROS2 node. It registers itself in the ROS2 graph under the name /ros2_gui_node and both publishes and subscribes on the topic /driving_mode of type std_msgs/msg/String.
When the user interacts with the GUI buttons for CONTROL or AUTONOMOUS mode, the GUI publishes the selected driving mode on this topic. At the same time, it subscribes to the same topic to update the label (lblDrivingMode) with the current status provided by other ROS2 nodes, for example a car simulation node.
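The snippet below is a minimal rclpy sketch of the ROS2 side described above. The node and topic names match the report; the on_mode_update callback, which stands in for updating lblDrivingMode, is illustrative.

import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class GuiNode(Node):
    def __init__(self):
        super().__init__('ros2_gui_node')
        self.pub = self.create_publisher(String, '/driving_mode', 10)
        self.sub = self.create_subscription(
            String, '/driving_mode', self.on_mode_update, 10)

    def publish_mode(self, mode):
        # Called when the user presses the CONTROL or AUTONOMOUS button
        msg = String()
        msg.data = mode
        self.pub.publish(msg)

    def on_mode_update(self, msg):
        # In the real GUI this would set the text of lblDrivingMode
        self.get_logger().info(f'Driving mode: {msg.data}')

def main():
    rclpy.init()
    node = GuiNode()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()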
This means that the GUI continues to operate with the HTTP server for hardware and camera control, while also participating in the ROS2 ecosystem. The result is a hybrid design where the GUI acts as a central hub, bridging HTTP-based IoT control with ROS2-based robotics communication.
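One common way to realize this hybrid design (an assumption here, not something stated in the report) is to spin the ROS2 node on a background thread, so the GUI event loop stays free for HTTP polling and user input:

import threading
from rclpy.executors import SingleThreadedExecutor

def start_ros_in_background(node):
    # Spin the node on its own executor thread; the GUI thread keeps
    # handling HTTP requests and widget events undisturbed.
    executor = SingleThreadedExecutor()
    executor.add_node(node)
    thread = threading.Thread(target=executor.spin, daemon=True)
    thread.start()
    return executor, thread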
Looking forward, the GUI has placeholders for future sensor integration. Sense HAT values (temperature, humidity, IMU) will be displayed alongside the Arduino sensor data. Once the LiDAR is connected, its distance and mapping data can also be visualized in the GUI and shared as ROS2 topics. The architecture is ready to expand: ROS2 provides distributed communication between nodes, making the system scalable for the LiDAR and Sense HAT.
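As a sketch of that future Sense HAT integration, the node below reads temperature and humidity with the sense_hat library and publishes them as ROS2 topics. The node name, topic names, and 1 Hz rate are assumptions for illustration.

import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32
from sense_hat import SenseHat

class SenseHatNode(Node):
    def __init__(self):
        super().__init__('sense_hat_node')  # hypothetical node name
        self.sense = SenseHat()
        self.temp_pub = self.create_publisher(Float32, '/sense_hat/temperature', 10)
        self.hum_pub = self.create_publisher(Float32, '/sense_hat/humidity', 10)
        self.timer = self.create_timer(1.0, self.publish_readings)  # 1 Hz

    def publish_readings(self):
        self.temp_pub.publish(Float32(data=self.sense.get_temperature()))
        self.hum_pub.publish(Float32(data=self.sense.get_humidity()))

def main():
    rclpy.init()
    rclpy.spin(SenseHatNode())
    rclpy.shutdown()

if __name__ == '__main__':
    main()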
Sondre:
Most of this week was spent preparing for the re-sit exam. I also took some time to read through the OpenCV documentation to strengthen my understanding of how it works. I'll come back stronger next week :)
