{"id":11670,"date":"2025-10-27T17:39:43","date_gmt":"2025-10-27T16:39:43","guid":{"rendered":"https:\/\/dronesonen.usn.no\/?p=11670"},"modified":"2025-12-08T17:11:52","modified_gmt":"2025-12-08T16:11:52","slug":"astrorover-week-10-tony-stark-at-home","status":"publish","type":"post","link":"https:\/\/dronesonen.usn.no\/?p=11670","title":{"rendered":"ASTROROVER &#8211; WEEK 10 &#8211; TONY STARK at Home"},"content":{"rendered":"\n<pre class=\"wp-block-code\"><code>ign topic -i \/model\/vehicle_blue\/odometry<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">August:<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Implementing IMU for Autonomous Car<\/h3>\n\n\n\n<p>Finally, everything is ready! This week will be like shark week on Discovery Channel, just instead of sharks, there will be autonomous cars. This will be frickin awesome. I have done a lot of research. I will use this github repo: <a href=\"https:\/\/github.com\/adityakamath\/sensehat_ros\">https:\/\/github.com\/adityakamath\/sensehat_ros<\/a>. This have everything we will need. We will use the IMU later for mapping. Sander is going strong on making the LiDAR work, and we will collaborate later on to make the car fully autonomous. The Ultrasonic Sensor is more or less ready. For the autonomous driving we envision, we need IMU data and LiDAR. The car will after future iterations do mapping and A*.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">IMU &#8211; Iteration<\/h3>\n\n\n\n<p>The goal of this iterations is to get IMU data visualized in RVIZ2 the RPI4 Sense HAT moves. With the github repo I mentioned above, we will get the ROS2 node and topic to be able to do this. We will see if it is plug and play. Time will tell. 
<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"749\" height=\"189\" src=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/smarte_system.drawio6.png\" alt=\"\" class=\"wp-image-12028\" srcset=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/smarte_system.drawio6.png 749w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/smarte_system.drawio6-300x76.png 300w\" sizes=\"auto, (max-width: 749px) 100vw, 749px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"745\" height=\"185\" src=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/smarte_system.drawio1.png\" alt=\"\" class=\"wp-image-11800\" srcset=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/smarte_system.drawio1.png 745w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/smarte_system.drawio1-300x74.png 300w\" sizes=\"auto, (max-width: 745px) 100vw, 745px\" \/><\/figure>\n\n\n\n<p>I started of course by cloning the repo in \/ros2_ws\/src<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>git clone https:\/\/github.com\/adityakamath\/sensehat_ros.git<\/code><\/pre>\n\n\n\n<p>I then built the workspace and sourced the files. The code uses clpy.qos.qos_profile_sensor_data to be able to deal with high frequent sensor voltage.<\/p>\n\n\n\n<p>I then launched the node:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>ros2 launch sensehat_ros sensehat_launch.py<\/code><\/pre>\n\n\n\n<p>I checked the ros2 topic list, and the topic was up. I veryfied that the node was publishing, by doing ros2 topic echo \/IMU. Everything worked well. I then wanted to try to see the visualization in rviz2. I Opened rviz2, and added \/IMU, but nothing happend. 
The terminal said:<br><\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"672\" src=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-164-1024x672.png\" alt=\"\" class=\"wp-image-11774\" srcset=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-164-1024x672.png 1024w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-164-300x197.png 300w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-164-768x504.png 768w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-164.png 1151w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>This happened because the default reliability policy was set to reliable, but the code demanded best effort. After fixing this, I was challenged by the next task. I was in the wrong fixed frame, and because of this, I received this message in the terminal:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"740\" height=\"484\" src=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-156.png\" alt=\"\" class=\"wp-image-11724\" srcset=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-156.png 740w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-156-300x196.png 300w\" sizes=\"auto, (max-width: 740px) 100vw, 740px\" \/><\/figure>\n\n\n\n<p>After realizing this, I tried moving the RPi4 around. Nothing happened. When I pitched, rolled or yawed the RPi4, nothing happened. This was fixed by:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>ros2 run tf2_ros static_transform_publisher 0 0 0 0 0 0 world sensehat_frame\n<\/code><\/pre>\n\n\n\n<p>Without this, there will be no movement when trying to visualize IMU in rviz2. 
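<\/p>\n\n\n\n<p>A side note on those six numbers: the classic six-argument form of static_transform_publisher is x y z yaw pitch roll, and tf2 turns the Euler angles into a quaternion before broadcasting them on \/tf_static. A small pure-Python sketch of that conversion (the helper names are mine, not from tf2):<\/p>

```python
import math

# Sketch (helper names are mine): convert the yaw/pitch/roll arguments of
# static_transform_publisher into the quaternion (x, y, z, w) it broadcasts.
def euler_to_quaternion(yaw, pitch, roll):
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    return (
        sr * cp * cy - cr * sp * sy,  # x
        cr * sp * cy + sr * cp * sy,  # y
        cr * cp * sy - sr * sp * cy,  # z
        cr * cp * cy + sr * sp * sy,  # w
    )

def rotate(q, v):
    # Rotate vector v by quaternion q: v' = v + 2w(u x v) + 2(u x (u x v)),
    # where u is the vector part of q.
    x, y, z, w = q
    cx = (y * v[2] - z * v[1], z * v[0] - x * v[2], x * v[1] - y * v[0])
    ccx = (y * cx[2] - z * cx[1], z * cx[0] - x * cx[2], x * cx[1] - y * cx[0])
    return tuple(v[i] + 2 * (w * cx[i] + ccx[i]) for i in range(3))

print(euler_to_quaternion(0, 0, 0))  # (0.0, 0.0, 0.0, 1.0) -> identity rotation
print(rotate(euler_to_quaternion(0, math.pi, 0), (0, 0, 1)))  # approx. (0, 0, -1): a pitch of pi flips z
```

<p>Rotating half a turn (3.14159 radians) is exactly how an inverted axis gets corrected, which is why that number shows up in the commands below.<\/p>\n\n\n\n<p>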
This took some time to research, but I found this: <a href=\"https:\/\/docs.vulcanexus.org\/en\/jazzy\/ros2_documentation\/source\/Tutorials\/Intermediate\/Tf2\/Writing-A-Tf2-Static-Broadcaster-Py.html\">https:\/\/docs.vulcanexus.org\/en\/jazzy\/ros2_documentation\/source\/Tutorials\/Intermediate\/Tf2\/Writing-A-Tf2-Static-Broadcaster-Py.html<\/a>.<\/p>\n\n\n\n<p>When I did this, I had movement! Wohooo! This was a milestone, because getting here took quite some time. The problem here, however, was that the z-axis was turned upside down. I realized that it was because the tf2 transform was set to 0 0 0 0 0 0. I had to type 0 0 0 0 3.14159 0. After doing this, the axis pointed the right way. The best one was: ros2 run tf2_ros static_transform_publisher 0 0 0 3.14159 3.14159 0 world sensehat_frame<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"553\" src=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-155-1024x553.png\" alt=\"\" class=\"wp-image-11722\" srcset=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-155-1024x553.png 1024w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-155-300x162.png 300w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-155-768x415.png 768w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-155-1536x830.png 1536w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-155.png 1873w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>Now my display looked like this:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"344\" height=\"484\" src=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-157.png\" alt=\"\" class=\"wp-image-11726\" style=\"width:289px;height:auto\" srcset=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-157.png 344w, 
https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-157-213x300.png 213w\" sizes=\"auto, (max-width: 344px) 100vw, 344px\" \/><\/figure>\n\n\n\n<p>Now, the next problem was that the y-axis yawed, and the z-axis rolled. I had to go into the code and change the order in which they were applied. By swapping yaw and roll, it was corrected.<\/p>\n\n\n\n<p>The next issue was that the y-axis rolled the opposite way of how I rolled the RPi4. I had to go into the code and remove the &#8211; (minus sign) in front of the y. After doing this, everything worked.<\/p>\n\n\n\n<figure class=\"wp-block-video\"><video controls src=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/Screencast-from-20.-okt.-2025-kl.-21.51-0200.webm\"><\/video><\/figure>\n\n\n\n<figure class=\"wp-block-video\"><video controls src=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/trim.F42BD930-2B38-44E9-AC62-F26BC87E8ABA1.mov\"><\/video><\/figure>\n\n\n\n<p>Done with reading the IMU data and visualizing it in rviz2! By completing this, I felt like Tony Stark. If Tony Stark had no charisma, girls, money and less technical skills. This is a great step on the way to mapping and A*!<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Making the Gui Great Again &#8211; MOST INSANE GUI ITERATION SO FAR<\/h4>\n\n\n\n<p>It&#8217;s time to get everything tied together in the GUI. At least the data from the Arduino, the IR ZeroCam and the \/driving_mode topic. I wanted to really optimize everything. I have done a lot of research, searched for hours. The results? Satisfying!<\/p>\n\n\n\n<p>I removed a lot of the placeholders from last week. I want it cleaner, and the data displayed to be useful. I also want the GUI to be more functional, making it easy to see whether the nodes are connected or disconnected. This is the cleanest the GUI has ever been. I removed all unnecessary complexity. 
I gave it my all, and this is how the design looks (for now):<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"678\" src=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-250-1024x678.png\" alt=\"\" class=\"wp-image-12031\" srcset=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-250-1024x678.png 1024w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-250-300x199.png 300w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-250-768x509.png 768w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-250.png 1288w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"678\" src=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-251-1024x678.png\" alt=\"\" class=\"wp-image-12032\" srcset=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-251-1024x678.png 1024w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-251-300x199.png 300w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-251-768x509.png 768w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-251.png 1288w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>I started here: <a href=\"https:\/\/docs.ros.org\/en\/foxy\/Tutorials\/Beginner-Client-Libraries\/Writing-A-Simple-Py-Publisher-And-Subscriber.html?utm_source=chatgpt.com\">https:\/\/docs.ros.org\/en\/foxy\/Tutorials\/Beginner-Client-Libraries\/Writing-A-Simple-Py-Publisher-And-Subscriber.html?utm_source=chatgpt.com<\/a>. And: <a href=\"https:\/\/docs.ros2.org\/foxy\/api\/std_msgs\/msg\/String.html?utm_source=chatgpt.com\">https:\/\/docs.ros2.org\/foxy\/api\/std_msgs\/msg\/String.html?utm_source=chatgpt.com<\/a>. 
And of course: <a href=\"https:\/\/docs.ros.org\/en\/humble\/Tutorials\/Beginner-Client-Libraries\/Writing-A-Simple-Py-Publisher-And-Subscriber.html?utm_source=chatgpt.com\">https:\/\/docs.ros.org\/en\/humble\/Tutorials\/Beginner-Client-Libraries\/Writing-A-Simple-Py-Publisher-And-Subscriber.html?utm_source=chatgpt.com<\/a>. If there is a god, this is him: <a href=\"https:\/\/github.com\/adityakamath\/sensehat_ros\/blob\/humble\/README.md\">https:\/\/github.com\/adityakamath\/sensehat_ros\/blob\/humble\/README.md<\/a>. <a href=\"https:\/\/docs.ros.org\/en\/foxy\/Concepts\/About-Executors.html\">https:\/\/docs.ros.org\/en\/foxy\/Concepts\/About-Executors.html<\/a><\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Displaying in GUI<\/h4>\n\n\n\n<p>The first thing I wanted to start with was the pressure data provided by the Sense HAT. The reason is that I had just validated that the \/pressure topic existed, and that the Sense_HAT_node published to it. This is the code I started to implement here, and I reused it later: <a href=\"https:\/\/docs.ros.org\/en\/humble\/Tutorials\/Beginner-Client-Libraries\/Writing-A-Simple-Py-Publisher-And-Subscriber.html?utm_source=chatgpt.com#write-the-subscriber-node\">https:\/\/docs.ros.org\/en\/humble\/Tutorials\/Beginner-Client-Libraries\/Writing-A-Simple-Py-Publisher-And-Subscriber.html?utm_source=chatgpt.com#write-the-subscriber-node<\/a>, <a href=\"https:\/\/gist.github.com\/robosam2003\/9e8cb1d8581ddd3af098a8813c64e71e\">https:\/\/gist.github.com\/robosam2003\/9e8cb1d8581ddd3af098a8813c64e71e, <\/a><a href=\"https:\/\/github.com\/tasada038\/pyqt_ros2_app\/blob\/master\/README.md.\">https:\/\/github.com\/tasada038\/pyqt_ros2_app\/blob\/master\/README.md.<\/a> (I have used more sites as well, they are all listed in the code):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>pressure_signal = pyqtSignal(float)\n\nself.pressure_signal.connect(self.update_pressure_label)\n\ndef update_pressure_label(self, value):\n        
self.lblPressureSenseHAT.setText(f\"Pressure: {value:.1f} hPa\")\n\nself.pressure_subscription = self.create_subscription(\n            FluidPressure,\n            '\/pressure',\n            self.pressure_callback,\n            qos_profile=qos_profile_sensor_data\n        )\n\ndef pressure_callback(self, msg: FluidPressure):\n  pressure_hpa = round(msg.fluid_pressure \/ 100.0, 1)\n  self.get_logger().info(f\"Pressure: {pressure_hpa} hPa\")\n  # emit via the Qt signal so the label update happens on the GUI thread\n  self.interface.pressure_signal.emit(pressure_hpa)\n<\/code><\/pre>\n\n\n\n<p>The whole idea here is to have a GUI node that can manage and display data and components within the system. I wanted to display the pressure data in the lblPressureSenseHAT, but I met big challenges:<\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"21\" src=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-171-1024x21.png\" alt=\"\" class=\"wp-image-11886\" style=\"width:975px;height:auto\" srcset=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-171-1024x21.png 1024w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-171-300x6.png 300w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-171-768x16.png 768w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-171-1536x32.png 1536w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-171.png 1774w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>Thankfully this was similar to the challenge of getting the Sense HAT visualization in rviz2. I had to switch policy. <a href=\"https:\/\/docs.ros.org\/en\/humble\/Concepts\/Intermediate\/About-Quality-of-Service-Settings.html\">https:\/\/docs.ros.org\/en\/humble\/Concepts\/Intermediate\/About-Quality-of-Service-Settings.html<\/a>. 
In the code above, qos_profile=qos_profile_sensor_data is the policy that was needed; I replaced the 10 with it. Where 10 gave the default reliable policy, we now have the best-effort policy.<\/p>\n\n\n\n<p>I have done some research on QT graphical view: <a href=\"https:\/\/www.pythonguis.com\/tutorials\/pyqt-qgraphics-vector-graphics\/?utm_source=chatgpt.com\">https:\/\/www.pythonguis.com\/tutorials\/pyqt-qgraphics-vector-graphics\/?utm_source=chatgpt.com<\/a>.<\/p>\n\n\n\n<p>Now I reused this code, did the same for temperature and humidity, and made a publisher.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Making the Camera a ROS2 Node<\/h3>\n\n\n\n<p> I did some research here, and discovered V4L2, <a href=\"https:\/\/docs.ros.org\/en\/humble\/p\/v4l2_camera\/\">https:\/\/docs.ros.org\/en\/humble\/p\/v4l2_camera\/<\/a>. I had to install and initialize the ROS2 driver for V4L2:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo apt update\nsudo apt install ros-humble-v4l2-camera\n<\/code><\/pre>\n\n\n\n<p>Starting the node:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>ros2 run v4l2_camera v4l2_camera_node<\/code><\/pre>\n\n\n\n<p>This publishes sensor_msgs\/Image on topic \/image_raw. I then tested if I had a video feed:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>ros2 run image_view image_view --ros-args -r image:=\/image_raw<\/code><\/pre>\n\n\n\n<p>This opened a window with a live-feed from the camera. 
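<\/p>\n\n\n\n<p>To draw this feed in the GUI later, the sensor_msgs\/Image message has to be unpacked: it is just a flat byte buffer plus a width, a height, an encoding and a row stride (\"step\"). A pure-Python sketch of what that unpacking means (FakeImage is a hypothetical stand-in for the real message type; in the actual node cv_bridge does this work):<\/p>

```python
from dataclasses import dataclass

# FakeImage is a hypothetical stand-in for sensor_msgs.msg.Image,
# carrying only the fields the conversion needs.
@dataclass
class FakeImage:
    width: int
    height: int
    step: int      # bytes per row (may include padding)
    encoding: str  # e.g. 'rgb8'
    data: bytes

def image_to_pixels(msg):
    # Unpack an rgb8 buffer into rows of (r, g, b) tuples.
    assert msg.encoding == 'rgb8'
    rows = []
    for y in range(msg.height):
        row_start = y * msg.step
        rows.append([
            tuple(msg.data[row_start + 3 * x: row_start + 3 * x + 3])
            for x in range(msg.width)
        ])
    return rows

# A 2x1 image: one red pixel, one green pixel.
img = FakeImage(width=2, height=1, step=6, encoding='rgb8',
                data=bytes([255, 0, 0, 0, 255, 0]))
print(image_to_pixels(img))  # [[(255, 0, 0), (0, 255, 0)]]
```

<p>In the real node this collapses to cv_bridge&#8217;s imgmsg_to_cv2(msg, 'rgb8'), and the resulting array is wrapped in a QImage with the same width, height and step.<\/p>\n\n\n\n<p>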
At this moment I knew that the ROS2 topic worked, and I was a bit happy and relieved.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"751\" height=\"191\" src=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/smarte_system.drawio5.png\" alt=\"\" class=\"wp-image-12027\" srcset=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/smarte_system.drawio5.png 751w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/smarte_system.drawio5-300x76.png 300w\" sizes=\"auto, (max-width: 751px) 100vw, 751px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"745\" height=\"185\" src=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/smarte_system.drawio4.png\" alt=\"\" class=\"wp-image-12025\" srcset=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/smarte_system.drawio4.png 745w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/smarte_system.drawio4-300x74.png 300w\" sizes=\"auto, (max-width: 745px) 100vw, 745px\" \/><\/figure>\n\n\n\n<p>I had to use rclpy for ROS2 communication,<\/p>\n\n\n\n<p>cv_bridge + OpenCV in order to convert from sensor_msgs\/Image to QImage,<\/p>\n\n\n\n<p>and the same PyQt5 signal for thread-safe updates in the GUI.<\/p>\n\n\n\n<p>Screenshot: <a href=\"https:\/\/stackoverflow.com\/questions\/10381854\/how-to-create-screenshot-of-qwidget\">https:\/\/stackoverflow.com\/questions\/10381854\/how-to-create-screenshot-of-qwidget<\/a> (images with pixmap)<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Displaying Readings of RPi4 Processor Temperature<\/h4>\n\n\n\n<p>I did some research and found this: <a href=\"https:\/\/github.com\/Infinite-Echo\/ros2_temperature_tracker.git\">https:\/\/github.com\/Infinite-Echo\/ros2_temperature_tracker.git<\/a>. 
I then:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>cd \/ros2_ws\/src\ngit clone https:\/\/github.com\/Infinite-Echo\/ros2_temperature_tracker.git\ncd ..\ncolcon build --symlink-install\nsource install\/setup.bash<\/code><\/pre>\n\n\n\n<p>I then struggled a bit to get it working. The problem, however, was that my RPi4 used a different zone name. On my RPi4 it is cpu-thermal. After changing this, the node published on \/cpu_temp fine! I then did the same process as listed for pressure, and it got displayed in the GUI.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Driving_mode topic<\/h4>\n\n\n\n<p>This topic makes it possible to control and monitor the driving mode from the GUI in real-time via ROS2. <a href=\"https:\/\/docs.ros.org\/en\/humble\/Tutorials\/Beginner-Client-Libraries\/Writing-A-Simple-Py-Publisher-And-Subscriber.html?utm_source=chatgpt.com\">https:\/\/docs.ros.org\/en\/humble\/Tutorials\/Beginner-Client-Libraries\/Writing-A-Simple-Py-Publisher-And-Subscriber.html?utm_source=chatgpt.com<\/a>, <a href=\"https:\/\/gist.github.com\/robosam2003\/9e8cb1d8581ddd3af098a8813c64e71e\">https:\/\/gist.github.com\/robosam2003\/9e8cb1d8581ddd3af098a8813c64e71e<\/a>, <a href=\"https:\/\/www.pythonguis.com\/docs\/qpushbutton\/?utm_source=chatgpt.com\">https:\/\/www.pythonguis.com\/docs\/qpushbutton\/?utm_source=chatgpt.com<\/a>. The subscription was set up the same way as the procedure for pressure. 
The buttons and the publishing node:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>self.driving_mode_publisher = self.create_publisher(\n    String,\n    '\/driving_mode',\n    10\n)\nself.btnControlDrivingMode.clicked.connect(self.set_joystick_mode)\nself.btnAutonomousDrivingMode.clicked.connect(self.set_autonomous_mode)<\/code><\/pre>\n\n\n\n<h4 class=\"wp-block-heading\">Demonstration of GUI:<\/h4>\n\n\n\n<figure class=\"wp-block-video\"><video controls src=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/Screencast-from-23.-okt.-2025-kl.-17.40-0200.webm\"><\/video><\/figure>\n\n\n\n<p>Everything I have started on this week is completed. I&#8217;m happy with the results!<\/p>\n\n\n\n<p>This has been an extremely productive week. <\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Next week<\/h4>\n\n\n\n<ul class=\"wp-block-list\">\n<li>GUI: Give the ROS2 nodes in System Status a red color if they are disconnected and green if they are connected.<\/li>\n\n\n\n<li>GUI: I want to implement rviz2 in the GUI<\/li>\n\n\n\n<li>GUI: implement file path with directory for pictures<\/li>\n\n\n\n<li>CAR: continue on the road to autonomous driving<\/li>\n\n\n\n<li>CAR: look at the physical components with Sander<\/li>\n\n\n\n<li>ARDUINO: implement fire module<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">Sander<\/h2>\n\n\n\n<p>This week I tested the newly built rover to see if it drove the way it should, and it looks like August and I mounted the wheels correctly, as the mecanum driving characteristics work flawlessly when using the joystick. There were some power problems when running the car from the USB output of a laptop, probably due to the increased weight and resistance on the motors. This was solved by getting a 3 m USB-to-USB extension cable so the DFR driver board could be powered from a wall outlet using the RPI OKDO 5V adapter. 
This resulted in the car being smooth and powerful with no issues. The longer cable also simplifies the testing of the rover, as we no longer have to walk by the rover with a laptop attached to it during testing. The RPi is powered by a standard 20,000 mAh USB power bank, and only draws approximately 4.5 W at 5.1 V.<\/p>\n\n\n\n<figure class=\"wp-block-video\"><video controls src=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/IMG_2940.mov\"><\/video><\/figure>\n\n\n\n<p>To get started on the autopilot node, I divided the lidar into 8 sectors of 45 degrees each, matching the car&#8217;s movement directions (Forward, Left, Right, etc.). I created a simple print program to check that the directions were correct and the readings were good. This method provides a simple collision-avoidance logic that uses rover_frame to make decisions from the rover&#8217;s perspective, and issues simple control commands to the micro:bit to turn the rover if an object appears in its path. This was more of a learning experience than it was relevant to our project, since we want to use SLAM to map and navigate using the global frame, transformed from the car using IMU data, instead of the relative frame of the rover.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"632\" src=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-252-1024x632.png\" alt=\"\" class=\"wp-image-12087\" srcset=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-252-1024x632.png 1024w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-252-300x185.png 300w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-252-768x474.png 768w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-252.png 1508w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<pre class=\"wp-block-code\"><code>auto min_range = &#091;&amp;](size_t start, size_t end) {\n    
auto begin = msg-&gt;ranges.begin() + start;\n    auto finish = msg-&gt;ranges.begin() + end;\n    auto it = std::min_element(begin, finish, &#091;](float a, float b) {\n        if (!std::isfinite(a)) return false;\n        if (!std::isfinite(b)) return true;\n        return a &lt; b;\n    });\n    return (it != finish) ? *it : std::numeric_limits&lt;float&gt;::infinity();\n};<\/code><\/pre>\n\n\n\n<p>This is a lambda function used to get the minimum value of the lidar readings. It goes through ranges[] from the LaserScan message. The begin and end parameters are used for checking different sectors independently. It uses std::isfinite() to filter out error readings, finding the smallest valid reading.<\/p>\n\n\n\n<p>I also tried to get the slam_toolbox to work with the simulation in gazebo that I made last week, but I didn&#8217;t get it to work, as I&#8217;m having some issues with the ros2 bridge that is supposed to publish data from gazebo to ros2. Next week I will probably just try to get it to work on the rover, as I will get my hands on the Sense HAT. <\/p>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Sondre<\/h2>\n\n\n\n<h4 class=\"wp-block-heading\">Transition to Qt<\/h4>\n\n\n\n<p>This week started with downloading the necessary software for our \u201cDashboard,\u201d which will serve as the GUI for the ground station. The plan is to have a simple ground station with a touchscreen that displays the dashboard. I have also moved away from WSL and now work directly in Ubuntu due to a lot of issues with WSL.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">About Qt<\/h4>\n\n\n\n<p>Qt is a framework for creating applications with graphical interfaces in C++ and QML. 
The reason I chose Qt is that it gives us a fast and responsive GUI, has good support for the touchscreen I\u2019m using, and makes it easy to integrate with C++ code.<\/p>\n\n\n\n<p>Qt uses QML for the actual UI design that the user interacts with, and C++ classes for the underlying logic.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Qt and ROS2<\/h4>\n\n\n\n<p>The GUI itself functions as its own ROS2 node, which means it can subscribe to other topics and use the data directly. The goal is to eventually subscribe to the topics the rover publishes and display them on the screen.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Code Structure<\/h4>\n\n\n\n<p>The GUI project is organized in a simple and clean file structure where each \u201cpackage\u201d or any future ROS nodes can be created under \u201csrc\u201d. For now, I only have the dashboard, as you can see in the picture below. The dashboard program consists of the files located under \/dashboard\/src\/, which I briefly explain below.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"302\" height=\"299\" src=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-300.png\" alt=\"\" class=\"wp-image-13236\" srcset=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-300.png 302w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-300-300x297.png 300w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-300-150x150.png 150w\" sizes=\"auto, (max-width: 302px) 100vw, 302px\" \/><\/figure>\n\n\n\n<h5 class=\"wp-block-heading\">Main.cpp<\/h5>\n\n\n\n<p>Main.cpp starts the entire program. It initializes the Qt application, connects the ROS2 functionality, and exposes RosNode to the QML side so the interface can access the C++ logic. 
Finally, it opens main.qml, which defines the GUI layout.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\">ros_node.hpp \/ ros_node.cpp<\/h5>\n\n\n\n<p>Here, RosNode is defined and implemented for the GUI to use. This class handles publishers, subscribers, callbacks, and runs ROS spinning in a separate thread so the GUI doesn\u2019t freeze. It acts as the bridge between sensor data from the rover and the GUI.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\">main.qml<\/h5>\n\n\n\n<p>In this file, the actual layout of the GUI is defined. This is where the dashboard\u2019s appearance is designed. Here I create buttons, windows, and displays for sensor data.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\">qml.qrc<\/h5>\n\n\n\n<p>This file registers and makes the QML files available to Qt. It allows the GUI to be packaged and run as a single application on the Raspberry Pi.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<p>The image below shows a first draft of the dashboard, where I have added some buttons that do not do anything yet. 
Eventually, I will also add a separate window that displays sensor data from the rover.<\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"644\" data-id=\"13237\" src=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-301-1024x644.png\" alt=\"\" class=\"wp-image-13237\" srcset=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-301-1024x644.png 1024w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-301-300x189.png 300w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-301-768x483.png 768w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-301-1536x966.png 1536w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-301.png 1623w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n\n\n\n<p>I also sketched a rough design of how the ground station should look. 
The plan is to bring this to Richard next week to finalize it and hopefully get it 3D-printed.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"945\" height=\"915\" src=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-302.png\" alt=\"\" class=\"wp-image-13238\" srcset=\"https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-302.png 945w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-302-300x290.png 300w, https:\/\/dronesonen.usn.no\/wp-content\/uploads\/2025\/10\/image-302-768x744.png 768w\" sizes=\"auto, (max-width: 945px) 100vw, 945px\" \/><\/figure>\n\n\n\n<h4 class=\"wp-block-heading\">Plan for Next Week:<\/h4>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Talk with Richard about the ground station and possible 3D printing<\/li>\n\n\n\n<li>Talk with Steven about the touchscreen and GPS<\/li>\n\n\n\n<li>Improve the GUI code<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Oliver<\/h2>\n\n\n\n<p>This week, I unfortunately haven\u2019t been able to work much due to health-related reasons. The LiDAR belt has arrived, and I\u2019ve replaced it. I tested the code, but there are major calibration errors that I\u2019m working on fixing. However, I did manage to resolve other issues, such as missing data packets.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>August: Implementing IMU for Autonomous Car Finally, everything is ready! This week will be like shark week on Discovery Channel, just instead of sharks, there will be autonomous cars. This will be frickin awesome. I have done a lot of research. I will use this github repo: https:\/\/github.com\/adityakamath\/sensehat_ros. This have everything we will need. 
We [&hellip;]<\/p>\n","protected":false},"author":115,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-11670","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/dronesonen.usn.no\/index.php?rest_route=\/wp\/v2\/posts\/11670","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dronesonen.usn.no\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dronesonen.usn.no\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dronesonen.usn.no\/index.php?rest_route=\/wp\/v2\/users\/115"}],"replies":[{"embeddable":true,"href":"https:\/\/dronesonen.usn.no\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=11670"}],"version-history":[{"count":118,"href":"https:\/\/dronesonen.usn.no\/index.php?rest_route=\/wp\/v2\/posts\/11670\/revisions"}],"predecessor-version":[{"id":13599,"href":"https:\/\/dronesonen.usn.no\/index.php?rest_route=\/wp\/v2\/posts\/11670\/revisions\/13599"}],"wp:attachment":[{"href":"https:\/\/dronesonen.usn.no\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=11670"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dronesonen.usn.no\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=11670"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dronesonen.usn.no\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=11670"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}