Week 13
After our realisation that a Raspberry Pi only supports a single camera, we went ahead and got hold of another Raspberry Pi and connected the two with an ethernet cable. After setting up the network between them and configuring a few ROS settings, the ROS installations on the two Pis could communicate with each other. We have also managed to get the raspicams working with ROS, so they publish their image streams on ROS topics.
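As a quick sanity check of the multi-machine setup, a small subscriber along the lines of the sketch below can be run on the laptop or the other Pi to confirm that frames published on one Pi actually arrive across the network. The topic name is an assumption and depends on how the raspicam node is configured.

```python
#!/usr/bin/env python
# Minimal sketch: count frames arriving from a camera topic published on
# another machine, to verify the multi-machine ROS setup is working.
# The topic name below is an assumption; it depends on the raspicam node config.
import rospy
from sensor_msgs.msg import CompressedImage

count = 0

def on_image(msg):
    global count
    count += 1

def main():
    rospy.init_node('camera_link_check')
    rospy.Subscriber('/raspicam_node/image/compressed', CompressedImage, on_image)
    rate = rospy.Rate(1)  # report once per second
    while not rospy.is_shutdown():
        rospy.loginfo('frames received so far: %d', count)
        rate.sleep()

if __name__ == '__main__':
    main()
```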
With camera output in place we tested how the ROS package find_object_2d performed for our task. For reasons we're not quite sure of, the package performed pretty badly at any distance other than roughly the one the object was added from, which is not optimal for our purposes (finding an object while exploring). We'll spend some more time looking into it later.
Here we got the object detection to detect the raspberry logo
On the subject of our car, we have finally got it working with ROS as well. There was a lot of fiddling with CMake and one of the libraries, but we have now successfully ported our earlier Arduino code over to a ROS format and changed the pins from Arduino pins to Raspberry Pi pins. To test the car controls we ran the keyboard teleop node from turtlesim (part of the ROS tutorials) and controlled the car through it, letting us drive back and forth and steer using the arrow keys (it was pretty badass).
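As a rough illustration of how the teleop messages reach the motors, here is a minimal sketch that turns geometry_msgs/Twist messages into L298N direction pins using RPi.GPIO. The pin numbers and the /turtle1/cmd_vel topic are assumptions; our actual node is the ported Arduino code.

```python
#!/usr/bin/env python
# Minimal sketch: drive the L298N from teleop Twist messages.
# Pin numbers are placeholders, not the pins used in our port.
import rospy
from geometry_msgs.msg import Twist
import RPi.GPIO as GPIO

LEFT_FWD, LEFT_BACK = 17, 27    # hypothetical BCM pins for the left motor
RIGHT_FWD, RIGHT_BACK = 23, 24  # hypothetical BCM pins for the right motor

def on_cmd_vel(msg):
    # Mix forward speed and turn rate into simple left/right on/off commands.
    forward = msg.linear.x
    turn = msg.angular.z
    left = forward - turn
    right = forward + turn
    GPIO.output(LEFT_FWD, left > 0)
    GPIO.output(LEFT_BACK, left < 0)
    GPIO.output(RIGHT_FWD, right > 0)
    GPIO.output(RIGHT_BACK, right < 0)

def main():
    GPIO.setmode(GPIO.BCM)
    for pin in (LEFT_FWD, LEFT_BACK, RIGHT_FWD, RIGHT_BACK):
        GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)
    rospy.init_node('car_driver')
    rospy.Subscriber('/turtle1/cmd_vel', Twist, on_cmd_vel)
    rospy.spin()
    GPIO.cleanup()

if __name__ == '__main__':
    main()
```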
Since we had some problems with the Raspberry Pis freezing while calibrating the cameras, we connected one of our laptops and the Pis to a network switch and added the laptop to the same ROS network. We could then calibrate the cameras on the laptop while streaming the camera output from the Pis.
Week 14
With the cameras calibrated, we've been working on getting the stereo camera setup to work for 3D mapping, but as it turns out this is quite a bit of work. So far we have:
- Written a ROS node to synchronize the images output by the camera with the corresponding camera info. It turned out this wasn't enough; we also had to synchronize those messages with the messages from the other camera, so we made our synchronizing node handle those as well (see the sketch after this list).
- Fed the synchronized camera feed to a stereo image processing node, to generate messages needed by the mapping nodes.
- Recalibrated the cameras several times and changed tons of settings, both on the cameras and in the stereo image processing.
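Our synchronizing node is essentially a four-way message synchronizer. Below is a minimal sketch of the idea using message_filters to approximately time-align both image streams and both camera_info streams before they reach the stereo processing. The topic names, queue size and slop value are assumptions, and re-stamping all four messages identically is one possible way to make the downstream stereo node pair them up; the real node also handles remapping.

```python
#!/usr/bin/env python
# Minimal sketch of the synchronizing node: align left/right images and
# camera info by timestamp and republish them with a shared time stamp.
# Topic names, queue size and slop are placeholders.
import rospy
import message_filters
from sensor_msgs.msg import Image, CameraInfo

def main():
    rospy.init_node('stereo_sync')

    pub_left = rospy.Publisher('/stereo/left/image_raw', Image, queue_size=5)
    pub_left_info = rospy.Publisher('/stereo/left/camera_info', CameraInfo, queue_size=5)
    pub_right = rospy.Publisher('/stereo/right/image_raw', Image, queue_size=5)
    pub_right_info = rospy.Publisher('/stereo/right/camera_info', CameraInfo, queue_size=5)

    def on_sync(left, left_info, right, right_info):
        # Give all four messages the same stamp so the stereo node pairs them.
        stamp = left.header.stamp
        for msg in (left, left_info, right, right_info):
            msg.header.stamp = stamp
        pub_left.publish(left)
        pub_left_info.publish(left_info)
        pub_right.publish(right)
        pub_right_info.publish(right_info)

    subs = [
        message_filters.Subscriber('/pi1/image_raw', Image),
        message_filters.Subscriber('/pi1/camera_info', CameraInfo),
        message_filters.Subscriber('/pi2/image_raw', Image),
        message_filters.Subscriber('/pi2/camera_info', CameraInfo),
    ]
    sync = message_filters.ApproximateTimeSynchronizer(subs, queue_size=10, slop=0.1)
    sync.registerCallback(on_sync)
    rospy.spin()

if __name__ == '__main__':
    main()
```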
At this point we've come quite close to being able to 3D map. We've got it to barely work, and it does map some things, but only just. One of our Pis is a Raspberry Pi 2B+, and we suspect this may be part of the problem, as it only manages a very limited framerate.
Working on the Pis was a bit of a hassle, so we've moved all of our ROS development away from the Pis and onto the laptop instead. And since the ROS setup now involved multiple nodes, most of them with multiple input parameters, spread over three machines, stopping and starting nodes and changing settings was a huge pain. To make things easier for ourselves we spent some time cleaning up and created one big launch file which remotely launches the camera nodes on the Pis, remaps and renames the topics, launches the synchronizing node, and then launches the stereo image processing. The time it took to set up was well worth it, as it made everything a ton easier.
We also had setbacks two days in a row. First, our ROS setup refused to produce executables, which meant that none of the nodes we were working on would launch. After hours of troubleshooting and trying most things, we ended up creating a new catkin workspace, which luckily fixed it, even though we still don't know what caused the problem. The second setback was that one of our Pis suddenly died. We had brought the Raspberry Pis home after a day of work, and the next day when we started working again one of them was dead. Luckily one of our group members had a Raspberry Pi at home we could use, so we eventually got back on track.
While working on our car, after putting both cameras on their mount, we realised that the claw and whatever was in its grip were outside the cameras' field of view. This meant we had no way of knowing whether the object to be picked up was inside the claw, and that we'd be forced to make an assumption. To solve this we fitted the car with an ultrasonic sensor at the front, so the car can measure the distance to the object it is about to pick up. We updated the code to support the sensor and made the claw close when the object is close enough. This code was also put on ROS, and we added a service to the node so that the node controlling the system can command the arm and tell it when it should get ready to close.
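A minimal sketch of that idea is shown below. The pin numbers, service name and distance threshold are assumptions, and it uses std_srvs/Trigger rather than our actual service definition: the node measures distance with the HC-SR04-style sensor and, once armed via the service, closes the gripper when the object is within reach.

```python
#!/usr/bin/env python
# Minimal sketch: measure distance with the ultrasonic sensor and close the
# gripper when an object is close enough, but only after being armed via a
# ROS service. Pins, names and threshold are placeholders; timeouts omitted.
import rospy
import RPi.GPIO as GPIO
from std_srvs.srv import Trigger, TriggerResponse

TRIG, ECHO = 5, 6          # hypothetical BCM pins
CLOSE_DISTANCE_M = 0.05    # hypothetical grab threshold

armed = False

def handle_arm(req):
    global armed
    armed = True
    return TriggerResponse(success=True, message='gripper armed')

def measure_distance():
    # Standard HC-SR04 timing: 10 us trigger pulse, then time the echo.
    GPIO.output(TRIG, GPIO.HIGH)
    rospy.sleep(0.00001)
    GPIO.output(TRIG, GPIO.LOW)
    start = rospy.get_time()
    while GPIO.input(ECHO) == 0:
        start = rospy.get_time()
    end = start
    while GPIO.input(ECHO) == 1:
        end = rospy.get_time()
    return (end - start) * 343.0 / 2.0  # speed of sound, out and back

def close_gripper():
    rospy.loginfo('closing gripper')  # the real node drives the stepper here

def main():
    global armed
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(TRIG, GPIO.OUT, initial=GPIO.LOW)
    GPIO.setup(ECHO, GPIO.IN)
    rospy.init_node('gripper_node')
    rospy.Service('arm_gripper', Trigger, handle_arm)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        if armed and measure_distance() < CLOSE_DISTANCE_M:
            close_gripper()
            armed = False
        rate.sleep()
    GPIO.cleanup()

if __name__ == '__main__':
    main()
```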
Because we had a lot of problems with our 3D mapping, we decided on a cruder technique for car guidance. Since we had already set up find_object_2d to detect the object we are going to fetch, we decided to just use that to navigate the car. This means it has no obstacle detection, but it can still find and pick up the object when we put it in front of it. Sadly, because of our 3D mapping problems we do not have sufficient odometry to have the car return to its original position, but this is the best we can do for now. We did, however, manage to improve the performance and accuracy of find_object_2d by drastically increasing the maximum number of features it can find in one frame.
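A rough sketch of the guidance idea follows. It assumes find_object_2d's documented /objects layout of 12 floats per detection (id, width, height, then a 3x3 homography), and the steering gains, image width and cmd_vel topic are placeholders rather than our actual values: the car steers towards the detected object by projecting the object's centre through the homography and comparing it with the image centre.

```python
#!/usr/bin/env python
# Rough sketch of the crude guidance: steer towards the object reported by
# find_object_2d. Gains, image width and topic names are placeholders.
import rospy
import numpy as np
from std_msgs.msg import Float32MultiArray
from geometry_msgs.msg import Twist

IMAGE_WIDTH = 640.0   # assumed camera resolution
TURN_GAIN = 0.005
FORWARD_SPEED = 0.2

def on_objects(msg, pub):
    cmd = Twist()
    if msg.data:
        # Each detection: id, object width, object height, then the homography
        # values as published by find_object_2d (transposed vs. the usual H).
        obj_width, obj_height = msg.data[1], msg.data[2]
        h = np.array(msg.data[3:12], dtype=float).reshape(3, 3).T
        centre = h.dot(np.array([obj_width / 2.0, obj_height / 2.0, 1.0]))
        centre_x = centre[0] / centre[2]
        cmd.linear.x = FORWARD_SPEED
        cmd.angular.z = TURN_GAIN * (IMAGE_WIDTH / 2.0 - centre_x)
    pub.publish(cmd)  # publishes a zero Twist (stop) when nothing is detected

def main():
    rospy.init_node('object_follower')
    pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
    rospy.Subscriber('objects', Float32MultiArray, on_objects, callback_args=pub)
    rospy.spin()

if __name__ == '__main__':
    main()
```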
Image showing our ROS node structure
To show the wiring we chose to use the program Fritzing, which has visual figures of the components used in most projects like this. After finding the components, we connected them on a breadboard to get a common ground and supply voltage.
The components we used:
- Raspberry Pi x2
- Power bank x2
- L298N motor controller
- DC motor x2
- Ultrasonic sensor
- 28BYJ-48 stepper motor
- ULN2003A stepper motor driver
- 9V battery
- Raspberry Pi mini camera x2
The cameras are used for 3D mapping and object recognition. The stepper motor and its driver control the arm/grip. The ultrasonic sensor determines when to use the arm and grab the object. The DC motors and the L298N drive the wheels.
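For reference, driving the 28BYJ-48 through the ULN2003 board boils down to energising its four coils in sequence. The sketch below shows the usual half-step pattern; the pin numbers, step counts and delay are assumptions, not our exact gripper code.

```python
#!/usr/bin/env python
# Minimal sketch: step the 28BYJ-48 through the ULN2003 driver using the
# common 8-phase half-step sequence. Pins, step count and delay are placeholders.
import time
import RPi.GPIO as GPIO

COIL_PINS = [12, 16, 20, 21]  # hypothetical BCM pins wired to IN1..IN4

HALF_STEPS = [
    [1, 0, 0, 0],
    [1, 1, 0, 0],
    [0, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
    [1, 0, 0, 1],
]

def move(steps, delay=0.001):
    """Rotate the stepper; negative steps reverse the direction."""
    sequence = HALF_STEPS if steps >= 0 else list(reversed(HALF_STEPS))
    for i in range(abs(steps)):
        pattern = sequence[i % len(sequence)]
        for pin, value in zip(COIL_PINS, pattern):
            GPIO.output(pin, value)
        time.sleep(delay)

def main():
    GPIO.setmode(GPIO.BCM)
    for pin in COIL_PINS:
        GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)
    try:
        move(512)    # close the grip a bit (placeholder amount)
        move(-512)   # open it again
    finally:
        GPIO.cleanup()

if __name__ == '__main__':
    main()
```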
In relation to our goals for this project, we did not manage to achieve all the A requirements. The main reason was that we had to learn a lot of our tools, such as ROS and CMake, from scratch, while not being very well versed in Linux as an operating system. This led to a lot of fiddling to get the tools working, and each feature became an uphill battle. While we did not believe our goals would be easy, especially the 3D mapping part, we did underestimate how hard it would be to get the cameras properly calibrated and configured to produce good depth maps.
We did, however, learn a lot as the project went on; the flip side of not knowing your tools is that you have to learn them very quickly as the project develops. That is what we did, so by the end our idea -> working feature cycles were much more rapid. Problems that might have taken us a week to debug now only take days or even hours.
GitHub link:
https://github.com/henrik9864/ProjectFetch