Welcome back to our blog post for the fourth week of our ToyzRgone smart system project. Below, you will find the contributions made by each of our team members either individually or in pairs:
Philip Dahl 🔋
Motor Control
This week I began testing the motor we are using for the wheels. The NEMA 17 stepper motor is a fitting upgrade from the DC motors we previously had on hand, as it is more precise and can better handle the weight of the robot. To begin, I searched for a datasheet to find the current rating of the motor. As the NEMA 17 is quite commonly used, there are many different models with slightly different specifications, but most of them have a current rating between 1 and 2 A and an operating voltage of 12-24 V.
Mikolaj provided me with a BL-TB6560 motor driver and a power supply for testing. The driver takes an input voltage of 10-34 V, outputs up to 3 A, and can control one stepper motor.
I connected everything together and hooked the power supply up to the circuit. I used example code from the Arduino Stepper library for testing but could not get it to work. I tried different current settings and eventually got the motor to make some noise, but not to spin properly. With some more research I found that the wires connected to the two coils within the motor are often “mislabeled”, which was most likely the issue.
Before fully resolving the issue with the previous driver, Ruben provided me with a much smaller motor driver, the A4988, which can also control one stepper motor and has similar enough specifications for testing. I wired up a new circuit with this driver and tested again. This time the motor moved, but not very smoothly, and it vibrated quite a bit. To check whether the motor might be faulty, I swapped in another NEMA 17 stepper that Ruben had on hand and tested again. Now the movement was much smoother and less noisy.
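Independent of the Arduino test code, the A4988 is driven through a simple step/dir interface: one pin sets the rotation direction and every pulse on the step pin advances the motor one (micro)step. Below is a purely illustrative Python sketch of that idea using RPi.GPIO on a Raspberry Pi; the pin numbers and step delay are placeholder assumptions, not our actual wiring.

```python
import time
import RPi.GPIO as GPIO

STEP_PIN = 20        # hypothetical BCM pin wired to the A4988 STEP input
DIR_PIN = 21         # hypothetical BCM pin wired to the A4988 DIR input
STEPS_PER_REV = 200  # typical NEMA 17: 1.8 deg/step -> 200 full steps per revolution

GPIO.setmode(GPIO.BCM)
GPIO.setup(STEP_PIN, GPIO.OUT)
GPIO.setup(DIR_PIN, GPIO.OUT)

def rotate(revolutions, clockwise=True, step_delay=0.002):
    """Pulse the STEP pin once per step; DIR selects the rotation direction."""
    GPIO.output(DIR_PIN, GPIO.HIGH if clockwise else GPIO.LOW)
    for _ in range(int(revolutions * STEPS_PER_REV)):
        GPIO.output(STEP_PIN, GPIO.HIGH)
        time.sleep(step_delay)
        GPIO.output(STEP_PIN, GPIO.LOW)
        time.sleep(step_delay)

try:
    rotate(1.0, clockwise=True)  # one full revolution
finally:
    GPIO.cleanup()
```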
Mapping
As our robot will drive autonomously around the room, it needs to map its environment properly. Our dearest computer engineers investigated solutions for this and found that using LiDAR with ROS would do the job. Hamsa got hold of a YDLiDAR (4X?) component and only needed some assistance wiring it correctly before proceeding with the programming.
Looking through the datasheets of a few similar-looking models, I could not find one with matching wires, and I soon found out that a motor driver was missing from the model we had received. I managed to identify the wires by taking the top off the LiDAR, but without a motor driver we cannot control the motor that spins the sensor around. “All” that is needed is to get a separate driver, connect it to the motor, and then make sure everything works properly for Hamsa and Sokaina. Until I find the necessary materials, the electrical part of the mapping process is on hold.
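Once the LiDAR spins and publishes data, the software side mostly comes down to reading scan messages in ROS. As a rough starting point for Hamsa and Sokaina, here is a minimal Python subscriber, assuming a driver node that publishes a standard sensor_msgs/LaserScan; the /scan topic name is an assumption and depends on the driver configuration.

```python
import rospy
from sensor_msgs.msg import LaserScan

def scan_callback(scan):
    # One message per sweep: ranges[] in metres, starting at angle_min
    # and spaced angle_increment radians apart.
    valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
    if valid:
        rospy.loginfo("Closest obstacle: %.2f m", min(valid))

if __name__ == "__main__":
    rospy.init_node("lidar_listener")
    rospy.Subscriber("/scan", LaserScan, scan_callback)  # topic name is an assumption
    rospy.spin()
```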
Hamsa Hashi 💻
Overcoming Obstacles
This week, I worked on several challenges with my Raspberry Pi, starting with connecting it to Eduroam. Eduroam’s WPA2-Enterprise security required advanced WPA Supplicant configuration, and I encountered missing CA certificates, which added to the difficulty. After trying various solutions, I temporarily connected the Pi to the router via Ethernet, but I still need to find a permanent Wi-Fi solution.
In addition to the network issues, I faced challenges with library imports due to the ARM architecture of the Raspberry Pi. Manually installing dependencies like h5py, along with compatibility problems in the Debian Bookworm distribution, caused errors. I resolved this by downgrading Python to version 3.11.10, since TensorFlow doesn’t support 3.12, which fixed the missing-dependency errors. I used a combination of pip and conda in virtual environments to manage all the libraries needed for now.
Training the dataset directly on the Raspberry Pi was extremely slow due to limited processing power, RAM, no GPU support, and disk speed bottlenecks. I’ve stopped that process and will move the training to Google Colab next week for faster processing, thanks to free access to powerful GPUs, easier setup, and better storage through Google Drive. Despite these challenges, I found some solutions and I’m continuing to make progress.
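For reference, the move to Colab mostly amounts to mounting Google Drive for the dataset and confirming that a GPU runtime is available before training. A minimal sketch (the dataset path is a placeholder):

```python
# Run inside a Google Colab notebook with a GPU runtime selected.
from google.colab import drive
import tensorflow as tf

drive.mount("/content/drive")                  # dataset and checkpoints live on Drive
print(tf.config.list_physical_devices("GPU"))  # should list the Colab GPU

DATASET_DIR = "/content/drive/MyDrive/toyzrgone_dataset"  # placeholder path
```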
https://stackoverflow.com/questions/74205727/raspberrypi-importerror-unable-to-import-required-dependencies-numpy
Exploring ROS for Future Integration in Our Robot Project
I’ve recently explored ROS (Robot Operating System) to see how it could benefit our robot project, particularly in areas like autonomous navigation and object recognition. ROS offers potential flexibility by enabling communication with multiple sensors and cameras, and allowing for real-time updates and module adjustments.
This week, I spent time learning about ROS and its capabilities on Raspberry Pi. While I haven’t installed it yet, I plan to explore it further later, as its potential in object detection and navigation could be a significant advantage.
Our goal remains to use the Pi camera for gathering precise coordinates and detecting small colored balls, allowing the robot to analyze data in real-time, avoid obstacles, and navigate autonomously.
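As a stopgap before the trained model is ready, a classical OpenCV colour threshold can already give rough pixel coordinates of a ball from the Pi camera frames. The sketch below is illustrative only; the HSV range shown is a rough guess for red and would need tuning to our lighting.

```python
import cv2
import numpy as np

def find_ball(frame_bgr, lower_hsv, upper_hsv):
    """Return (x, y, radius) of the largest blob in the given HSV range, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (x, y), radius = cv2.minEnclosingCircle(largest)
    return int(x), int(y), int(radius)

# Example: rough HSV range for a red ball (assumed values, tune per lighting)
red_lower = np.array([0, 120, 70])
red_upper = np.array([10, 255, 255])
```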
Next Week’s Plan
Looking ahead to next week, my plan is to work with Sokaina to finalize the dataset, or at least create a test file. We’ll use the dataset we found online and supplement it with some images we’ve taken ourselves to improve precision. Once that’s done, we’ll begin integrating the dataset into our system for testing. This should give us valuable insights into how well the model performs with real-world data and help refine the detection algorithms. I’m optimistic and will provide an update on our progress next week.
Kevin Paulsen 🛠️
This week I made some improvements to the Mark 2 robotic arm, which marked the debut of the newly developed Mark 3 model. The Mark 3 has been designed and fitted with the Dynamixel servos we intend to use, along with aesthetic improvements such as fillets and rounded edges to create a slimmer, more aerodynamic, and less bulky appearance.
The biggest change in the Mark 3 is the gripper. Instead of the two claws, which could have some trouble picking up round balls (important for our project), I went for a completely new design featuring five claws. This change makes it much easier to grip the ball securely, so it’s less likely to drop once it’s grabbed. Even though the new gripper is bigger and more complicated, we could actually remove one of the servos, since we no longer need the gripper to rotate, which reduces some weight and bulk at the tip of the arm.
Plan for next week:
Next week I will try to figure out how to fasten the gripper assembly to the rest of the arm, and see if I can trim down the base of the gripper to reduce the moment force and unnecessary weight/size.
Mikolaj Szczeblewski 🔋
A lot has changed since last week, including the electrical engineers’ priorities and goals for the project; more about that in depth later. Firstly, I’d like to express pride and joy over this group, as I have never seen such inspiration in a team as during the last week.
Both the mechanical and computer engineers have been fantastic at communicating. We have constantly discussed alternative approaches, whether the topic was software, hardware, or the 3D models for our project. As the one named team leader, I feel an obligation not only to give feedback about the project itself, but also on how the group evolves over time. Now for the practical part:
Servos
This week I’ve been doing extensive research on the Dynamixel XM430-W210-T servos. They are equipped with contactless magnetic encoders, which allow 360-degree rotation at up to 95 RPM. It is also a strong servo for its size, delivering a torque of up to 3.7 N·m, which helps with heavy loads. (Keep in mind that this torque figure applies at the maximum applied voltage of 14.8 V.)
The servo communicates over either TTL or half-duplex UART. For our purposes, we have decided to use TTL, given that we have obtained the OpenCM 9.04 board, which has four 3-pin TTL ports. (The OpenCM 9.04 is a microcontroller board based on an STM32F103CB processor.) This board gives us the opportunity to daisy-chain the servos, which will help in testing the robotic arm’s DoF (degrees of freedom).
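For completeness, ROBOTIS also provides a Dynamixel SDK with Python bindings; when the servos are driven from a PC or Raspberry Pi rather than from code on the OpenCM itself, moving one joint comes down to writing to the XM430 control table. Below is a minimal sketch, assuming Protocol 2.0, the factory-default 57600 baud and a servo with ID 1; the port name and control-table addresses should be checked against the e-manual.

```python
from dynamixel_sdk import PortHandler, PacketHandler

PORT = "/dev/ttyUSB0"        # assumed serial port
BAUD = 57600                 # XM430 factory default
DXL_ID = 1                   # assumed servo ID
ADDR_TORQUE_ENABLE = 64      # XM430 control table (Protocol 2.0)
ADDR_GOAL_POSITION = 116

port = PortHandler(PORT)
packet = PacketHandler(2.0)
port.openPort()
port.setBaudRate(BAUD)

packet.write1ByteTxRx(port, DXL_ID, ADDR_TORQUE_ENABLE, 1)     # enable torque
packet.write4ByteTxRx(port, DXL_ID, ADDR_GOAL_POSITION, 2048)  # ~180 deg (0-4095 range)
port.closePort()
```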
To actually test the servo on the board, I had to solder the ports onto the board, as it doesn’t come in a pre-soldered state. In addition, I was also working with Philip on the stepper motors that will drive our mecanum wheels.
I didn’t limit my help to my own discipline either: every time my research showed we needed a particular component, I simultaneously looked for GitHub repositories for it and sent them over to the computer engineers.
Plan for next week:
Next week we are awaiting more components: three stepper motors along with four rotary magnetic encoders, which will provide a closed-loop feedback system for the stepper motors. (To put it simply, the encoder lets the stepper motor constantly know its exact position in terms of its “microsteps”, the very small rotations it can make.)
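To illustrate the closed-loop idea, here is a toy Python sketch: step towards the target, read the magnetic encoder, and correct until the measured angle matches. The helper functions read_encoder_degrees and do_steps are hypothetical placeholders, and the 1/16 microstepping and limits are assumptions for illustration only.

```python
STEP_ANGLE = 1.8 / 16  # degrees per microstep, assuming 1/16 microstepping

def move_to(target_deg, read_encoder_degrees, do_steps, tolerance=0.2):
    """Toy closed-loop move: step, read the encoder, correct until on target."""
    while True:
        error = target_deg - read_encoder_degrees()
        if abs(error) <= tolerance:
            return
        steps = int(error / STEP_ANGLE)
        # Clamp the burst size so a large error is corrected gradually.
        do_steps(max(min(steps, 200), -200))
```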
We also need a solution for the mecanum wheels: the shaft of our stepper motors does not fit the mecanum wheels we currently have, so reaching a common understanding on this will be one of the focuses for the electrical and mechanical engineers next week.
Sokaina Cherkane 💻
The main focus this week has been training the dataset imported from Roboflow, so that the machine learning model (e.g. a CNN) can learn patterns from the data and perform the required task, which in this case is detecting medium-sized red, green, and blue balls. In supervised learning, the model analyses the training data to identify the patterns, trends, features, and shapes that help it make precise predictions. The more epochs we train for, the better the chance the model has of detecting the object (the balls) accurately, which lowers the error rate during detection. During training, the model adjusts its internal parameters, such as weights and biases, to minimize the difference between its predictions and the actual outputs. The program will then be able to recognise specific features such as the shape, color, and size of the ball(s), while minimizing the error in identifying the object and its location (the bounding box). At the end of the training process, the model should correctly detect and classify the balls. The balls we are working with have a diameter of d = 62 mm, come in red, blue, and green, and are naturally spherical.
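To make the terms above concrete, here is a minimal, illustrative Keras CNN for classifying small images into the three ball colors. This is a simplified classification-only sketch; the layer sizes and 64x64 input are placeholder assumptions, and our actual pipeline also has to predict bounding boxes.

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 3  # red, green, blue balls

model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(16, 3, activation="relu"),  # low-level features (edges, color blobs)
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),  # higher-level shape features
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Training adjusts the weights and biases to minimize this loss over many epochs.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=100)
```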
After importing the desired dataset, installing the needed packages, and writing suitable code, I started the training process, as shown in the picture below.
The process ran pretty smoothly until it reached epoch 50/100, then it stopped on its own.
It took me some time to figure out why this error occurred. There were two main possible causes:
- Semaphore leaks: because the program uses multiprocessing, semaphore objects are created automatically to guard shared resources, and they can leak whether I am using normal memory or swap.
- Memory leaks: the program allocates memory but does not release it after use.
In this case it turned out to be a memory leak: the program kept consuming more memory until the system ran out.
Therefore, the next step is one of two options:
- Use swap memory to delay the symptoms of the memory leak. Swap provides extra virtual memory, which gives the program more room to work with without causing semaphore leaks. However, a memory-intensive program running on swap can become very slow, and it only masks the underlying problem of code that mishandles resources.
- Store the dataset on an external hard drive and save intermediate results there. This frees up local storage, avoids memory swapping, and should improve training stability.
We will also be using TensorFlow as our machine learning framework to implement a CNN (Convolutional Neural Network) and integrate it into the dataset-training process. This should make detection on large images more efficient, since CNNs automatically learn and extract important features from images, such as shape, size, and color.
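Related to the external-drive option, one way to keep RAM usage bounded is to stream images from disk in batches instead of loading the whole dataset into memory. A sketch using tf.keras.utils.image_dataset_from_directory; the path, split, and sizes are placeholder assumptions.

```python
import tensorflow as tf

DATA_DIR = "/mnt/external_drive/ball_dataset"  # placeholder path on the external drive

# Images are loaded lazily in batches, so only one batch needs to fit in RAM at a time.
train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR,
    validation_split=0.2,
    subset="training",
    seed=42,
    image_size=(64, 64),
    batch_size=16,
)
train_ds = train_ds.prefetch(tf.data.AUTOTUNE)  # overlap disk loading with training
```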
TO BE DONE:
- I will make my own dataset from scratch (taking 100+ pictures and storing them on an external hard drive).
- Train the model on the new dataset.
- Do more research on implementing the CNN.
Ruben Henriksen 🛠️
This week I started by creating the first prototype of the base for the robot. Cooperating with the electrical engineers, we decided that we might want to use NEMA 17 stepper motors, and I realized that we need larger mecanum wheels than the ones we got previously.
Next week
I have to check if there are some larger wheels available at USN, then continue the design and create a prototype.
To the left is an image of the internals: with the battery pack in the rear, there should be ample space for the electronics inside. To the right is an image of the initial design for the base, with a placeholder arm to assess the scale.