MicroMind Autonomous – Week 13 – Final Submission


Information:
Since this is our last submission, we are publishing our final contributions for the semester. Our GitHub repository contains all of our code and shows everything we have managed to do so far.


GitHub repository:
https://github.com/Joffeey/MicroMind_Autonomous

MicroMouse Simulator:
https://github.com/mackorone/mms

Our MicroMouse Simulator files: https://github.com/Joffeey/MicroMind_Autonomous/tree/main/Simulation

Simulation video:
https://www.youtube.com/watch?v=iXtiFru1XSw&t=1s&ab_channel=Joffe


Ilir Bylykbashi
Summary:

As we approach the final week before submission, our focus has intensified on developing a fully operational robot capable of moving forward, backward, and sideways based on input from the sensors surrounding the Micro-Mouse. To prioritize the programming tasks, John and I have identified the key functionalities for our mouse.

This week, my primary focus has been on implementing a mechanism for the robot to determine the most effective way to rotate. This involves integrating a small device that provides precise readings of the wheel rotations, ensuring the collected data is accurate. To achieve this, we have opted to incorporate a Hall sensor into our project.

To implement the Hall sensor functionality, we've written code within our state machine to handle the various scenarios that may arise during operation. Below is a brief snippet from our statemachine.cpp file; the complete file is available in our GitHub repository. This effort is crucial as we strive to meet our submission deadline with a well-functioning, responsive robot.
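As an illustration of the structure, here is a minimal sketch of such a state machine. The state names and helper functions (driveForward(), wallAhead(), turnLeft90(), stopMotors()) are placeholders, not the exact contents of statemachine.cpp:

// Minimal state machine sketch (Arduino-style C++).
enum class State { Idle, Forward, TurnLeft, Stopped };
State state = State::Idle;

void updateStateMachine() {
  switch (state) {
    case State::Forward:
      driveForward();                        // keep both motors running
      if (wallAhead()) state = State::Stopped;
      break;
    case State::TurnLeft:
      turnLeft90();                          // rotate using Hall-sensor counts
      state = State::Forward;
      break;
    case State::Stopped:
      stopMotors();                          // hold until the next command
      break;
    default:
      break;                                 // Idle: wait for a start signal
  }
}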

In our programming approach, we’ve attempted to anticipate and address various scenarios by creating specific cases.

One notable example is the scenario where the robot is instructed to move forward. In this case, the command simply sets both motors to drive. The relevant code is outlined below:
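A minimal sketch of that command, assuming an Arduino-style motor driver with one direction pin and one PWM pin per motor (the pin numbers and duty cycle are placeholders):

const int LEFT_DIR = 4, LEFT_PWM = 5;        // placeholder pin numbers
const int RIGHT_DIR = 7, RIGHT_PWM = 6;

void driveForward() {
  digitalWrite(LEFT_DIR, HIGH);              // both wheels forward
  digitalWrite(RIGHT_DIR, HIGH);
  analogWrite(LEFT_PWM, 150);                // placeholder duty cycle
  analogWrite(RIGHT_PWM, 150);
}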

If the robot meets a wall, it must stop. This is the code that makes it stop:
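In sketch form, reusing the pin names and state machine from the snippets above, and assuming a hypothetical frontDistanceMm() helper that wraps the front distance sensor:

const int WALL_THRESHOLD_MM = 50;            // placeholder stopping distance

void checkFrontWall() {
  if (frontDistanceMm() < WALL_THRESHOLD_MM) {
    analogWrite(LEFT_PWM, 0);                // cut power to both motors
    analogWrite(RIGHT_PWM, 0);
    state = State::Stopped;
  }
}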

These are small pieces of code, but what makes it more difficult is turning exactly as much as is needed. This is our code for the movement to the left:
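A sketch of the idea: count Hall-sensor pulses in an interrupt so we always know how far the wheels have rotated, and keep turning until the count corresponding to a 90-degree turn is reached. The pulse count and pin numbers are placeholders that would have to be calibrated on the robot:

const int HALL_PIN = 2;                      // placeholder interrupt-capable pin
const int TURN_SPEED = 120;                  // placeholder duty cycle
const long PULSES_PER_90_DEG = 140;          // placeholder: calibrate on the robot

volatile long hallPulses = 0;

// Attached in setup() with:
// attachInterrupt(digitalPinToInterrupt(HALL_PIN), onHallPulse, RISING);
void onHallPulse() { hallPulses++; }

void turnLeft90() {
  noInterrupts();
  hallPulses = 0;                            // reset the pulse counter
  interrupts();

  digitalWrite(LEFT_DIR, LOW);               // left wheel backward,
  digitalWrite(RIGHT_DIR, HIGH);             // right wheel forward: spin left
  analogWrite(LEFT_PWM, TURN_SPEED);
  analogWrite(RIGHT_PWM, TURN_SPEED);

  while (hallPulses < PULSES_PER_90_DEG) {
    // busy-wait until the wheel has rotated far enough
  }

  analogWrite(LEFT_PWM, 0);                  // stop once the turn is complete
  analogWrite(RIGHT_PWM, 0);
}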

Right away, you can see that this code is larger than the others. Our aim is to create flexible code that works smoothly with the sensor data. We are using a Hall sensor to keep track of the wheel position so that the robot turns accurately, whether left or right.

The code is not final yet. We still need to make changes so that the Hall sensor works seamlessly with the other sensors in tracking how the robot moves. Our main focus is on nailing down the algorithm and integrating it with the robot.

For the sections of the code that haven't been tested on the robot, we've commented them out to avoid disrupting the working code. Due to time constraints and the lack of a functional robot to test with, we have so far incorporated only one Hall sensor.

Furthermore, John and I collaborated on simulating how our robot would operate with an algorithm in place. This involved extensive work with the MicroMouse simulator (mms). Given the current absence of a functional robot, our emphasis is on crafting a simulation that illustrates how we envision our robot navigating the maze in an ideal environment. The simulator serves as the platform for coding and executing the desired behavior of our robot.

Challenges this week:

This week posed significant challenges for us. Getting everything sorted for the robot has thrown various obstacles our way: dealing with bugs and losing some team members has slowed us down. We have tried to add everything necessary for the project to succeed, but it has taken more time than we initially thought.

John and I have dedicated a lot of time to the electrical engineering side, dealing with sensors and physical parts. Even though our main focus is on computer science, tackling these challenges was necessary for our team’s progress. 

Additionally, we have mainly worked with the computer engineering aspect of this project. This two-sided perspective has made it difficult to put all our effort into one aspect, which is why we will try our best to get a running robot in the time remaining. However, if this is not possible, we will be ready to show the code we would have implemented, and preferably to showcase a simulation on the presentation date.


John Frederick Lærum
Summary:

This week, our main focus has been on completing as much as possible of the radio communication and the maze-solving algorithm. Unfortunately, we won't be able to demonstrate an algorithm that can be uploaded and run directly on our robot. Since certain aspects took more time than anticipated, we instead aim to showcase a simulated algorithm that could be implemented for maze-solving.

To deliver as much functionality as possible and showcase our skills in this last week, we moved our focus from troubleshooting and working on hardware to concentrating on software development.

For the simulation of the algorithm we've used mms, the Micromouse simulator (https://github.com/mackorone/mms), and for inspiration we've looked at the Micromouse project One2Remember has published on GitHub (https://github.com/One2Remember/Micromouse).

As with a lot of things, these tasks tend to take more time than one imagines at first glance. We've made great progress on both the radio communication and the simulation of the algorithm, but neither is finished. By the time this blog post is published, chances are the simulation will still be incomplete. We are working hard to showcase it in this blog post, but if we can't, we will include it in the final presentation on Monday the 29th.

While working on the simulation, I found it helpful to create a chart visualizing how the orientation of the mouse affects what the different “sensors” read.
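The idea behind the chart, expressed in code: the simulated mouse only knows walls relative to itself (front, left, right), so we translate those readings into absolute compass directions based on the current heading. The 0-3 encoding below is our own assumption, not something fixed by the simulator:

// Heading encoding: 0 = North, 1 = East, 2 = South, 3 = West (assumed).
int heading = 0;

int frontDir() { return heading; }             // front sensor looks along heading
int rightDir() { return (heading + 1) % 4; }   // one step clockwise
int leftDir()  { return (heading + 3) % 4; }   // one step counter-clockwise

// Example: heading East (1) means the front sensor reads the east wall,
// the right sensor the south wall, and the left sensor the north wall.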

The code for the simulation is quite large, so we cannot include much of it here, but all of it will be available in our GitHub repository.

The simulation and its files will also be available on our GitHub, showing how we went about implementing the logic to solve mazes. Here is a short video of the simulation running on the code Ilir and I have written so far:
https://www.youtube.com/watch?v=iXtiFru1XSw&t=1s&ab_channel=Joffe

In this video, we can see how the mouse tries to solve the maze. Throughout the run, it checks which walls surround the current cell. It then inserts the reachable neighbor cell(s) into our queue system; if a wall blocks a direction, that route is not queued.

After that, it works through the queue, moving forward, checking walls, and queueing again. We check the position of the mouse before and after making a move; if the position is (7, 7), (7, 8), (8, 7), or (8, 8), the mouse announces that it has reached the goal. We still have to implement backtracking so that it can eventually run that route as a speed run.
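In simplified form, the exploration loop looks roughly like this. This is only a sketch: hasWall() is a stub standing in for the simulator's wall queries (API::wallFront() and friends in the mms starter code), and the real code in our Simulation folder moves the mouse between cells rather than jumping:

#include <iostream>
#include <queue>
#include <utility>

// Stub: replace with real wall reads from the simulator API.
bool hasWall(int, int, int) { return false; }

bool isGoal(int x, int y) {
  // The goal is the 2x2 centre of the 16x16 maze.
  return (x == 7 || x == 8) && (y == 7 || y == 8);
}

void explore(int startX, int startY) {
  static const int dx[4] = {0, 1, 0, -1};      // N, E, S, W
  static const int dy[4] = {1, 0, -1, 0};
  bool visited[16][16] = {};

  std::queue<std::pair<int, int>> frontier;
  frontier.push({startX, startY});
  visited[startX][startY] = true;

  while (!frontier.empty()) {
    auto [x, y] = frontier.front();
    frontier.pop();

    if (isGoal(x, y)) {                        // goal check on every cell
      std::cout << "Goal reached at (" << x << ", " << y << ")\n";
      return;                                  // backtracking would start here
    }

    for (int dir = 0; dir < 4; ++dir) {        // queue reachable neighbors
      int nx = x + dx[dir], ny = y + dy[dir];
      if (nx < 0 || nx > 15 || ny < 0 || ny > 15) continue;
      if (hasWall(x, y, dir) || visited[nx][ny]) continue;
      visited[nx][ny] = true;
      frontier.push({nx, ny});
    }
  }
}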

As we can see in our attempt, the algorithm is not yet finished. However, the attempt is not hardcoded: the agent moves randomly and is dependent on the maze layout. Running the simulation several times gives different outcomes before the mouse ends up stuck in a loop.

For debugging purposes, we print out information that helps us figure out how to make the code run properly; this output can be seen on the right side of the program.

Below is a screenshot of an attempt at solving the maze, taken from the video linked above:

Another attempt at solving the maze, with a different outcome while using the same code and maze:


Thomas Frimann

Summary: 

(Awaiting submission)


Daniels Blomnieks

Summary:

(Awaiting submission)

