This robot had to navigate an obstacle course and manipulate objects. To do this, I mounted line sensors on the base to follow a simulated course that led to objects marked with QR codes. The robot then had to use its arm to carry each object back along the path to a station, where it would hold the object in a set position so that a QR code reader could see it. The reader would then use an LED to tell the robot whether the object was "dangerous" or "safe", and the robot had to place dangerous objects in one pile and safe ones in another.
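The sorting decision at the station can be sketched as a simple mapping from the LED signal to a drop-off pile. This is a hypothetical illustration, not the code from the original build: the convention that a lit LED means "dangerous", and the pile names, are assumptions.

```python
# Hypothetical sketch: the QR-reader station signals via an LED
# (assumed here: lit = "dangerous", off = "safe"), and the robot
# picks a drop-off pile accordingly. Pile names are illustrative.

DANGEROUS_PILE = "left"
SAFE_PILE = "right"

def choose_pile(led_is_lit: bool) -> str:
    """Map the station's LED state to a drop-off pile."""
    return DANGEROUS_PILE if led_is_lit else SAFE_PILE
```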
The largest design issue when building this robot was ensuring that it could properly reach and work with the "sensors" while still fitting through some of the tighter areas of the course. To accomplish this, we designed an arm, operated through a series of chains, that could fold in on itself. We also added encoders to each of the front wheels so that we could accurately measure the distances and turns we made. Finally, we mounted the line sensors in the center gap: one on the left side, one on the right side, and one directly over the line's expected location. This configuration gave us maximum accuracy when determining which way the robot needed to turn at an intersection.
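The three-sensor arrangement and the wheel encoders lend themselves to a short sketch. This is a minimal illustration under stated assumptions, not the original firmware: the sensors are assumed to return True when over the dark line, and the encoder resolution and wheel diameter are placeholder values.

```python
# Minimal sketch of the two sensing ideas above. Assumptions:
# sensors return True over the dark line; COUNTS_PER_REV and
# WHEEL_DIAMETER_CM are illustrative, not the robot's real specs.
import math

COUNTS_PER_REV = 360       # assumed encoder resolution
WHEEL_DIAMETER_CM = 5.0    # assumed wheel size

def steer(left_on_line: bool, center_on_line: bool, right_on_line: bool) -> str:
    """Decide a correction from the three line sensors: the center
    sensor rides the line, the side sensors catch drift or branches."""
    if center_on_line and not (left_on_line or right_on_line):
        return "straight"
    if left_on_line:
        return "turn_left"   # line appeared on the left, steer toward it
    if right_on_line:
        return "turn_right"
    return "search"          # lost the line entirely

def encoder_distance_cm(counts: int) -> float:
    """Convert encoder counts on a front wheel to distance traveled."""
    revolutions = counts / COUNTS_PER_REV
    return revolutions * math.pi * WHEEL_DIAMETER_CM
```

At an intersection, both side sensors can trigger at once; a fuller version would branch on the planned route rather than always turning toward the first side seen.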
With the materials we had on hand, we were forced to mount a motor at the wrist to close the gripper on the robot's arm. Unfortunately, this significantly increased the torque required to keep the arm fully extended, and to solve it we had to dramatically change the gearing at the base of the arm. When I first tried raising the arm with only one gear, the force caused the gear to slip considerably; to stop this, I mounted two gears adjacent to each other on the same axle, spreading the load enough to prevent slipping. The final sensor we used can be seen attached to the large 64-tooth gears: a potentiometer mounted on the axle that the arm rotated around. By reading its output values we could determine the arm's current position, using the starting point as a reference. By the time we had to run our robot, it was able to complete the course, although, due to technical limitations of the microprocessor we were forced to use, the robot could only perform each task individually rather than all of them in one run.
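The potentiometer readout described above reduces to converting an ADC reading into degrees relative to the reading captured at startup. A minimal sketch, assuming a 10-bit ADC and a 270-degree potentiometer sweep (both placeholder values; the real scale would come from calibrating against the arm's geometry):

```python
# Sketch of reading arm position from a potentiometer on the arm's
# axle. ADC_MAX and POT_SWEEP_DEGREES are assumed values, not the
# original hardware's specs.

ADC_MAX = 1023               # assumed 10-bit ADC
POT_SWEEP_DEGREES = 270.0    # assumed potentiometer travel

def arm_angle(reading: int, start_reading: int) -> float:
    """Arm rotation in degrees relative to its starting position."""
    degrees_per_count = POT_SWEEP_DEGREES / ADC_MAX
    return (reading - start_reading) * degrees_per_count
```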