Thursday, February 21, 2013

Robotics 2: First Demo

For my Robotics 2 class, my project is to have an ST Robotics R17 arm take a picture of you and draw it. Tomorrow is our first demonstration, so as a first milestone, we fed it a picture of a circle, which it then scaled and drew on paper. (See below for how it works.)



Unfortunately, the arm thinks in Forth, an ugly, archaic programming language from the 1970s that uses reverse Polish notation for basically everything. (If you've never heard of RPN, look it up on Wikipedia and prepare to chuckle.) We don't really want to do ALL of our work in Forth; luckily, there exists a reasonably pleasant LabVIEW wrapper that lets us pass Forth commands to the robot. That way, we can do all the thinking in a less stupid language.

First, the image is processed a little bit. We start off by reducing it to a binary image rather than one with all those confusing colors - this isn't very hard, since our test image is a black-and-white circle. We then find all the contours - also easy, since there's only one of them - and approximate the contour as a whole bunch of points, each not too far from its neighbors. Then we can just output to LabVIEW a whole bunch of (x,y) coordinates - the vertices we want to hit. These (x,y) coordinates are scaled so that we draw at the size of our paper rather than the size of our computer screen.
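The real pipeline lives in LabVIEW's vision tools, but for the curious, here's roughly the same idea sketched in Python with OpenCV. The filename, threshold, approximation tolerance, and paper size are all just illustrative stand-ins:

```python
import cv2

# Load the test image (a black circle on white paper) as grayscale.
img = cv2.imread("circle.png", cv2.IMREAD_GRAYSCALE)

# Reduce it to a binary image: anything darker than the threshold counts
# as "ink", everything else is background.
_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY_INV)

# Find the contours of the inked regions (just the one, for our circle).
# (OpenCV 4 return signature; older versions return an extra value.)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Approximate the contour as a polygon: a list of vertices that stay
# within a couple of pixels of the true curve.
vertices = cv2.approxPolyDP(contours[0], 2.0, True)

# Scale from screen pixels to paper millimeters before handing the
# (x, y) list off to the robot.
PAPER_WIDTH_MM = 200.0
scale = PAPER_WIDTH_MM / img.shape[1]
points = [(float(x) * scale, float(y) * scale) for x, y in vertices.reshape(-1, 2)]
```

The only tunable that really matters is the approximation tolerance: too loose and the circle comes out as an octagon, too tight and the robot gets an absurd number of waypoints to hit.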

Here's where the Forth comes in, and it's a little ugly. We have to create what's called a "route" in Forth, which is just a whole bunch of points in a list. There's a shockingly large amount of obnoxious syntax that goes into this, but that's not very interesting to read about. Just know that it sucks.
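I won't subject you to the genuine article, but as a very rough sketch (in Python, since the Forth is best left unseen), building the route boils down to generating one command string per vertex. Only ROUTE itself is a real RoboForth word below; the per-point syntax is a placeholder for the real, more obnoxious incantation:

```python
def route_commands(name, points, z_pen=0.0):
    """Build the list of Forth command strings that define a route called
    `name` out of our (x, y) paper coordinates. Everything after ROUTE is
    paraphrased, not actual RoboForth."""
    cmds = [f"ROUTE {name}"]
    for x, y in points:
        # Stand-in for whatever actually appends a Cartesian point to the
        # route: x, y, and a fixed pen-height z.
        cmds.append(f"{x:.1f} {y:.1f} {z_pen:.1f} <append-route-point>")
    return cmds
```

In practice these strings get shoved through the LabVIEW wrapper one at a time.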

Once the route has been made, we're ready to send commands to the robot. The arm calibrates and goes home so that it has somewhere to start, then turns its wrist down so that it's ready to write. Finally, we send it along the path we told it to follow. It automatically smooths out the corners, which is very polite of it. (Thank you kindly, Robot.) That's as far as we've gotten, but expect more status updates later.
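For flavor, here's an approximate picture of that conversation, sketched in Python over a serial port rather than through the LabVIEW wrapper we actually use. The port, baud rate, route name, and exact command words are from memory and should be treated as assumptions, not documentation:

```python
import serial

# The controller talks over RS-232; port and baud rate are whatever your setup uses.
robot = serial.Serial("COM3", 19200, timeout=5)

def send(cmd):
    # Each command goes out as a line of text; we read back one line as a
    # crude acknowledgment (real error checking omitted).
    robot.write((cmd + "\r").encode("ascii"))
    return robot.read_until(b"\n")

send("CALIBRATE")   # find the home sensors so the arm knows where it is
send("HOME")        # go to the home position as a starting point
# (wrist-down step omitted; we won't vouch for the exact incantation here)
send("CIRCLE RUN")  # run the route we built above; the corners get smoothed for us
```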

Next steps include having it draw more interesting shapes, detect multiple contours, use different colors, and shade in or cross-hatch regions.

And would you believe my team thinks Robotticelli is a dumb name?