
It uses a 3D-printed arm built from freely available designs, together with my own code to recognise the human's moves and control the arm.

I wanted to build a chess robot that could play and beat me. I had previously made one using a commercial kit (the AL5D), but that kit is quite expensive, so I decided to 3D-print a robot arm and rewrite my code for it.

It works like this:

The human, playing white, makes a move. This is detected by the visual recognition system. The robot then ponders and makes its move.

And so on…

Perhaps the most novel thing in this robot is the code for move recognition.

Because the human’s move is recognised by a vision system, no special chessboard hardware (such as reed switches) is needed.

A USB camera is mounted directly above the chessboard.

Most of the code runs on a Raspberry Pi, with Arduino code for inverse kinematics and stepper motor control.

The Hardware Build
The 3D printer files for the robot are freely available as specified in the links under “additional contributors”. The Tobler arm is modified with the longer arm components and mini-gripper. Tobler gives a great description of hardware, Arduino software and robot assembly.

Tobler also refers to a community site, and there are some great diagrams there (linked here under “Schematics”), but be aware that that hardware build is for a slightly different robot arm with a belt drive.

So, we have a Raspberry Pi connected to an Arduino via a printer cable. The Arduino has a Ramps 1.4 board sitting on it to drive the motors via A4988 motor driver boards.

The stepper motors give very high precision and repeatability.

The Software Which Moves The Robot
All the Raspberry Pi code is written in Python 3.

We then have code which will move pieces, take pieces, castle, handle en passant, and so on.
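As a rough illustration of what “moving” and “taking” involve for the arm, here is a minimal Python sketch. The `square_to_xy` mapping, `plan_move` helper and the 40 mm square size are hypothetical choices of mine, not the project's actual interface:

```python
# Sketch: translating a chess move into a sequence of arm actions.
# square_to_xy() and plan_move() are illustrative, not the real code.

FILES = "abcdefgh"

def square_to_xy(square, square_mm=40.0):
    """Map a square name like 'e2' to board coordinates in millimetres."""
    file_idx = FILES.index(square[0])
    rank_idx = int(square[1]) - 1
    return (file_idx * square_mm, rank_idx * square_mm)

def plan_move(frm, to, is_capture=False):
    """Return the pick/place steps the arm must execute for one move."""
    steps = []
    if is_capture:
        # The captured piece must be cleared off the board first.
        steps += [("pick", square_to_xy(to)), ("place", "off_board")]
    steps += [("pick", square_to_xy(frm)), ("place", square_to_xy(to))]
    return steps
```

Castling would simply be planned as two such pick/place pairs, one for the king and one for the rook.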

The chess engine is Stockfish, which can beat any human: “Stockfish is one of the strongest chess engines in the world. It is also much stronger than the best human chess grandmasters.”

I use existing third-party code to validate the human’s move and to interact with Stockfish. My code for recognising the human’s move and moving the robot arm interfaces with that.

On the Arduino, inverse-kinematics code (available from Tobler) moves the various motors so that chess pieces can be picked up and placed correctly.
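Tobler's inverse-kinematics code runs on the Arduino; purely as an illustration of the idea, here is the standard law-of-cosines solution for a planar two-link arm sketched in Python. The 120 mm link lengths are placeholder values, not the real arm's dimensions:

```python
import math

def ik_two_link(x, y, l1=120.0, l2=120.0):
    """Inverse kinematics for a planar two-link arm (lengths in mm).
    Returns (shoulder, elbow) joint angles in radians."""
    r2 = x * x + y * y
    # Law of cosines gives the elbow angle from the target distance.
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(c2)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

def fk_two_link(shoulder, elbow, l1=120.0, l2=120.0):
    """Forward kinematics, used here to sanity-check the IK solution."""
    x = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    y = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return x, y
```

Running the forward kinematics on the angles returned by `ik_two_link` should reproduce the original target point, which is a useful self-check when calibrating.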

The Software for Recognising the Human’s Move
After the player has made their move, the camera takes a photo. The code crops and rotates this so that the chessboard exactly fits the subsequent image. The chessboard squares need to look square! There can be distortion in the image because the edges of the board are further away from the camera than the centre of the board is; however, the camera is far enough away that, after the code crops the image, this distortion is not significant.

Because the robot knows where all the pieces are after the computer’s move, all the code has to do after the human makes a move is tell the difference between the following three cases:

- An empty square
- A black piece of any kind
- A white piece of any kind

This covers all cases, including castling and en passant.
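For a simple move or capture, the before/after comparison can be sketched as follows. The function and the 'E'/'W'/'B' square encoding are my illustrative choices, not the project's actual code:

```python
# Sketch: inferring the human's (white's) move from the before/after
# square classification: 'E' = empty, 'W' = white piece, 'B' = black.
# Handles ordinary moves and captures; castling shows two vacated and
# two newly occupied squares and would need an extra case.

def infer_white_move(before, after):
    """before/after: dicts mapping square name -> 'E', 'W' or 'B'."""
    frm = [s for s in before
           if before[s] == "W" and after[s] == "E"]
    to = [s for s in before
          if after[s] == "W" and before[s] != "W"]
    if len(frm) == 1 and len(to) == 1:
        return frm[0] + to[0]          # e.g. "e2e4"
    raise ValueError("not a simple move (castling or en passant?)")
```

A capture looks exactly like an ordinary move here: the destination square simply changes from 'B' to 'W'.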

The robot checks that the human’s move is legal, and informs them if it isn’t! The only case not covered is where the human player promotes a pawn to something other than a queen; in that case the player has to tell the robot what the promoted piece is.

We can now consider the image in terms of chessboard squares.

On the initial board set-up we know where all the white and black pieces are and where the empty squares are.

Empty squares show much less colour variation than occupied squares. For each square we compute the standard deviation of each of the three RGB channels across all its pixels (excluding those near the borders of the square). The maximum standard deviation for any empty square is much less than the minimum standard deviation for any occupied square, and this allows us, after a subsequent player move, to determine which squares are empty.
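A minimal NumPy sketch of this per-square standard-deviation test (the function name, the 4-pixel margin and the assumption that the cropped image covers exactly the 8x8 board are mine):

```python
import numpy as np

def square_stddevs(board_img, margin=4):
    """board_img: H x W x 3 array covering exactly the 8x8 board.
    Returns an 8x8 array holding, for each square, the largest of the
    three per-channel standard deviations, ignoring a border of
    `margin` pixels around each square."""
    h, w = board_img.shape[0] // 8, board_img.shape[1] // 8
    out = np.zeros((8, 8))
    for r in range(8):
        for c in range(8):
            sq = board_img[r*h+margin:(r+1)*h-margin,
                           c*w+margin:(c+1)*w-margin]
            # std over pixels, computed separately per RGB channel
            out[r, c] = sq.reshape(-1, 3).std(axis=0).max()
    return out
```

Squares whose value falls below the calibrated threshold are classified as empty; a square containing a piece mixes piece and board colours, so its standard deviation is far higher.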

Having determined the threshold value for empty versus occupied squares, we now need to determine the piece colour for occupied squares:

On the initial board we calculate, for each occupied square, the mean (average) value of its pixels for each of R, G and B (again excluding pixels near the borders of the square). The minimum of these means over squares holding white pieces is greater than the maximum over squares holding black pieces, so a simple threshold determines the piece colour for any occupied square. As stated previously, this is all we need to do in order to determine what the human player’s move was.
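The mean-based colour test can be sketched in the same way (the names, the choice of keeping the smallest channel mean, and the threshold are illustrative, not the project's actual code):

```python
import numpy as np

def square_means(board_img, margin=4):
    """board_img: H x W x 3 array covering exactly the 8x8 board.
    Returns an 8x8 array holding, for each square, the smallest of the
    three per-channel pixel means (borders excluded). Off-white pieces
    score high on every channel; matt black pieces score low."""
    h, w = board_img.shape[0] // 8, board_img.shape[1] // 8
    out = np.zeros((8, 8))
    for r in range(8):
        for c in range(8):
            sq = board_img[r*h+margin:(r+1)*h-margin,
                           c*w+margin:(c+1)*w-margin]
            out[r, c] = sq.reshape(-1, 3).mean(axis=0).min()
    return out

def piece_colour(mean_value, threshold):
    """Classify an occupied square as white or black by brightness."""
    return "W" if mean_value > threshold else "B"
```

The threshold sits anywhere in the gap between the darkest white-piece mean and the brightest black-piece mean measured on the initial board.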

Other considerations
The algorithms work best if the chessboard’s colours are far from the colours of the pieces! In my robot, the pieces are off-white and matt black, and the chessboard is hand-made using a colour printer onto thin card. It can be seen in the video.

The chessboard should be well lit and evenly lit, with minimal shadows from the chess pieces. Light should not be reflected back into the camera from the board or pieces. A sturdy table is needed.

My code contains routines for calibrating the camera and robot.
