# Solving the Sliding Puzzle

The sliding puzzle is a game composed of n² - 1 pieces spread across an n × n board, plus a blank space. Those pieces are shuffled, and the objective is to rearrange them back to their original configuration, where the pieces are disposed in ascending order, as shown below:

You can rearrange the pieces by “moving” the blank space across the board. Since you can only move it in four directions, it’s a hell of a task for a human to solve this game, sometimes taking hours. Luckily, we have some good algorithms at our disposal that solve it in only a few milliseconds, even in the worst case. Let’s explore them in this tutorial! :)

## Finding the correct abstraction

The hardest part of a problem is surely finding a useful abstraction for it, one that even allows a solution to be conceived! Like most pathfinding problems, the sliding puzzle can be correctly abstracted as a graph, i.e., a set of vertices connected by edges.

It’s common to use the term “state” to designate a vertex. What a state means depends on the problem: for the sliding puzzle, each state is a particular disposition of the pieces. Logically, there’s also a “goal state”, the state where the problem is solved. Finally, the edges are the allowed actions that take our problem from one state to another. For the sliding puzzle, the set of allowed actions is moving the blank space in four directions (up, down, left, right). The figure below illustrates those concepts well.

Once those concepts are assimilated, our job is simply to find a path from any state to the goal state, and that can be done with any graph search algorithm. Let’s discuss the pros and cons of some approaches.

## JavaScript implementation of the Sliding Puzzle

Before discussing specific algorithms, let’s implement the building blocks. I’ll start with a class called “Puzzle”, containing basically four attributes: the dimension of our puzzle (i.e., 3 = 8-puzzle, 4 = 15-puzzle, and so on), the board (a two-dimensional numeric array), the path, and the last performed move (we will cover those last two later).
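A minimal sketch of such a class; only the four attributes are described in the text, so the constructor details (in particular, filling the board in ascending order with 0 as the blank) are assumptions:

```javascript
class Puzzle {
  constructor(dimension) {
    this.dimension = dimension;   // 3 = 8-puzzle, 4 = 15-puzzle, ...
    this.board = [];              // two-dimensional numeric array
    this.path = [];               // pieces moved so far (covered later)
    this.lastMove = null;         // last performed move (covered later)
    // Fill the board in ascending order; 0 represents the blank space.
    for (let i = 0; i < dimension; i++) {
      const row = [];
      for (let j = 0; j < dimension; j++) {
        const piece = i * dimension + j + 1;
        row.push(piece === dimension * dimension ? 0 : piece);
      }
      this.board.push(row);
    }
  }
}
```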

Let’s create some utility methods that will help during the crafting of our solution:
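A sketch of two such helpers, written as standalone functions over a Puzzle-like object so the snippet stands alone; “getMove” is the name the text uses later, while the other name and both signatures are assumptions:

```javascript
// Position [row, column] of a given piece on the board (0 = blank).
function getPiecePosition(puzzle, piece) {
  for (let i = 0; i < puzzle.dimension; i++)
    for (let j = 0; j < puzzle.dimension; j++)
      if (puzzle.board[i][j] === piece) return [i, j];
  return null;
}

// "getMove": the pieces currently adjacent to the blank space,
// i.e. the moves allowed in this state.
function getMove(puzzle) {
  const [bi, bj] = getPiecePosition(puzzle, 0);
  const moves = [];
  for (const [di, dj] of [[-1, 0], [1, 0], [0, -1], [0, 1]]) {
    const i = bi + di, j = bj + dj;
    if (i >= 0 && i < puzzle.dimension && j >= 0 && j < puzzle.dimension)
      moves.push(puzzle.board[i][j]);
  }
  return moves;
}
```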

And finally, let’s implement the methods that will move the pieces:
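A sketch of the movement logic: moving a piece means swapping it with the adjacent blank space. The name and the path/lastMove bookkeeping are assumptions, and the position helper from the previous snippet is repeated so this one runs standalone:

```javascript
// (repeated so this snippet runs standalone)
function getPiecePosition(puzzle, piece) {
  for (let i = 0; i < puzzle.dimension; i++)
    for (let j = 0; j < puzzle.dimension; j++)
      if (puzzle.board[i][j] === piece) return [i, j];
  return null;
}

// Swap the given piece with the blank; assumes the caller only passes
// a piece adjacent to the blank space.
function movePiece(puzzle, piece) {
  const [pi, pj] = getPiecePosition(puzzle, piece);
  const [bi, bj] = getPiecePosition(puzzle, 0);
  puzzle.board[bi][bj] = piece;
  puzzle.board[pi][pj] = 0;
  puzzle.path.push(piece);   // remember the move, so the solution can be replayed
  puzzle.lastMove = piece;
}
```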

## Breadth-First Search (BFS)

The most well-known graph search algorithm, along with Depth-First Search (DFS). I believe you are already familiar with it. It works quite simply: for each visited node, its immediate children are stored in a queue, and the process is repeated until the queue is empty or a goal state is reached, that way traversing the graph level by level.

In order to implement the BFS, we are going to need two methods first: “isGoalState” and “visit”. The “isGoalState” method will check whether the current state is a solution to the puzzle, while “visit” will generate the immediate children of the current state in the state space.

Let’s start with “isGoalState”. Well, it’s kinda simple: we are in a goal state if all pieces are in their places. The original place of a piece can be defined as `[(piece - 1) % dimension, Math.floor((piece - 1) / dimension)]`, i.e., its column and row (note the integer division). Let’s take some examples to check if this formula makes sense:
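A quick check of the formula on a 3×3 board (the helper name is hypothetical):

```javascript
// Home position of a piece, as [column, row].
function homePosition(piece, dimension) {
  return [(piece - 1) % dimension, Math.floor((piece - 1) / dimension)];
}

console.log(homePosition(1, 3)); // [0, 0]: top-left corner
console.log(homePosition(5, 3)); // [1, 1]: center
console.log(homePosition(8, 3)); // [1, 2]: middle column, bottom row
```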

Seems correct so far. That way our method is as follows:
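A sketch of what “isGoalState” might look like, applying the formula above to every cell:

```javascript
// True when every piece sits at its home position; the blank is skipped,
// since if every piece is home, the blank is necessarily home too.
function isGoalState(puzzle) {
  const d = puzzle.dimension;
  for (let i = 0; i < d; i++)
    for (let j = 0; j < d; j++) {
      const piece = puzzle.board[i][j];
      if (piece === 0) continue;
      if (i !== Math.floor((piece - 1) / d) || j !== (piece - 1) % d)
        return false;
    }
  return true;
}
```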

For the “visit” method, first we need to know all the moves allowed in a certain state. For example, if the blank space is at the bottom-left of the board, we can’t move it down or left. Luckily, this functionality is already implemented through the “getMove” method described in the previous section.

But knowing the allowed moves is not enough. We need to generate new states. In order to do that, we are going to need a utility method that makes a copy of the current state:
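A sketch of such a copy helper (the name “getCopy” is an assumption):

```javascript
// Duplicate the board AND the path, so each child state carries
// its own independent history.
function getCopy(puzzle) {
  return {
    dimension: puzzle.dimension,
    board: puzzle.board.map(row => row.slice()), // per-row copy of the 2D array
    path: puzzle.path.slice(),
    lastMove: puzzle.lastMove,
  };
}
```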

Notice that we are not just copying the board, but also the path. “path” is an attribute that stores the pieces moved so far, allowing us to replay the whole process once a goal state is found.

And now we can finally implement it:
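A sketch of “visit”, with the helpers from the previous sections repeated inline so the snippet runs standalone (all names besides “getMove” are assumptions):

```javascript
function getPiecePosition(puzzle, piece) {
  for (let i = 0; i < puzzle.dimension; i++)
    for (let j = 0; j < puzzle.dimension; j++)
      if (puzzle.board[i][j] === piece) return [i, j];
  return null;
}

function getMove(puzzle) {
  const [bi, bj] = getPiecePosition(puzzle, 0);
  const moves = [];
  for (const [di, dj] of [[-1, 0], [1, 0], [0, -1], [0, 1]]) {
    const i = bi + di, j = bj + dj;
    if (i >= 0 && i < puzzle.dimension && j >= 0 && j < puzzle.dimension)
      moves.push(puzzle.board[i][j]);
  }
  return moves;
}

function getCopy(puzzle) {
  return {
    dimension: puzzle.dimension,
    board: puzzle.board.map(row => row.slice()),
    path: puzzle.path.slice(),
    lastMove: puzzle.lastMove,
  };
}

function movePiece(puzzle, piece) {
  const [pi, pj] = getPiecePosition(puzzle, piece);
  const [bi, bj] = getPiecePosition(puzzle, 0);
  puzzle.board[bi][bj] = piece;
  puzzle.board[pi][pj] = 0;
  puzzle.path.push(piece);
  puzzle.lastMove = piece;
}

// One child per allowed move, skipping the move that would undo lastMove.
function visit(puzzle) {
  const children = [];
  for (const piece of getMove(puzzle)) {
    if (piece === puzzle.lastMove) continue; // prune the "undo" move
    const child = getCopy(puzzle);
    movePiece(child, piece);
    children.push(child);
  }
  return children;
}
```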

Notice that we are ignoring moves that are equal to “lastMove”. There’s a reason behind it: moving a piece that was already moved on the last turn would only send it back to its original position! In other words, we’d be going back to a state that was already explored. That’s why we “prune” the tree, avoiding this kind of behavior.

*sigh* After all those necessary preparations, we are now ready to finally implement the BFS algorithm, which is ridiculously simple:
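A sketch of what that BFS might look like; the helpers from the previous sections are repeated inline so the snippet runs standalone:

```javascript
function getPiecePosition(puzzle, piece) {
  for (let i = 0; i < puzzle.dimension; i++)
    for (let j = 0; j < puzzle.dimension; j++)
      if (puzzle.board[i][j] === piece) return [i, j];
  return null;
}

function getMove(puzzle) {
  const [bi, bj] = getPiecePosition(puzzle, 0);
  const moves = [];
  for (const [di, dj] of [[-1, 0], [1, 0], [0, -1], [0, 1]]) {
    const i = bi + di, j = bj + dj;
    if (i >= 0 && i < puzzle.dimension && j >= 0 && j < puzzle.dimension)
      moves.push(puzzle.board[i][j]);
  }
  return moves;
}

function getCopy(puzzle) {
  return {
    dimension: puzzle.dimension,
    board: puzzle.board.map(row => row.slice()),
    path: puzzle.path.slice(),
    lastMove: puzzle.lastMove,
  };
}

function movePiece(puzzle, piece) {
  const [pi, pj] = getPiecePosition(puzzle, piece);
  const [bi, bj] = getPiecePosition(puzzle, 0);
  puzzle.board[bi][bj] = piece;
  puzzle.board[pi][pj] = 0;
  puzzle.path.push(piece);
  puzzle.lastMove = piece;
}

function isGoalState(puzzle) {
  const d = puzzle.dimension;
  for (let i = 0; i < d; i++)
    for (let j = 0; j < d; j++) {
      const piece = puzzle.board[i][j];
      if (piece === 0) continue;
      if (i !== Math.floor((piece - 1) / d) || j !== (piece - 1) % d)
        return false;
    }
  return true;
}

function visit(puzzle) {
  const children = [];
  for (const piece of getMove(puzzle)) {
    if (piece === puzzle.lastMove) continue;
    const child = getCopy(puzzle);
    movePiece(child, piece);
    children.push(child);
  }
  return children;
}

// BFS itself: a plain array simulating a FIFO queue.
function solveBFS(start) {
  const states = [start];
  while (states.length > 0) {
    const state = states.shift();              // dequeue the oldest state
    if (isGoalState(state)) return state.path;
    for (const child of visit(state)) states.push(child);
  }
  return null;                                 // no solution found
}
```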

We create an array called “states” to store the states waiting to be visited and put the current state in it. In a loop, we remove the first element (through the “shift” method; remember, we are simulating a queue) and check whether it is a goal state. If it is, we return the sequence of steps from the initial state to it (the “path” attribute); otherwise, we visit it and append its immediate children to the list of states.


PRO: It’s easy to implement.
CON: Waaaaaaay too slow.

## A*

As we saw previously, BFS can correctly find an optimal solution to our problem, i.e., a path from the starting state to the goal state with the minimum number of steps, but it has a huge drawback: it’s too slow! If you shuffle the game too much and try to run it, it may well freeze your browser.

So here’s A*, the #1 favorite algorithm of problem-solving agents, and, good for us, it’s pretty simple too!

It works similarly to BFS, but with some differences: instead of a plain queue, we use a priority queue (usually built on a min-heap), a data structure that, instead of returning the first element added, returns the element with the lowest value. And to each discovered state we assign a value, defined as:

f(n) = g(n) + h(n)

g(n) (called the real cost function) is the cost of going from the starting state to state n. Since we have already discovered the whole path to it, we can easily calculate that cost with precision (for our sliding puzzle, that cost can be represented by the path length, for example).

h(n) (called the heuristic function) is the estimated cost of going from state n to the goal state. But here’s the trick: we don’t know the path from state n to the goal state yet! It’s called a heuristic precisely because we can only estimate that cost.

For the priority-queue implementation, I’m going to use one that you can find here.

Let’s start by initializing the algorithm:

Now, in a loop, we are going to retrieve the item with the lowest value until the “states” variable is empty or a goal state is reached.

And finally, we are going to visit the retrieved state’s children, calculate their weights and insert them into the queue.
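Putting those three steps together, here is a sketch of the whole A* loop. Two assumptions keep the snippet self-contained: a naive array-scanning priority queue stands in for the linked implementation, and “g”, “h”, “isGoalState”, and “visit” are passed in as parameters:

```javascript
// Naive priority queue: O(n) extraction by scanning for the smallest
// value (a stand-in for a real min-heap implementation).
class PriorityQueue {
  constructor() { this.items = []; }
  get size() { return this.items.length; }
  push(value, item) { this.items.push({ value, item }); }
  pop() {
    let best = 0;
    for (let k = 1; k < this.items.length; k++)
      if (this.items[k].value < this.items[best].value) best = k;
    return this.items.splice(best, 1)[0].item;
  }
}

// A*: expansion order driven by f(n) = g(n) + h(n).
function solveAStar(start, g, h, isGoalState, visit) {
  const states = new PriorityQueue();
  states.push(g(start) + h(start), start);
  while (states.size > 0) {
    const state = states.pop();                // lowest f(n) first
    if (isGoalState(state)) return state.path;
    for (const child of visit(state))
      states.push(g(child) + h(child), child); // weigh and enqueue children
  }
  return null;
}
```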

Good, good. Now let’s implement the “g” and “h” functions. For the “g” function, I’ll simply count the path length (what else could the real cost be?):
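A sketch of that:

```javascript
// g(n): the real cost so far is just how many moves we have made.
function g(state) {
  return state.path.length;
}
```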

The heuristic function is the tricky part. We could think of many options. It’s important that the heuristic be admissible, i.e., it must never overestimate the real cost of reaching the goal state. The closer the heuristic’s estimate is to the real cost, the better.

### Heuristic #1: Misplaced tiles

This function simply counts the number of pieces (tiles) that are not in their final position. It is almost identical to the one we implemented to check whether a state is the goal:
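A sketch of it (the function name is an assumption):

```javascript
// Heuristic #1: how many pieces are away from their home position.
// The blank is not counted, so the heuristic stays admissible.
function misplacedTiles(puzzle) {
  const d = puzzle.dimension;
  let count = 0;
  for (let i = 0; i < d; i++)
    for (let j = 0; j < d; j++) {
      const piece = puzzle.board[i][j];
      if (piece === 0) continue;
      if (i !== Math.floor((piece - 1) / d) || j !== (piece - 1) % d)
        count++;
    }
  return count;
}
```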


### Heuristic #2: Manhattan distance

Instead of just counting the number of misplaced tiles, this heuristic calculates the Manhattan distance (L1 distance) between each tile’s current position and its final position, and sums them. The Manhattan distance can be calculated as:

d(x1, y1, x2, y2) = |x1 - x2| + |y1 - y2|
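A sketch of the heuristic built on that formula (the function name is an assumption):

```javascript
// Heuristic #2: sum of Manhattan (L1) distances from each piece to its
// home position; the blank is skipped, keeping the heuristic admissible.
function manhattanDistance(puzzle) {
  const d = puzzle.dimension;
  let total = 0;
  for (let i = 0; i < d; i++)
    for (let j = 0; j < d; j++) {
      const piece = puzzle.board[i][j];
      if (piece === 0) continue;
      total += Math.abs(i - Math.floor((piece - 1) / d))
             + Math.abs(j - (piece - 1) % d);
    }
  return total;
}
```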


This heuristic is better than the previous one: every misplaced tile contributes at least 1 to the Manhattan distance, so it always yields a value at least as high, and hence closer to the real cost, while remaining admissible.

## Full code

You can get the full code here:

And since this page itself uses this code for its demonstrations, you can also get it by viewing the page’s source code.

## Conclusion

Well, that was quite an interesting tutorial. We discussed the space of states, the goal state, graph search algorithms, A*, and admissible heuristics. I hope you enjoyed reading this tutorial as much as I enjoyed writing it. See ya in the next tutorial! :D