Category Archives: Python

Reinforcement Learning part 2: SARSA vs Q-learning

In my previous post about reinforcement learning I talked about Q-learning, and how that works in the context of a cat vs mouse game. I mentioned in this post that there are a number of other methods of reinforcement learning aside from Q-learning, and today I’ll talk about another one of them: SARSA. All the code used is from Terry Stewart’s RL code repository, and can be found both there and in a minimalist version on my own github: SARSA vs Qlearn cliff. To run the code, simply execute the cliff_Q or the cliff_S files.

SARSA stands for State-Action-Reward-State-Action. In SARSA, the agent starts in state 1, performs action 1, and gets a reward (reward 1). Now, it’s in state 2 and performs another action (action 2) and gets the reward from this state (reward 2) before it goes back and updates the value of action 1 performed in state 1. In contrast, in Q-learning the agent starts in state 1, performs action 1 and gets a reward (reward 1), and then looks and sees what the maximum possible reward for an action in state 2 is, and uses that to update the action value of performing action 1 in state 1. So the difference is in how the future reward is found. In Q-learning it’s the value of the highest-valued action available from state 2, and in SARSA it’s the value of the action that was actually taken.

This means that SARSA takes into account the control policy by which the agent is moving, and incorporates that into its update of action values, whereas Q-learning simply assumes that an optimal policy is being followed. This difference can be a little difficult to tease out conceptually at first, but with an example it will hopefully become clear.

Mouse vs cliff

Let’s look at a simple scenario: a mouse is trying to get to a piece of cheese. Additionally, there is a cliff in the map that must be avoided, or the mouse falls, gets a negative reward, and has to start back at the beginning. The simulation looks exactly like this:

[Figure: mouse vs cliff]
where the black is the edge of the map (walls), the red is the cliff area, the blue is the mouse and the green is the cheese. As mentioned and linked to above, the code for all of these examples can be found on my github (as a side note: when using the github code remember that you can press the page-up and page-down buttons to speed up and slow down the rate of simulation!)

Now, as we all remember, in the basic Q-learning control policy the action with the highest action value is chosen. However, there is also a chance that some random action will be chosen instead; this is the agent’s built-in exploration mechanism. This means that even if we see this scenario:

[Figure: mouse and cliff]
There is a chance that the mouse is going to say ‘yes I see the best move, but…the hell with it’ and jump over the edge! All in the name of exploration. This becomes a problem, because if the mouse were following an optimal control strategy, it would simply run right along the edge of the cliff all the way over to the cheese and grab it. Q-learning assumes that the mouse is following the optimal control strategy, so the action values will converge such that the best path is along the cliff. Here’s an animation of the result of running the Q-learning code for a long time:


The solution that the mouse ends up with is running along the edge of the cliff and occasionally jumping off and plummeting to its death.

However, if the agent’s actual control strategy is taken into account when learning, something very different happens. Here is the result of the mouse learning to find its way to the cheese using SARSA:


Why, that’s much better! The mouse has learned that from time to time it does really foolish things, so the best path is not to run along the edge of the cliff straight to the cheese but to get far away from the cliff and then work its way over safely. As you can see, even if a random action is chosen there is little chance of it resulting in death.

Learning action values with SARSA

So now we know how SARSA determines its updates to the action values. It’s a very minor difference between the SARSA and Q-learning implementations, but it causes a profound effect.

Here is the Q-learning learn method:

def learn(self, state1, action1, reward, state2):
    # future reward estimate: the best action value available from state2
    maxqnew = max([self.getQ(state2, a) for a in self.actions])
    self.learnQ(state1, action1,
                reward, reward + self.gamma*maxqnew)

And here is the SARSA learn method:

def learn(self, state1, action1, reward, state2, action2):
    # future reward estimate: the value of the action actually taken from state2
    qnext = self.getQ(state2, action2)
    self.learnQ(state1, action1,
                reward, reward + self.gamma * qnext)

As we can see, the SARSA method takes another parameter, action2, which is the action that was taken by the agent from the second state. This allows the agent to explicitly find the future reward value, qnext, that followed, rather than assuming that the optimal action will be taken and that the largest reward, maxqnew, resulted.
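Both learn methods lean on getQ and learnQ helpers that aren’t shown here. For context, here’s a minimal sketch of what they might look like in a dictionary-based implementation (this is my sketch, not the repository code verbatim, so check there for the real thing):

def getQ(self, state, action):
    # state-action pairs we haven't seen yet default to a value of 0
    return self.q.get((state, action), 0.0)

def learnQ(self, state, action, reward, value):
    # move the stored Q value toward the target value by the learning rate
    oldv = self.q.get((state, action), None)
    if oldv is None:
        self.q[(state, action)] = reward
    else:
        self.q[(state, action)] = oldv + self.alpha * (value - oldv)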

Written out, the Q-learning update target is Q(s, a) = reward(s) + gamma * max(Q(s')), and the SARSA update target is Q(s, a) = reward(s) + gamma * Q(s', a'), where gamma is the discount factor on future reward, matching the code above. This is how SARSA is able to take into account the control policy of the agent during learning. It means that information needs to be stored longer before the action values can be updated, but it also means that our mouse is going to jump off a cliff much less frequently, which we can probably all agree is a good thing.


Wrapping MapleSim C code for Python

Previously, I worked through a simple inverse kinematics example with a three-link arm. The arm, however, was uninteresting, and in fact had no dynamics of its own. I’ve also previously (but not in a blog post) built up a number of arm models using MapleSim that were interesting, and ported them into C. From there I used them in Matlab / Simulink, but I’m on a Python kick now and tired of Matlab, so I’ve been looking at how to wrap this C code generated by MapleSim using Cython such that I can interact with it from Python.

So the code that simulates my models is exported to C, which is great in that it’s super fast, but bad in that it’s a pain in the ass to work with. I don’t want to have to write my controllers and learning algorithms all in C; it would be much nicer to do it all in Python. I looked a little at interfacing Python directly with the C library (as seen in my last post), but since I already know how to make Cython play nice with C++ I figured I’d shoot for that instead. What follows now is a hacky guide to getting your MapleSim model simulations to work with Python. First we’ll get the C code to just run, then we’ll port it to Python, then we’ll get it going graphically.

Get the MapleSim code to run in C++ with classes and input

Step 1: Rename your file from ‘whatever.c’ to ‘whatever.cpp’; in the code here I’m renaming ‘c2LinkArm.c’ to ‘c2LinkArm.cpp’. I know, I know, it’s painful but hopefully that’s the worst of it.

Step 2: Add #include "mplshlib.h" to the beginning of the file. The code is meant to be run in a MapleSim environment (or something), so it requires one of the Maple library files. We’ll just add this in here and everything should be happy.

Step 3: For whatever reason, there is a function in here that sets all of the system input to 0. Why, you ask? Good question. The function in question is static void inpfn(double T, double *U), and you can see that all it does is set every element of U to 0. You can either comment out the body of this function, or comment out the call to it: several lines down, the first line of the SolverUpdate function (after a long i variable declaration) is the call to inpfn. Comment that out and everything’s fine. Hurray! We can give input to the system now!
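For reference, here’s roughly what that edit looks like. The exact body of inpfn varies from model to model, so treat this as a sketch rather than a verbatim copy:

/* MapleSim-generated input function: zeroes out the control signal U.
   Commenting out the body (or the call to it in SolverUpdate) lets
   the input we pass in actually reach the model. */
static void inpfn(double T, double *U)
{
    /* U[0] = 0.0; */
    /* U[1] = 0.0; */
}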

And that’s all the inline code editing we have to do. We’re not done with this file though, we still need to append some things so that we can use it easily.

Step 4: The reason that I want to use C++ is because with C++ we can make a class, which we’ll call the Sim class, that can store all of our simulation details, and hold some wrapper functions to hide the full blown interface to the simulator functions. Let’s make that class now. Go down to the bottom of the file and paste in the following code:
c2LinkArm.cpp

/*  A class to contain all the information that needs to
    be passed around between these functions, and can
    encapsulate it and hide it from the Python interface.

    Written by Travis DeWolf (May, 2013)
*/
class Sim {
    /* Very simple class, just stores the variables we
    need for simulation, and has 2 functions. Reset
    resets the state of the simulation, and step steps it
    forward. Tautology ftw!*/

    double* params;
    double dt, t0;
    double u0[NINP], other_out[NOUT+1], y[NOUT];
    double w[1 + 2 * NEQ + NPAR + NDFA + NEVT];

    SolverStruct S;

    public:
        Sim(double dt_val, double* params_pointer);
        void reset(double* out, double* ic);
        void step(double* out, double* u);
};

Sim::Sim(double dt_val, double* params_pointer)
{
    t0 = 0.0; // set up start time
    dt = dt_val; // set time step
    for (int i = 0; i < NINP; i++)
        u0[i] = 0.0; // initial control signal

    params = params_pointer; // set up parameters reference

    /* Setup */
    S.w = w;
    S.err = 0;
}

void Sim::reset(double* out, double* ic)
{
    SolverSetup(t0, ic, u0, params, y, dt, &S);

    /* Output */
    out[0] = t0;
    for (int j = 0; j < NOUT; j++)
        out[j + 1] = y[j];
}

void Sim::step(double* out, double* u)
/* u: control signal */
{
    for (int k = 0; k < NOUT; k++)
        out[k] = *dsn_undef; // clear values to nan

    /* Integration loop */
    /* Take a step with states */
    EulerStep(u, &S);

    if (S.err <= 0)
    {
        /* Output */
        SolverOutputs(y, &S);

        out[0] = S.w[0];
        for(long j = 0; j < NOUT; j++)
            out[j + 1] = y[j];
    }
}

int main (void)
{
    FILE *fd;
    long i, j;

    double dt = 0.00001;

    int time_steps = 1000000;
    double out[NOUT + 1]; // holds [time, output values]
    double u[NINP];
    for (int k = 0; k < NINP; k++) u[k] = .1;

    fd = fopen("output.dat", "w");

    Sim sim = Sim(dt, NULL);
    sim.reset(out, NULL); // ic = NULL, use default start state

    for (i = 0; i < time_steps; i++)
    {
        sim.step(out, u);
        fprintf(fd, "%lf ", out[0]);
        for (j = 0; j < NOUT; j++)
            fprintf(fd, "%lf ", out[j + 1]);
        fprintf(fd, "\n");
    }

    fclose(fd);

    return 0;
}

So, this is based on the ParamDriverC() function from the MapleSim-generated code. I just separated out the initialization and the simulation into two functions (reset() and step(), respectively), so that it’s possible to explicitly control the simulation stepping through time, and to change the input control signal at every time step. (Note that in main() the output array is declared as a local array of size NOUT + 1 so that reset() and step() have real storage to write into.)

NOTE: The dt here must be <= 1e-5, or the constraint solver in the code throws an error. Don’t fear though, this simulator is still crazy fast.

OK. This code then can be compiled (once you have the library files in the same folder) and run with:

g++ c2LinkArm.cpp -o sim
./sim

You may have noticed a bunch of warning messages scrolling by; I’m not about to touch those, and they don’t affect the simulation, so as long as nothing saying ‘error’ pops up let’s just leave well enough alone.

Once you’ve run the executable you should now have a file called ‘output.dat’ filled with simulation data. The first column is time, and the rest are the output variables from your MapleSim model.
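If you want a quick sanity check of the contents, a few lines of Python will plot the outputs over time (assuming you have numpy and matplotlib installed):

import numpy as np
import matplotlib.pyplot as plt

data = np.loadtxt('output.dat')    # column 0 is time
plt.plot(data[:, 0], data[:, 1:])  # plot each output variable against time
plt.xlabel('time (s)')
plt.show()

Alright, we’ve got our code up and running! Let’s go on and make a Cython wrapper for it.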

Making a Cython wrapper for the MapleSim code
The first thing that we need to do is remove the main function we just added from the simulation code above. We’re done with running the C code on its own, so we don’t need it anymore. Get rid of it already.

Now, this is going to be very similar to the code from the previous Cython posts, but in this one there’s no getting around it: we need to pass arrays back and forth. It turns out this isn’t very complicated, but there’s a template you have to follow, and it can be difficult if you don’t know what that is.

I’ll post the code and then we can go through the tricky parts. As always, we create a ‘pyx’ file for our Cython wrapper code, I called this one ‘py2LinkArm.pyx’.
py2LinkArm.pyx

import numpy as np
cimport numpy as np

cdef extern from "c2LinkArm.cpp": 
    cdef cppclass Sim:
        Sim(double dt, double* params)
        void reset(double* out, double* ic)
        void step(double* out, double* u)

cdef class pySim:
    cdef Sim* thisptr
    def __cinit__(self, double dt = .00001, 
                        np.ndarray[double, mode="c"] params=None):
        """
        param float dt: simulation timestep, must be <= 1e-5
        param array params: MapleSim model internal parameters
        """
        # 'is not None' avoids the ambiguous truth value of an array
        if params is not None: self.thisptr = new Sim(dt, &params[0])
        else: self.thisptr = new Sim(dt, NULL)

    def __dealloc__(self):
        del self.thisptr

    def reset(self, np.ndarray[double, mode="c"] out, 
                    np.ndarray[double, mode="c"] ic=None):
        """
        Reset the state of the simulation.
        param np.ndarray out: where to store the system output
            NOTE: output is of form [time, output]
        param np.ndarray ic: the initial conditions of the system
        """
        # 'is not None' again, since an array's truth value is ambiguous
        if ic is not None: self.thisptr.reset(&out[0], &ic[0])
        else: self.thisptr.reset(&out[0], NULL)

    def step(self, np.ndarray[double, mode="c"] out, 
                   np.ndarray[double, mode="c"] u):
        """
        Step the simulation forward one timestep.
        param np.ndarray out: where to store the system output
            NOTE: output is of form [time, output]
        param np.ndarray u: the control signal
        """
        self.thisptr.step(&out[0], &u[0])

This is pretty familiar: We make a reference to a class defined in C++, and then we wrap it in a python class and define functions that we can actually call from Python.

The main difference here is that we have to handle passing in numpy arrays, and having them be in the appropriate double* form that our C++ code is expecting to see. There are three parts involved in this. First, note at the top that we have an import numpy as np followed by a cimport numpy as np; this gives us access to the functions that we’ll need. Second, each of the arrays accepted as a parameter gets a special type declaration: np.ndarray[double, mode="c"] var. And finally, we pass the arrays on by taking the address of their first element: &var[0]. If we were passing along 2D arrays we would use &var[0, 0].
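For reference, here’s a minimal sketch of that same pattern with a 2D array (the library and function names here are made up for illustration; note the &var[0, 0] indexing that Cython expects for typed ndarray buffers):

import numpy as np
cimport numpy as np

cdef extern from "some_lib.cpp":  # hypothetical C++ source
    void process(double* data, int rows, int cols)

def py_process(np.ndarray[double, ndim=2, mode="c"] data not None):
    # &data[0, 0] is the address of the first element of the
    # underlying C-contiguous buffer
    process(&data[0, 0], <int> data.shape[0], <int> data.shape[1])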

In the __cinit__() and reset() methods, I’m allowing the model parameters, params, and initial conditions, ic, arrays to be undefined, in which case NULL is passed to our C code and the default values we defined inside MapleSim are used. The reason for this is that these require some pretty intimate knowledge of the MapleSim model, and if a casual user wants to just use this business they shouldn’t have to know anything about the internal states.

With our wrapper code done, the only thing left is our setup file! This one is straight from the previous posts, with the exception that only the ‘pyx’ file is included in the sources declaration, because our ‘py2LinkArm.pyx’ file references the ‘c2LinkArm.cpp’ file directly instead of referencing a header file.
setup.py

from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
 
setup(
  cmdclass = {'build_ext': build_ext},
  ext_modules=[Extension("py2LinkArm", 
               sources=["py2LinkArm.pyx"],
               language="c++"),],
)

With our setup file we should now be good to go. Let’s compile! To do this I pull open IPython and punch in

run setup.py build_ext -i

And it will complain and throw some warnings, but it should compile just fine. And now we have access to the pySim class!
To see it do some things, type in

import numpy as np
import py2LinkArm

sim = py2LinkArm.pySim()

u = np.ones(2) # input array
out = np.zeros(3) # output array

sim.reset(out)
print out # initial values

for i in range(10):
    sim.step(out, u)
    print out

And we can see it moving forward! If you set u = np.ones(2) * .1, which is what it was when we generated the ‘output.dat’ file using the C code alone, and run it forward, you’ll see that we get the same results for both. Excellent.
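If you’d rather check that match programmatically than eyeball it, something along these lines works (assuming the ‘output.dat’ from the C run is in the current folder, with one row per time step as written by the main() above):

import numpy as np
import py2LinkArm

sim = py2LinkArm.pySim()
u = np.ones(2) * .1
out = np.zeros(3)
sim.reset(out)

recorded = np.loadtxt('output.dat')
for i in range(10):
    sim.step(out, u)
    # row i of the file holds [time, outputs] after step i+1
    assert np.allclose(out, recorded[i], atol=1e-4)

OK, now let’s display it!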

Running and plotting the MapleSim arm model
In a previous post I used Pyglet to plot out a 3-link arm (a very poor, lines-only plot, but sufficient), and I’m going to use it again here (for another very poor, lines-only plot). The file is called ‘run.py’, and is less than 80 lines. I’ll post it and then walk through it below:
run.py

import numpy as np
import pyglet
import time

import py2LinkArm

def run(): 
    """Runs and plots a 2 link arm simulation generated by MapleSim."""

    # set up input control signal to simulation
    u = np.ones(2) * .1 
    # set up array to receive output from simulation
    out = np.zeros(3) 

    # create MapleSim sim class 
    sim = py2LinkArm.pySim() 
    # set up initial conditions of the system
    sim.reset(out) 

    ###### Plotting things

    # segment lengths = (.31, .27)m (defined in MapleSim)
    L = np.array([31, 27]) * 3

    # make our window for drawin'
    window = pyglet.window.Window()

    # set up some labels to display time 
    label_time = pyglet.text.Label('time', font_name='Times New Roman', 
        font_size=36, x=window.width//2, y=window.height - 50,
        anchor_x='center', anchor_y='center')
    # and joint angles
    label_values = pyglet.text.Label('values', font_name='Times New Roman', 
        font_size=36, x=window.width//2, y=window.height - 100,
        anchor_x='center', anchor_y='center')

    def update_arm(dt):
        """ Simulate the arm ahead a chunk of time, then find the 
        (x,y) coordinates of each joint, and update labels."""
   
        # simulate arm forward 15ms
        for i in range(1500):
            sim.step(out, u)

        # find new joint (x,y) coordinates, offset to center of screen-ish
        x = np.array([ 0, 
            L[0]*np.cos(out[1]),
            L[0]*np.cos(out[1]) + L[1]*np.cos(out[1]+out[2])]) + window.width/2
        y = np.array([ 0, 
            L[0]*np.sin(out[1]),
            L[0]*np.sin(out[1]) + L[1]*np.sin(out[1]+out[2])]) + 100

        # update line endpoint positions for drawing
        window.jps = np.array([x, y]).astype('int')
        label_time.text = 'time: %.3f'%out[0]
        label_values.text = '(q1, q2) = (%.2f, %.2f)'%(out[1], out[2])
 
    update_arm(0) # call once to set up

    @window.event
    def on_draw():
        window.clear()
        label_time.draw()
        label_values.draw()
        for i in range(2): 
            pyglet.graphics.draw(2, pyglet.gl.GL_LINES, ('v2i', 
                (window.jps[0][i], window.jps[1][i], 
                 window.jps[0][i+1], window.jps[1][i+1])))

    # set up update_arm function to be called super often
    pyglet.clock.schedule_interval(update_arm, 0.00001)
    pyglet.app.run()

run()

Alright. In the first part, we’re just doing the same as above when we imported our brand new module and set it up for simulation.
What’s different is that we’ve now pushed the actual simulation into a method called update_arm. The idea is that the drawing is significantly slower than the simulation, so we’ll simulate a bunch, then update the plots once, then repeat, and hopefully get something that simulates at a decent speed. Once we have the joint angles of the arm stored safely in our out variable, we calculate the (x,y) locations of the elbow and wrist and update the system. Then the last thing this method does is update the text of the labels in the window.

In the on_draw method we’ve overwritten, we clear the window, then draw our labels and the lines corresponding to upper and lower arm segments.

Then the last thing is to schedule our update_arm method to be called periodically, in this case extremely often.

When you run this file, you should see something that looks like:
[Figure: the 2-link arm simulation window]
You’ll have to trust me it looks better when it’s moving, and when it’s moving it will look better when the input isn’t constant.

But there you go! Now you are able to take your very own MapleSim model’s optimized C code, append some C++ code to make a Sim class, wrap it in Cython, and access it as a Python module! Why would anyone ever want to go through this trouble? Well, finding good simulations with appropriate dynamics can be a huge pain in the ass, and sometimes you just want something stand-alone. Also, now that this is done, this exact same code should be good for any MapleSim model, so hopefully the process is relatively straightforward. At least that’s my dream.

The code that I’ve used in this post can be found at my github: 2LinkArm.

Addendum

This was all written for compiling on Ubuntu; my labmate and pal Xuan Choo also went through getting this all to work on Windows. Here are his notes!

Step 1: Get and install cython

Step 2: Follow the instructions here. Essentially:

  • 2a: Download and install the windows SDK for your python version
  • 2b: Start the windows SDK command shell (start menu->programs->windows sdk->CMD shell)
  • 2c: Type these things into the cmd shell:
    set DISTUTILS_USE_SDK=1
    setenv /x64 /release
    
  • 2d: in the same cmd shell, navigate to the folder with the pyx file you want to compile, and run:
    python setup.py build_ext -i
    
  • 2d*: If it complains about undefined numpy references, add include_dirs=[numpy.get_include()] to the Extension class call (and also import numpy in setup.py):
    from distutils.core import setup
    from distutils.extension import Extension
    from Cython.Distutils import build_ext
    import numpy
    
    setup(
      cmdclass = {'build_ext': build_ext},
      ext_modules=[Extension("py3LinkArm",
                   sources=["py3LinkArm.pyx"],
                   language="c++",
                   include_dirs=[numpy.get_include()]),],
    )
    

Done!


Cython Journey Part 3: Wrapping C code and passing arrays

Basic C interfacing example

OK, so this code is taken straight from an example here; I’m just going to explicitly work through all the steps so that when I try to do this again later I don’t have to look everything up again.

Let’s say we have some C code that we want to work with from Python, in a file called ‘square.c’:

void square(double* array, int n) {
    int i;
    for (i=0; i<n; ++i) {
        array[i] = array[i] * array[i];
    }
}

The first thing we’re going to need to do is compile this to a shared object file, so we punch the following into our terminal:

gcc -c -fPIC square.c # compile to a position-independent object file
gcc square.o -shared -o square.so # link into a shared object file

And so now we have our square.so file. Super.

Here is the code that we can then use to interact with this shared object file from Python:

import numpy as np
import ctypes

square = ctypes.cdll.LoadLibrary("./square.so")

n = 5
a = np.arange(n, dtype=np.double)

aptr = a.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
square.square(aptr, n)

print a

OK, so there are a couple of lines with a lot of jargon, but aside from that this looks pretty straightforward. The first noticeable thing is that we’re importing the ctypes library; note that ctypes ships with Python’s standard library, so there’s nothing extra to install. Right after importing ctypes we create a handle into our shared object file, using the ctypes.cdll.LoadLibrary() command.

Then it’s some standard Python code to create an array, followed by a funky command that gives us a handle to the array we just created as a C pointer, which we can pass in to our square() function. You can see a list of all the different ctypes data types here.

Once we have that handle to our array, we simply pass it through to the square.square() function, and let it go and do whatever it’s going to do with it. Then we print out our array a, and to our great joy, all of the values have been squared.
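One optional extra worth knowing about: ctypes lets you declare the function’s signature up front, so that calls with the wrong argument types fail loudly instead of silently corrupting memory. For our square() function that would look like:

import ctypes

square = ctypes.cdll.LoadLibrary("./square.so")
# declare the signature: void square(double* array, int n)
square.square.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_int]
square.square.restype = None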

So that’s that. Rather than returning anything like a double* from the C library, I would recommend just passing in the array as a parameter and modifying it in place. This is nice and straightforward, and returning pointers can lead to a lot of memory allocation and ownership headaches.

The code for this example can be found at my github, here: Square.


Inverse kinematics of 3-link arm with constrained minimization in Python

Inverse kinematics is a common topic in robotics control; one that most anyone working with a robotic arm needs to address at some point. The use case I’ll be looking at here is the situation where we want to specify an (x,y) coordinate for the end-effector, and solve for the appropriate joint angle configuration that best achieves this. It’s surprisingly straightforward to handle with the right approach, so it’s a nice little exercise in Python optimization (and we’ll even work in some graphical display!) And per usual, the code is all up on my github.

Most of the time the servos that anyone is working with are position controlled, meaning that you provide them with a set of desired angles for them to move to, rather than force controlled, where you provide them with a torque signal that directly moves them. In this case, inverse kinematics is all you need. You simply work out the set of joint angles you want the servos to move to and send those out directly. In biological systems this clearly isn’t the end of the work, because even if you know where you want to go you still have to provide a force (muscle activation) signal that will move your system there. But! as mentioned in the motor babbling paper review post I wrote a while back, having the target specified in the most basic space (i.e. joint angles instead of (x,y) coordinates for a servo based arm, or muscle lengths instead of joint angles for a muscle based arm) can make calculating, or learning, the appropriate force signal much easier; so something that can do the inverse kinematics calculation is worth having.

Inverse kinematics

Now then, how do we go about finding the joint angle configuration that best achieves a desired (x,y) coordinate? Constrained minimization! Our constraint is that the tip of the arm must be at the specified (x,y) position, and we need to specify some cost function for the system to minimize to choose among all the possible configurations that accomplish this. The cost function that we’re going to use here is a very intuitive one, that I first encountered in the paper ‘Learning operational space control’ by Jan Peters. What we’re going to do is specify a default, or resting, joint state configuration, and minimize our distance from that. Which makes a lot of sense.

I’m going to post the code for calculating the inverse kinematics of a 3-link arm, and then we’ll work through it.

import math
import numpy as np
import scipy.optimize  

class Arm3Link:
    
    def __init__(self, q=None, q0=None, L=None):
        """Set up the basic parameters of the arm.
        All lists are in order [shoulder, elbow, wrist].
        
        :param list q: the initial joint angles of the arm
        :param list q0: the default (resting state) joint configuration
        :param list L: the arm segment lengths
        """
        # initial joint angles
        if q is None: q = [.3, .3, 0]
        self.q = q
        # some default arm positions
        if q0 is None: q0 = np.array([math.pi/4, math.pi/4, math.pi/4]) 
        self.q0 = q0
        # arm segment lengths
        if L is None: L = np.array([1, 1, 1]) 
        self.L = L
        
        self.max_angles = [math.pi, math.pi, math.pi/4]
        self.min_angles = [0, 0, -math.pi/4]

    def get_xy(self, q=None):
        if q is None: q = self.q

        x = self.L[0]*np.cos(q[0]) + \
            self.L[1]*np.cos(q[0]+q[1]) + \
            self.L[2]*np.cos(np.sum(q))

        y = self.L[0]*np.sin(q[0]) + \
            self.L[1]*np.sin(q[0]+q[1]) + \
            self.L[2]*np.sin(np.sum(q))

        return [x, y]

    def inv_kin(self, xy):

        def distance_to_default(q, *args): 
            # weights found with trial and error, get some wrist bend, but not much
            weight = [1, 1, 1.3] 
            return np.sqrt(np.sum([(qi - q0i)**2 * wi
                for qi,q0i,wi in zip(q, self.q0, weight)]))

        def x_constraint(q, xy):
            x = ( self.L[0]*np.cos(q[0]) + self.L[1]*np.cos(q[0]+q[1]) + 
                self.L[2]*np.cos(np.sum(q)) ) - xy[0]
            return x

        def y_constraint(q, xy): 
            y = ( self.L[0]*np.sin(q[0]) + self.L[1]*np.sin(q[0]+q[1]) + 
                self.L[2]*np.sin(np.sum(q)) ) - xy[1]
            return y
        
        return scipy.optimize.fmin_slsqp( func=distance_to_default, 
            x0=self.q, eqcons=[x_constraint, y_constraint], 
            args=[xy], iprint=0) # iprint=0 suppresses output


I’ve taken out most of the comments for compactness, but there are plenty in the code on github, don’t you worry. Alright, let’s take a look!

First, we go through and set up the parameters of our arm. If nothing was passed in during construction, then we make an arm with initial joint angles of [.3, .3, 0], a resting configuration with shoulder, elbow, and wrist angles all equal to π/4, segment lengths all set to 1, and the minimum and maximum joint angles shown.

Next, there’s the get_xy function, which just uses some simple trig to calculate the current (x,y) position of the end-effector. This is going to be a handy thing to have when we want to see how well the joint configurations we calculate achieve our desired hand position.

And then after this, we have the actual inverse kinematics function. inv_kin takes in a desired (x,y) position, and returns to us a list of the joint angles that will best achieve this. We define three methods inside inv_kin that define our constrained optimization problem: distance_to_default, which is the function we are minimizing, and x_constraint and y_constraint, which are the constraints that must be met for a solution to be valid. These are all pretty straightforward functions; distance_to_default calculates the weighted Euclidean distance in joint space to the resting state, and the constraint functions calculate the difference between the actual and desired end-effector positions for a given set of joint angles.

There is one point of interest in the distance_to_default method, and that is the use of the weights therein. What is the point of applying a gain to these values, you ask? Why, simply that it lets us weight the relative importance of each of these joints remaining close to their default configuration! If you think about moving your arm on a horizontal plane, the wrist actually bends very little. To mimic this we can set the weight applied to the distance of the wrist from its resting state higher than those of the other joints, so that it becomes a higher priority to keep the wrist near its resting state. I suggest playing around with the weights once the graphical display (below) is built up; it’s fun to see the effects different weightings have on the joint configurations.
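To get a feel for what the weighting does before firing up the graphics, here’s a tiny standalone check (the function is just the weighted distance from above with the weights exposed, and the two configurations are made up for illustration):

import numpy as np

def weighted_distance(q, q0, weight):
    # same form as distance_to_default above, weights exposed
    return np.sqrt(np.sum([(qi - q0i)**2 * wi
                           for qi, q0i, wi in zip(q, q0, weight)]))

q0 = np.ones(3) * np.pi / 4   # resting configuration
qa = [np.pi/4, np.pi/4, 0.]   # deviates pi/4 at the wrist
qb = [0., np.pi/4, np.pi/4]   # deviates pi/4 at the shoulder

print weighted_distance(qa, q0, [1, 1, 1.3])  # wrist move costs more
print weighted_distance(qb, q0, [1, 1, 1.3])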

With these three functions defined, we can now call up the scipy.optimize.fmin_slsqp function, which uses sequential least squares programming (SLSQP) to arrive at a solution.

Really, that’s all there is to it!

To test this now, we have the following method:

def test():
    ############Test it!##################

    arm = Arm3Link()

    # set of desired (x,y) hand positions
    x = np.arange(-.75, .75, .05)
    y = np.arange(0, .75, .05)

    # threshold for printing out information, to find trouble spots
    thresh = .025

    count = 0
    total_error = 0
    # test it across the range of specified x and y values
    for xi in range(len(x)):
        for yi in range(len(y)):
            # test the inv_kin function on a range of different targets
            xy = [x[xi], y[yi]]
            # run the inv_kin function, get the optimal joint angles
            q = arm.inv_kin(xy=xy)
            # find the (x,y) position of the hand given these angles
            actual_xy = arm.get_xy(q)
            # calculate the root squared error
            error = np.sqrt((np.array(xy) - np.array(actual_xy))**2)
            # total the error 
            total_error += error
            
            # if the error was high, print out more information
            if np.sum(error) > thresh:
                print '-------------------------'
                print 'Initial joint angles', arm.q 
                print 'Final joint angles: ', q
                print 'Desired hand position: ', xy
                print 'Actual hand position: ', actual_xy
                print 'Error: ', error
                print '-------------------------'
            
            count += 1

    print '\n---------Results---------'
    print 'Total number of trials: ', count
    print 'Total error: ', total_error
    print '-------------------------'

This simply works through a set of target (x,y) points, and calculates the total error across all of them. If any particular point gives an error above the threshold level, the information for that point is printed out so we can take a closer look if so desired. Fortunately, the threshold error isn’t even approached in this test, and for output we get:

---------Results---------
Total number of trials:  450
Total error:  [  3.33831421e-05   2.89667496e-05]
-------------------------

Not bad!

Visualization

So that’s all great and dandy, but it’s always nice to be able to really visualize these things, so I wrote another method that uses Pyglet to help visualize. Pyglet is a really easy-to-use graphics library, and I was very happy with it here. I ended up writing a method that pops up a window with a 3-link arm drawn in it; the arm uses the inverse kinematics function of the Arm3Link class written above to calculate the joint angles needed to follow the mouse. Once the joint angles are calculated, their (x,y) locations are also calculated, and the arm is drawn. Pleasingly, the inv_kin above is fast enough to work in real time, so it’s a kind of fun toy example. Again, I’ll show the code and then we’ll work through it below.

import numpy as np
import pyglet
import time

import Arm

def plot(): 
    """A function for plotting an arm, and having it calculate the 
    inverse kinematics such that given the mouse (x, y) position it 
    finds the appropriate joint angles to reach that point."""

    # create an instance of the arm
    arm = Arm.Arm3Link(L = np.array([300,200,100]))

    # make our window for drawin'
    window = pyglet.window.Window()

    label = pyglet.text.Label('Mouse (x,y)', font_name='Times New Roman', 
        font_size=36, x=window.width//2, y=window.height//2,
        anchor_x='center', anchor_y='center')

    def get_joint_positions():
        """This method finds the (x,y) coordinates of each joint"""

        x = np.array([ 0, 
            arm.L[0]*np.cos(arm.q[0]),
            arm.L[0]*np.cos(arm.q[0]) + arm.L[1]*np.cos(arm.q[0]+arm.q[1]),
            arm.L[0]*np.cos(arm.q[0]) + arm.L[1]*np.cos(arm.q[0]+arm.q[1]) + 
                arm.L[2]*np.cos(np.sum(arm.q)) ]) + window.width/2

        y = np.array([ 0, 
            arm.L[0]*np.sin(arm.q[0]),
            arm.L[0]*np.sin(arm.q[0]) + arm.L[1]*np.sin(arm.q[0]+arm.q[1]),
            arm.L[0]*np.sin(arm.q[0]) + arm.L[1]*np.sin(arm.q[0]+arm.q[1]) + 
                arm.L[2]*np.sin(np.sum(arm.q)) ])

        return np.array([x, y]).astype('int')
    
    window.jps = get_joint_positions()

    @window.event
    def on_draw():
        window.clear()
        label.draw()
        for i in range(3): 
            pyglet.graphics.draw(2, pyglet.gl.GL_LINES, ('v2i', 
                (window.jps[0][i], window.jps[1][i], 
                 window.jps[0][i+1], window.jps[1][i+1])))

    @window.event
    def on_mouse_motion(x, y, dx, dy):
        # call the inverse kinematics function of the arm
        # to find the joint angles optimal for pointing at 
        # this position of the mouse 
        label.text = '(x,y) = (%.3f, %.3f)'%(x,y)
        arm.q = arm.inv_kin([x - window.width/2, y]) # get new arm angles
        window.jps = get_joint_positions() # get new joint (x,y) positions

    pyglet.app.run()

plot()

There are really only a few things that we’re doing in this method: we create an arm with somewhat normal segment lengths and a window for plotting; we have a function for calculating the (x,y) coordinates of each of the joints for plotting; we have a function that tracks the position of the mouse and updates our arm joint angles by calling the arm.inv_kin function we wrote above; and lastly we plot it! Easy. The functions aren’t quite in that order, and that’s because I chose alphabetical ordering over conceptual. Here’s a picture:

[Figure: armplot]

I think the only kind of weird thing going on in here is that the variables I use in both this method and in the overwritten window methods are defined as belonging to window. Aside from that, it’s straightforward trig to calculate the Cartesian coordinates of each joint, and then there’s also a label that displays the mouse (x,y) position for the hell of it. Honestly, the label was in a Pyglet tutorial example and I just left it in.

It’s not going to win any awards for graphics by any means, but it’s simple and it works! And, assuming you have Pyglet installed (which is very easy to do), you should straight up be able to run it and be amazed. Or a little impressed. Or not, whatever. But in any case now you have your very own inverse kinematics example on a 3-link arm! Moving to any other number of joints is exactly the same, you just have to add the corresponding number of default joint configurations and everything should work out as is.

From here, the plan is to look at bringing in an actual force driven arm model to the mix, and then building up a system that can track the mouse, but with a control signal that specifies forces to apply to the arm, rather than positions. Which I think will be pretty neat. And in case you missed it at the top, here’s a link to all the code on my github.


Reinforcement learning part 1: Q-learning and exploration

We’ve been running a reading group on Reinforcement Learning (RL) in my lab the last couple of months, and recently we’ve been looking at a very entertaining simulation for testing RL strategies, ye’ old cat vs mouse paradigm. There are a number of different RL methods you can use / play with in that tutorial, but for this I’m only going to talk about Q-learning. The code implementation I’ll be using is all in Python, and the original code comes from one of our resident post-docs, Terry Stewart, and can be garnered from his online RL tutorial. The code I modify here is based off of Terry’s code and modified by Eric Hunsberger, another PhD student in my lab. I’ll talk more about how it’s been modified in another post.

Q-learning review
For those unfamiliar, the basic gist of Q-learning is that you have a representation of the environmental states s, and possible actions in those states a, and you learn the value of each of those actions in each of those states. Intuitively, this value, Q, is referred to as the state-action value. So in Q-learning you start by setting all your state-action values to 0 (this isn’t always the case, but in this simple implementation it will be), and you go around and explore the state-action space. After you try an action in a state, you evaluate the state that it has led to. If it has led to an undesirable outcome you reduce the Q value (or weight) of that action from that state so that other actions will have a greater value and be chosen instead the next time you’re in that state. Similarly, if you’re rewarded for taking a particular action, the weight of that action for that state is increased, so you’re more likely to choose it again the next time you’re in that state. Importantly, when you update Q, you’re updating it for the previous state-action combination. You can only update Q after you’ve seen what results.

Let’s look at an example in the cat vs mouse case, where you are the mouse. You were in the state where the cat was in front of you, and you chose to go forward into the cat. This caused you to get eaten, so now you reduce the weight of that action for that state, so that the next time the cat is in front of you you won’t choose to go forward; you might choose to go to the side or away from the cat instead (you are a mouse with respawning powers). Note that this doesn’t reduce the value of moving forward when there is no cat in front of you; the value for ‘move forward’ is only reduced in the situation that there’s a cat in front of you. In the opposite case, if there’s cheese in front of you and you choose ‘move forward’ you get rewarded. So now the next time you’re in the situation (state) that there’s cheese in front of you, the action ‘move forward’ is more likely to be chosen, because last time you chose it you were rewarded.

Now this system, as is, gives you no foresight further than one time step. Not incredibly useful and clearly too limited to be a real strategy employed in biological systems. Something that we can do to make this more useful is include a look-ahead value. The look-ahead works as follows. When we’re updating a given Q value for the state-action combination we just experienced, we do a search over all the Q values for the state that we ended up in. We find the maximum state-action value in this state, and incorporate that into our update of the Q value representing the state-action combination we just experienced.

For example, we’re a mouse again. Our state is that cheese is one step ahead, and we haven’t learned anything yet (blank value in the action box represents 0). So we randomly choose an action and we happen to choose ‘move forward’.
Now, in this state (we are on top of cheese) we are rewarded, and so we go back and update the Q value for the state ‘cheese is one step ahead’ and the action ‘move forward’ and increase the value of ‘move forward’ for that state.
Now let’s say the cheese is moved and we’re moving around again, now we’re in the state ‘cheese is two steps ahead’, and we make a move and end up in the state ‘cheese is one step ahead’.
Now when we are updating the Q value for the previous state-action combination we look at all the Q values for the state ‘cheese is one step ahead’. We see that one of these values is high (this being for the action ‘move forward’) and this value is incorporated in the update of the previous state-action combination.
Specifically, we’re updating the previous state-action value using the equation: Q(s, a) += alpha * (reward(s, a) + max(Q(s')) - Q(s, a)), where s is the previous state, a is the previous action, s' is the current state, and alpha is the learning rate (set to .5 here).

Intuitively, the change in the Q-value for performing action a in state s is the difference between the actual reward (reward(s,a) + max(Q(s'))) and the expected reward (Q(s,a)) multiplied by a learning rate, alpha. You can think of this as a kind of PD control, driving your system to the target, which is in this case the correct Q-value.

Here, we evaluate the update for moving ahead when the cheese is two steps ahead as the reward for moving into that state (0), plus the value of the best action from that state (moving onto the cheese, +50), minus the expected value of the old state-action pair (0), all multiplied by our learning rate (.5), giving +25.
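To make that arithmetic concrete, here’s the same update as a couple of lines of Python (a toy sketch; the names are mine):

alpha = .5  # learning rate

def update_q(q_sa, reward, max_q_next):
    # Q(s,a) += alpha * (reward(s,a) + max(Q(s')) - Q(s,a))
    return q_sa + alpha * (reward + max_q_next - q_sa)

print update_q(0., 0., 50.)  # the cheese-two-steps-ahead case: 25.0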

Exploration
In the most straightforward implementation of Q-learning, state-action values are stored in a look-up table. So we have a giant table, which is size N x M, where N is the number of different possible states, and M is the number of different possible actions. So then at decision time we simply go to that table, look up the corresponding action values for that state, and choose the maximum. In code:

def choose_action(self, state):
    q = [self.getQ(state, a) for a in self.actions]
    maxQ = max(q)
    action = self.actions[q.index(maxQ)]
    return action

Almost. There are a couple of additional things we need to add. First, we need to cover the case where there are several actions that all have the same value. So to do that, if there are several with the same value, randomly choose one of them.

def choose_action(self, state):
    q = [self.getQ(state, a) for a in self.actions]
    maxQ = max(q)
    count = q.count(maxQ)
    if count > 1:
        best = [i for i in range(len(self.actions)) if q[i] == maxQ]
        i = random.choice(best)
    else:
        i = q.index(maxQ)

    action = self.actions[i]
    return action

This helps us out of that situation, but now if we ever happen upon a decent option, we’ll always choose that one in the future, even if there is a way better option available. To overcome this, we’re going to need to introduce exploration. The standard way to get exploration is to introduce an additional term, epsilon. We then randomly generate a value, and if that value is less than epsilon, a random action is chosen, instead of following our normal tactic of choosing the max Q value.

def choose_action(self, state):
    if random.random() < self.epsilon: # exploration
        action = random.choice(self.actions)
    else:
        q = [self.getQ(state, a) for a in self.actions]
        maxQ = max(q)
        count = q.count(maxQ)
        if count > 1:
            best = [i for i in range(len(self.actions)) if q[i] == maxQ]
            i = random.choice(best)
        else:
            i = q.index(maxQ)
        action = self.actions[i]
    return action

The problem now is that we even after we’ve explored all the options and we know for sure the best option, we still sometimes choose a random action; the exploration doesn’t turn off. There are a number of ways to overcome this, most involving manually adjusting epsilon as the mouse learns, so that it explores less and less as time passes and it has presumably learned the best actions for the majority of situations. I’m not a big fan of this, so instead I’ve implemented exploration the following way: If the randomly generated value is less than epsilon, then randomly add values to the Q values for this state, scaled by the maximum Q value of this state. In this way, exploration is added, but you’re still using your learned Q values as a basis for choosing your action, rather than just randomly choosing an action completely at random.

def choose_action(self, state):
    q = [self.getQ(state, a) for a in self.actions]
    maxQ = max(q)

    if random.random() < self.epsilon:
        # exploration: jitter all the action values for this state,
        # scaled by the magnitude of the largest Q value for this
        # state, then re-find the maximum
        minQ = min(q)
        mag = max(abs(minQ), abs(maxQ))
        q = [q[i] + random.random() * mag - .5 * mag
             for i in range(len(self.actions))]
        maxQ = max(q)

    count = q.count(maxQ)
    if count > 1:
        best = [i for i in range(len(self.actions)) if q[i] == maxQ]
        i = random.choice(best)
    else:
        i = q.index(maxQ)

    action = self.actions[i]

    return action

Very pleasingly, you get results comparable to the case where you perform lots of learning and then set epsilon = 0 to turn off random movements. Let’s look at them!

Simulation results
So here, the simulation is set up such that there is a mouse, cheese, and a cat all inside a room. The mouse then learns over a number of trials to avoid the cat and get the cheese. Printing out the wins and losses after every 10,000 time steps, for the standard learning case where you reduce epsilon as you learn, the results are:

10000, e: 0.09, W: 44, L: 778
20000, e: 0.08, W: 39, L: 617
30000, e: 0.07, W: 47, L: 437
40000, e: 0.06, W: 33, L: 376
50000, e: 0.05, W: 35, L: 316
60000, e: 0.04, W: 36, L: 285
70000, e: 0.03, W: 33, L: 255
80000, e: 0.02, W: 31, L: 179
90000, e: 0.01, W: 35, L: 152
100000, e: 0.00, W: 44, L: 130
110000, e: 0.00, W: 28, L: 90
120000, e: 0.00, W: 17, L: 65
130000, e: 0.00, W: 40, L: 117
140000, e: 0.00, W: 56, L: 86
150000, e: 0.00, W: 37, L: 77

For comparison now, here are the results when epsilon is not reduced in the standard exploration case:

10000, e: 0.10, W: 53, L: 836
20000, e: 0.10, W: 69, L: 623
30000, e: 0.10, W: 33, L: 452
40000, e: 0.10, W: 32, L: 408
50000, e: 0.10, W: 57, L: 397
60000, e: 0.10, W: 41, L: 326
70000, e: 0.10, W: 40, L: 317
80000, e: 0.10, W: 47, L: 341
90000, e: 0.10, W: 47, L: 309
100000, e: 0.10, W: 54, L: 251
110000, e: 0.10, W: 35, L: 278
120000, e: 0.10, W: 61, L: 278
130000, e: 0.10, W: 45, L: 295
140000, e: 0.10, W: 67, L: 283
150000, e: 0.10, W: 50, L: 304

As you can see, the performance now converges to around 300 losses, instead of 100. Not great.
Now here are the results from the alternative implementation of exploration, where epsilon is held constant:

10000, e: 0.10, W: 65, L: 805
20000, e: 0.10, W: 36, L: 516
30000, e: 0.10, W: 50, L: 417
40000, e: 0.10, W: 38, L: 310
50000, e: 0.10, W: 39, L: 247
60000, e: 0.10, W: 31, L: 225
70000, e: 0.10, W: 23, L: 181
80000, e: 0.10, W: 34, L: 159
90000, e: 0.10, W: 36, L: 137
100000, e: 0.10, W: 35, L: 138
110000, e: 0.10, W: 42, L: 136
120000, e: 0.10, W: 24, L: 99
130000, e: 0.10, W: 52, L: 106
140000, e: 0.10, W: 22, L: 88
150000, e: 0.10, W: 29, L: 101

And again we get that nice convergence to 100, but this time without having to manually modify epsilon. Which is pretty baller.

Code
And that’s it! There is of course lots and lots of other facets of exploration to discuss, but this is just a brief discussion. If you’d like the code for the cat vs mouse and the alternative exploration you can access them at my github: Cat vs mouse – exploration.
To alternate between the different types of exploration, change the import statement at the top of the egoMouseLook.py to be either import qlearn for the standard exploration method, or import qlearn_mod_random as qlearn to test the alternative method. To have epsilon reduce in value as time goes, you can uncomment the lines 142-145.
