Example of using ADiMat

Consider a very simple function f which evaluates the polynomial with coefficients given by the vector c at x, that is, r = c(1) + c(2)*x + ... + c(end)*x^(length(c)-1), as shown in the following MATLAB code:

function r = f(x, c)
  r = 0;
  powerOfX = 1;                % x^0
  for i=1:length(c)
    r = r + c(i) * powerOfX;   % add the term c(i) * x^(i-1)
    powerOfX = powerOfX * x;   % advance to the next power of x
  end

This function can, for example, be used to evaluate a matrix polynomial, as shown in the following piece of code.

>> A = [1 3; 4 2];
>> c = [ 2 3 4 5];
>> f(A, c)

ans =

   302   332
   442   412
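
Note that, since * denotes the matrix product, f applied to a square matrix computes a matrix polynomial; the scalar initializations r = 0 and powerOfX = 1 cause the constant term c(1) to be added elementwise. The result above can be reproduced directly:

>> c(1) + c(2)*A + c(3)*A^2 + c(4)*A^3

ans =

   302   332
   442   412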

Now, assume that one is interested in the derivatives of the function f with respect to x and c; that is, the input variables x and c are selected as the independent variables. The corresponding ADiMat call, which transforms f in the forward mode of AD into the differentiated function g_f and evaluates the first-order derivative (the Jacobian matrix) from within MATLAB, is:

>> Jac = admDiffFor(@f, 1, A, c)

Jac =

   146    72    96    60     1     1    13    49
    96   170    80   116     1     4    12    76
    72    45   170    87     1     3     9    57
    60    87   116   199     1     2    16    68

Now, the variable Jac contains the derivatives of f w.r.t. x and c. Also, the file g_f.m has been generated.
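
The rows of Jac correspond to the entries of the output r and the columns to the entries of the inputs x and c, each in MATLAB's column-major (vectorized) order: columns 1 to 4 belong to x and columns 5 to 8 to c. For instance, the derivative of r with respect to c(2) is x itself, which is why the sixth column of Jac equals A(:):

>> [Jac(:, 6), A(:)]

ans =

     1     1
     4     4
     3     3
     2     2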

An alternative implementation of the forward mode is available through the driver function admDiffVFor, which transforms the code to another function d_f.m.
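
This driver is invoked in the same way as admDiffFor, for example:

>> Jac = admDiffVFor(@f, 1, A, c);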

ADiMat also features an implementation of the reverse mode of AD, which is available through the function admDiffRev:

>> Jac = admDiffRev(@f, 1, A, c);

For comparison with the derivatives computed with AD, and for the convenience of the user, ADiMat also provides functions to compute derivatives via the finite differences approximation and the complex variable method:

>> Jac = admDiffFD(@f, 1, A, c);
>> Jac = admDiffComplex(@f, 1, A, c);

The latter two methods do not require a transformation of the source code, but they work only under certain conditions. The complex variable method yields derivatives accurate to machine precision for real analytic functions, while the finite differences approximation can at best provide derivatives that are accurate to about half the machine precision, i.e. roughly eight significant digits in double precision.
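
To see this in practice, one can compare both approximations against an AD Jacobian; the expected magnitudes are indicated in the comments:

>> JacAD = admDiffFor(@f, 1, A, c);
>> JacFD = admDiffFD(@f, 1, A, c);
>> JacCV = admDiffComplex(@f, 1, A, c);
>> norm(JacFD - JacAD) / norm(JacAD)   % roughly sqrt(eps), about 1e-8
>> norm(JacCV - JacAD) / norm(JacAD)   % close to eps, i.e. machine precision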

You can tell ADiMat to use a so-called seed matrix S, which is implicitly multiplied with the Jacobian: in forward mode ADiMat computes J*S, and in reverse mode S*J. Hence the forward seed must have one row per input component (here numel(A) + numel(c) = 8) and the reverse seed one column per output component (here 4, since the result of f is 2x2). In this example we specify seed vectors of all ones, which yields the sum of the columns of the Jacobian in forward mode and the sum of its rows in reverse mode.

>> seedFor = ones(numel(A) + numel(c), 1);  % one row per input component
>> Jac = admDiffFor(@f, seedFor, A, c);
>> seedRev = ones(1, numel(A));             % one column per output component
>> Jac = admDiffRev(@f, seedRev, A, c);

Specifying a vector (or a “thin” matrix) as the seed matrix is more efficient than computing the full Jacobian and then multiplying by hand, but the result is the same!
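
This can be checked against the full Jacobian computed earlier:

>> JacFull = admDiffFor(@f, 1, A, c);
>> JacSeeded = admDiffFor(@f, seedFor, A, c);
>> norm(JacSeeded - JacFull * seedFor)   % zero up to roundoff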

ADiMat can also transparently use the compressed seeding techniques of AD to compute sparse Jacobians. This requires that the non-zero pattern of the Jacobian is known. Instead of a seed matrix, a coloring function must be given as the second argument, while the non-zero pattern is passed to ADiMat via an options structure:

>> opts = admOptions();
>> opts.jac_nzpattern = sparse(Jac ~= 0);
>> Jac = admDiffFor(@f, @cpr, A, c, opts);
>> Jac = admDiffRev(@f, @cpr, A, c, opts);

This compression technique also works with admDiffFD and admDiffComplex.

In this example we use the expression Jac ~= 0 to obtain the non-zero pattern of Jac. This can go wrong in some cases: an entry may be zero merely by coincidence at the current argument, yet become non-zero when the Jacobian is evaluated at another argument.
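
If the sparsity structure of f does not depend on the point of evaluation, a common heuristic to avoid such coincidental zeros is to extract the pattern from a Jacobian evaluated at a random argument, where exact zeros are very unlikely to occur by chance:

>> JacR = admDiffFor(@f, 1, rand(size(A)), rand(size(c)));
>> opts.jac_nzpattern = sparse(JacR ~= 0);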

For more information, please refer to the help of the above commands or take a look at the documentation of ADiMat.