I came up with a solution. Your specific case is a lucky one, because the term that depends on the independent variable $y$ multiplies the non-linear exponentials in $x$ linearly. So the problem remains linear and you can still apply the method I proposed in this other answer (with some minor changes).
First you have to store your data $z(x,y)$ in a (tall) column vector, stacking the samples for each $y$ head-to-tail. Assuming you have $nx$ samples in $x$ and $ny$ samples in $y$:
$$ z = \left[ \begin{array}{c} z(x, y_1) \\ z(x, y_2) \\ \vdots \\ z(x, y_{ny}) \end{array} \right] $$
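If it helps to see the stacking concretely, here is a small sketch (in plain NumPy rather than PyArma; the sizes and the `Zgrid` name are just illustrative, not part of the original code):

```python
import numpy as np

# Illustrative sizes only
nx, ny = 4, 3
x = np.linspace(0.1, 1.0, nx)
y = np.linspace(0.5, 1.5, ny)

# z(x, y) sampled on the grid, one column per y sample
Zgrid = 5 * np.exp(0.5 * x)[:, None] * y[None, :] + 4 * np.exp(-3 * x)[:, None]

# Stack the y-columns head-to-tail into one tall vector of length nx*ny:
# column-major ("F") order puts the z(x, y_1) block first, then z(x, y_2), ...
z = Zgrid.reshape(nx * ny, order="F")
```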
Now we need to build the right-hand side of the equation according to the proposed method. For all $ny$ samples in $y$, the exponential terms (the "lambdas") are the same, but the terms that multiply $x$ and the constant terms are not, so the matrix should have the form:
$$ Z = \left[ \begin{array}{ccccccccc} \int{z(x, y_1)} & \int^2{z(x, y_1)} & x & 1 & 0 & 0 & \dots & 0 & 0 \\ \int{z(x, y_2)} & \int^2{z(x, y_2)} & 0 & 0 & x & 1 & \dots & 0 & 0 \\ \vdots & \vdots & & & & & \ddots & & \\ \int{z(x, y_{ny})} & \int^2{z(x, y_{ny})} & 0 & 0 & 0 & 0 & \dots & x & 1 \end{array} \right] $$
So now we have:
$$ z = Z A $$
Solving for $A$ in the least-squares sense:
$$ A = (Z^T Z)^{-1}Z^T z $$
$$ \lambda = \operatorname{eig}\left( \left[ \begin{array}{cc} A(1) & A(2) \\ 1 & 0 \end{array} \right] \right) $$
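To see why the eigenvalues of that companion matrix are the lambdas: for $z = c_1 e^{\lambda_1 x} + c_2 e^{\lambda_2 x}$, integrating $z'' = (\lambda_1+\lambda_2)z' - \lambda_1\lambda_2 z$ twice gives $z = A(1)\int{z} + A(2)\int^2{z} + (\text{terms linear in } x)$ with $A(1) = \lambda_1+\lambda_2$ and $A(2) = -\lambda_1\lambda_2$, so the companion matrix has characteristic polynomial $\lambda^2 - A(1)\lambda - A(2)$, whose roots are exactly $\lambda_1$ and $\lambda_2$. A quick NumPy check (the exponent values are the ones from the example below; the code itself is mine):

```python
import numpy as np

# If z = c1*exp(l1*x) + c2*exp(l2*x), the double-integration relation gives
#   A1 = l1 + l2  and  A2 = -l1*l2,
# and the companion matrix [[A1, A2], [1, 0]] has eigenvalues l1, l2.
l1, l2 = 0.5, -3.0            # exponents used in the example data
A1, A2 = l1 + l2, -l1 * l2    # what the least-squares step recovers
lambdas = np.linalg.eigvals(np.array([[A1, A2], [1.0, 0.0]]))
# np.sort(lambdas) -> [-3. ,  0.5]
```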
The rest follows by the proposed method. The respective Python code is:
# Import PyArma
from pyarma import *
# Generate data to fit
# Independent variable x
dx = 0.02
x = regspace(dx, dx, 1.5)
nx = x.n_elem
# Independent variable y
dy = 0.05
y = regspace(dy, dy, 1.5)
ny = y.n_elem
# Dependent variable z(x,y)
# Stored in column vector head-to-tail for each y[k]
z = zeros(nx*ny, 1)
for k in range(0, ny):
    z_start = nx*k
    z_end = nx*(k+1)-1
    z[z_start:z_end,:] = 5*exp(0.5*x)*y[k] + 4*exp(-3*x)
# Compute exponentials lambdas
def cumtrapz(x, y):
    # cumulative trapezoidal integral of y wrt x, starting at zero
    return join_vert(zeros(1,1), 0.5 * cumsum(diff(x) @ (y[1:y.n_elem-1,0] + y[0:y.n_elem-2,0])))
def join_horizontal(*args):
    # horizontally join an arbitrary number of matrices
    if type(args[0]) is tuple:
        args = args[0]
    if len(args) == 1:
        return args[0]
    else:
        return join_horiz(args[0], join_horizontal(args[1:len(args)]))
# Build least squares matrix Z to obtain the lambdas
Z = zeros(nx*ny, 2+2*ny)
for k in range(0, ny):
    z_start = nx*k
    z_end = nx*(k+1)-1
    # calculate integrals of z(y[k]) wrt x, for each kth sample in y
    iz1 = cumtrapz(x, z[z_start:z_end,:])
    iz2 = cumtrapz(x, iz1)
    # store in appropriate location
    Z[z_start:z_end, 0:1] = join_horizontal(iz1, iz2)
    Z[z_start:z_end, 2+2*k:3+2*k] = join_horizontal(x, ones(size(x)))
# Solve the least squares and eigenvalue problems
A = pinv(Z)*z
lambdas = eig_gen(mat([
[A[0], A[1]],
[1, 0]
]))[0]
lambdas = real(lambdas)
lambdas.print("lambdas = ")
#lambdas =
# -2.9991
# 0.5000
# Compute exponentials multipliers
# Build least squares matrix X and solve the least squares problem to obtain the multipliers
X = zeros(nx*ny, 2)
for k in range(0, ny):
    x_start = nx*k
    x_end = nx*(k+1)-1
    X[x_start:x_end, :] = join_horizontal(exp(lambdas[1]*x)*y[k], exp(lambdas[0]*x))
# solve the least squares problem
P = pinv(X)*z
P = real(P)
P.print("P = ")
#P =
# 4.9999
# 3.9995
You can try it online here:
http://juangburgos.github.io/FitSumExponentials/
Please upvote my proposed method in the other question if you found it useful.
Edit:
@PayasVartak, the lambdas computation using the eigenvalues is indeed the optimal solution for the two lambdas (there are only two values). The reason you might not be getting the best fit is probably the second least-squares problem, in the "Compute exponentials multipliers" section of the code.
Note that I used the equation:
join_horizontal( exp(lambdas[1]*x)*y[k], exp(lambdas[0]*x) )
for the fit, because I already knew in advance which exponential gets multiplied by y(k). But in general we don't know at this point which of the two exp(lambdas[?]*x) terms is the one that gets multiplied by y(k), so you also need to try:
join_horizontal( exp(lambdas[1]*x), exp(lambdas[0]*x)*y[k] )
and see which one gives the best fit.
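One way to automate that check, sketched in NumPy (the `residual` helper and variable names are mine, not part of the original code; the data mirrors the example above):

```python
import numpy as np

# Same synthetic data as in the answer: z = 5*exp(0.5*x)*y + 4*exp(-3*x),
# stacked head-to-tail for each y sample
nx, ny = 75, 30
x = np.arange(1, nx + 1) * 0.02
y = np.arange(1, ny + 1) * 0.05
z = (5 * np.exp(0.5 * x)[:, None] * y[None, :]
     + 4 * np.exp(-3 * x)[:, None]).reshape(nx * ny, order="F")

lambdas = np.array([-3.0, 0.5])  # as recovered by the eigenvalue step

def residual(order):
    # Build the candidate design matrix where lambdas[order[0]] is the
    # exponential multiplied by y[k], and return the fit residual norm
    la, lb = lambdas[order[0]], lambdas[order[1]]
    X = np.vstack([np.column_stack([np.exp(la * x) * yk, np.exp(lb * x)])
                   for yk in y])
    P, *_ = np.linalg.lstsq(X, z, rcond=None)
    return np.linalg.norm(X @ P - z)

# Try both assignments of the lambdas and keep whichever fits better
best = min([(0, 1), (1, 0)], key=residual)
# best == (1, 0): y multiplies exp(0.5*x)
```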
As @JJacquelin proposed, it might be a good idea to post some of your experimental data so we can help you better.