Calculates least squares estimates using the Newton-Raphson method; multiple-sample case
[q, info] = nrregr (func, p, varargin)
Finds parameter values in (non-linear) weighted least-squares regression problems using the Newton-Raphson method, with numerically obtained values for the Jacobian. It can deal with an arbitrary number of samples, which may share one or more parameters.
- func: string with the name of a user-defined function
  f = func (p, xyw) with p: k-vector with parameters; xyw: (n,c)-matrix; f: n-vector with model predictions
  [f1, f2, ...] = func (p, xyw1, xyw2, ...) with p: k-vector and xywi: (ni,c)-matrix; fi: ni-vector with model predictions
  The output f contains the predictions for the dependent variable; for xyw see below.
- p: (k,2)-matrix with
  p(:,1) initial guesses for parameter values
  p(:,2) binaries with yes (1) or no (0) for iteration (optional)
- xywi (read as xyw1, xyw2, ...): (ni,3)-matrix with
  xywi(:,1) independent variable
  xywi(:,2) dependent variable
  xywi(:,3) weight coefficients (optional)
  xywi(:,>3) data-point specific information (optional)
  The number of data matrices xyw1, xyw2, ... is arbitrary, but at least one is required
- q: matrix like p, but with least squares estimates
- info: 1 if convergence has been successful; 0 otherwise
Convergence is usually fast, but the domain of attraction can be small, depending on data and model. See nmregr for the simplex method, garegr for a genetic algorithm, and nrregr2, nmregr2 and garegr2 for 2 independent variables. See nmvcregr for standard deviations proportional to the mean. Calls nrdregr and the user-defined function 'func'. Set options with nrregr_options. The iteration is terminated if the norm, i.e. the sum of squared derivatives of the sum of squared deviations with respect to the iterated parameters, is less than the maximum norm, or if the number of iterations exceeds a maximum value (see nrregr_options).
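The iteration described above (numerical Jacobian, Newton step, norm-based termination) can be sketched as follows. This is an illustrative Python re-implementation for a single sample, not DEBtool code; the name nr_lsq, the perturbation size for the numerical Jacobian, and the defaults for max_iter and max_norm are assumptions standing in for the settings controlled by nrregr_options.

```python
import numpy as np

def nr_lsq(func, p0, xyw, max_iter=50, max_norm=1e-8):
    # Newton-Raphson iteration for weighted least squares with a numerically
    # obtained Jacobian (illustrative sketch; names and defaults are assumptions)
    p = np.asarray(p0, dtype=float)
    y = xyw[:, 1]
    w = xyw[:, 2] if xyw.shape[1] > 2 else np.ones_like(y)
    for _ in range(max_iter):
        f = func(p, xyw)
        r = y - f                          # residuals
        J = np.empty((len(y), len(p)))     # numerical Jacobian df/dp
        for j in range(len(p)):
            dp = np.zeros_like(p)
            dp[j] = 1e-6 * max(abs(p[j]), 1.0)
            J[:, j] = (func(p + dp, xyw) - f) / dp[j]
        g = J.T @ (w * r)                  # proportional to the gradient of the weighted SSQ
        if g @ g < max_norm:               # norm-based termination as described above
            return p, 1                    # info = 1: convergence successful
        H = J.T @ (w[:, None] * J)         # Gauss-Newton approximation of the Hessian
        p = p + np.linalg.solve(H, g)      # Newton step
    return p, 0                            # info = 0: no convergence

# toy sample in the xyw layout: column 1 independent variable, column 2
# dependent variable, column 3 weights; data roughly follow y = 1 - exp(-0.5 x)
model = lambda p, xyw: p[0] * (1.0 - np.exp(-p[1] * xyw[:, 0]))
xyw = np.column_stack([[1.0, 2.0, 4.0, 8.0],
                       [0.39, 0.63, 0.86, 0.98],
                       [1.0, 1.0, 1.0, 1.0]])
q, info = nr_lsq(model, [1.0, 0.4], xyw)
```

Starting reasonably close to the minimum, the norm drops fast in the last few iterations, which matches the behaviour described above; from a poor start the same loop can diverge, illustrating the small domain of attraction.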
Fill the data matrix, model definition and parameter values in a script-file; see mydata_regr.m for an example. If the data file is large, the parameter settings can best be done in a separate script file, since you probably want to change them many times before the result satisfies. Optimization and plot options can best be set in the script file that contains the data.
Be aware that the result is not always what you expect it to be (for instance due to local minima). Check the result graphically using routine shregr. Check the logic of the parameter values, and check the standard deviations, using routine pregr. If you do not trust the result, try other initial estimates. Use shregr to check that your initial parameter estimates make sense.
Since the output resembles the input for the parameters, continuation is easy. Convergence is less easy for increasing dimensionality. It therefore helps to initially fix parameters whose approximate values are rather certain, and to iterate only those whose values are less certain. Use the default option report = 1. The norm values can first increase and then decrease, but the sum of squared deviations should always decrease. The approach of the norm values to zero should be rather fast, once in the neighbourhood of the proper minimum. Far away from the proper minimum, it might help to restrict the maximum step size. You can also fix some rather well-known parameters first, then find appropriate values for the less well-known parameters, and finally release all parameter values.
Use weight coefficient 0 if you want no effect of that data-point on the parameter estimates, but still want to show that data-point in the plot (see shregr). If the coefficient of variation is approximately constant, weight coefficients inversely proportional to the squared values of the dependent variable are an attractive alternative. There is no need to use the value of the independent variable in the user-defined function.
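The effect of weight coefficient 0 does not depend on the optimizer, so it can be shown with a closed-form weighted linear fit; the helper wlin_fit below is purely illustrative and not part of DEBtool.

```python
import numpy as np

def wlin_fit(xyw):
    # weighted linear fit y ~ a + b*x, with weights taken from the third
    # column of the xyw layout (illustrative helper, not DEBtool code)
    x, y, w = xyw[:, 0], xyw[:, 1], xyw[:, 2]
    A = np.column_stack([np.ones_like(x), x])
    return np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))

# the last point is an outlier; with weight 1 it distorts the slope
xyw = np.array([[1.0, 1.0, 1.0],
                [2.0, 2.1, 1.0],
                [3.0, 2.9, 1.0],
                [5.0, 9.0, 1.0]])
b_with = wlin_fit(xyw)

xyw[3, 2] = 0.0        # weight 0: the point no longer affects the estimates,
b_zero = wlin_fit(xyw) # but a plot routine such as shregr would still show it
```

With weight 0 the fourth row contributes nothing to the normal equations, so b_zero equals the fit through the first three points only.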
This allows the implementation of one or more sloppy constraints on the parameter values: define an extra dataset with one row, for instance xyw2 = [arbitrary_number value weight], and a second output of the user-defined function that computes a function of the parameter values that should not differ too much from value. If you give it a high weight, the resulting parameters will be such that this function of the parameters is close to value.
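A minimal numerical illustration of this pseudo-data trick, using the one-parameter model y = b*x so the weighted least-squares estimate has a closed form. The function fit_slope and its arguments are hypothetical, not DEBtool code; the pseudo-observation plays the role of xyw2 = [arbitrary_number value weight] combined with a second model output that simply returns b.

```python
import numpy as np

def fit_slope(x, y, target=None, W=0.0):
    # weighted least-squares slope of y = b*x; optionally a pseudo-observation
    # pulls b toward `target` with weight W (the sloppy constraint).
    # Minimizes sum (y - b*x)^2 + W*(target - b)^2, giving the closed form below.
    num, den = x @ y, x @ x
    if target is not None:
        num += W * target     # contribution of the one-row pseudo dataset
        den += W
    return num / den

x = np.array([1.0, 2.0, 3.0])
y = np.array([3.0, 6.2, 8.8])
b_free = fit_slope(x, y)                     # unconstrained estimate
b_soft = fit_slope(x, y, target=2.0, W=1e4)  # high weight pulls b toward 2
```

The higher the weight W, the closer the estimate is forced to the target; with a modest weight the constraint stays sloppy and merely nudges the fit.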