//
// A rust binding for the GSL library by Guillaume Gomez (guillaume1.gomez@gmail.com)
//
/*!
# Multidimensional Root-Finding

This chapter describes functions for multidimensional root-finding (solving nonlinear systems with
n equations in n unknowns). The library provides low-level components for a variety of iterative
solvers and convergence tests. These can be combined by the user to achieve the desired solution,
with full access to the intermediate steps of the iteration. Each class of methods uses the same
framework, so that you can switch between solvers at runtime without needing to recompile your
program. Each instance of a solver keeps track of its own state, allowing the solvers to be used
in multi-threaded programs. The solvers are based on the original Fortran library MINPACK.

The header file `gsl_multiroots.h` contains prototypes for the multidimensional root-finding
functions and related declarations.

## Overview

The problem of multidimensional root finding requires the simultaneous solution of n equations,
f_i, in n variables, x_i,

```text
f_i (x_1, \dots, x_n) = 0 \qquad\hbox{for}~i = 1 \dots n.
```

In general there are no bracketing methods available for n-dimensional systems, and no way of
knowing whether any solutions exist. All algorithms proceed from an initial guess using a
variant of the Newton iteration,

```text
x \to x' = x - J^{-1} f(x)
```

where x and f are vector quantities and J is the Jacobian matrix J_{ij} = \partial f_i / \partial x_j.

Additional strategies can be used to enlarge the region of convergence. These include requiring a
decrease in the norm |f| on each step proposed by Newton’s method, or taking steepest-descent
steps in the direction of the negative gradient of |f|.

Several root-finding algorithms are available within a single framework. The user provides a
high-level driver for the algorithms, and the library provides the individual functions necessary
for each of the steps. There are three main phases of the iteration. The steps are,

- initialize solver state, `s`, for algorithm `T`
- update `s` using the iteration `T`
- test `s` for convergence, and repeat iteration if necessary
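
A minimal driver loop following these three phases might look like the sketch below. The system
being solved, the choice of solver, the starting point and the convergence tolerance are all
illustrative; the hand-written residual check could also be replaced by the `test_residual`
helper used in the tests at the bottom of this module.

```no_run
use rgsl::types::multiroot::{MultiRootFSolver, MultiRootFSolverType};

// Solve f0 = 1 - x0, f1 = x0 - x1 (root at x = [1, 1]).
let mut solver = MultiRootFSolver::new(&MultiRootFSolverType::hybrids(), 2).unwrap();
let guess = rgsl::VectorF64::from_slice(&[-10.0, 1.0]).unwrap();

// Phase 1: initialize the solver state with the system and a starting point.
solver.set(|x, f| {
    f.set(0, 1.0 - x.get(0));
    f.set(1, x.get(0) - x.get(1));
    rgsl::Value::Success
}, 2, &guess);

for _ in 0..100 {
    // Phase 2: update the solver state with a single iteration.
    if !matches!(solver.iterate(), rgsl::Value::Success) {
        break;
    }
    // Phase 3: test for convergence (simple absolute residual check).
    let f = solver.f();
    if f.get(0).abs() < 1e-7 && f.get(1).abs() < 1e-7 {
        break;
    }
}
let root = solver.root();
println!("root = [{}, {}]", root.get(0), root.get(1));
```
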
The evaluation of the Jacobian matrix can be problematic, either because programming the derivatives
is intractable or because computation of the n^2 terms of the matrix becomes too expensive.
For these reasons the algorithms provided by the library are divided into two classes according to
whether the derivatives are available or not.

The state for solvers with an analytic Jacobian matrix is held in a `gsl_multiroot_fdfsolver` struct.
The updating procedure requires both the function and its derivatives to be supplied by the user.

The state for solvers which do not use an analytic Jacobian matrix is held in a
`gsl_multiroot_fsolver` struct. The updating procedure uses only function evaluations (not
derivatives). The algorithms estimate the matrix J or J^{-1} by approximate methods.
!*/
use ffi::FFI;
use sys;
use sys::libc::{c_int, c_void};
ffi_wrapper!(
MultiRootFSolverType,
*const sys::gsl_multiroot_fsolver_type,
"The multiroot algorithms described in this section do not require any derivative information to be
supplied by the user. Any derivatives needed are approximated by finite differences.
Note that if the finite-differencing step size chosen by these routines is inappropriate,
an explicit user-supplied numerical derivative can always be used with
derivative-based algorithms."
);
impl MultiRootFSolverType {
/// This is a version of the Hybrid algorithm which replaces calls to the Jacobian function by
/// its finite difference approximation. The finite difference approximation is computed
/// using `gsl_multiroots_fdjac()` with a relative step size of `GSL_SQRT_DBL_EPSILON`.
/// Note that this step size will not be suitable for all problems.
#[doc(alias = "gsl_multiroot_fsolver_hybrids")]
pub fn hybrids() -> MultiRootFSolverType {
ffi_wrap!(gsl_multiroot_fsolver_hybrids)
}
///This is a finite difference version of the Hybrid algorithm without internal scaling.
#[doc(alias = "gsl_multiroot_fsolver_hybrid")]
pub fn hybrid() -> MultiRootFSolverType {
ffi_wrap!(gsl_multiroot_fsolver_hybrid)
}
/// The discrete Newton algorithm is the simplest method of solving a multidimensional
/// system. It uses the Newton iteration
///
///```text
///x \to x - J^{-1} f(x)
///```
///
/// where the Jacobian matrix J is approximated by taking finite differences of the function f.
/// The approximation scheme used by this implementation is,
///
///```text
///J_{ij} = (f_i(x + \delta_j) - f_i(x)) / \delta_j
///```
///
/// where \delta_j is a step of size \sqrt\epsilon |x_j| with \epsilon being the machine
/// precision (\epsilon \approx 2.22 \times 10^{-16}). The order of convergence of Newton’s
/// algorithm is quadratic, but the finite differences require n^2 function evaluations on
/// each iteration. The algorithm may become unstable if the finite differences are not a
/// good approximation to the true derivatives.
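///
/// For illustration only (this is not the library's internal finite-difference routine), the
/// approximation scheme above can be sketched for a two-variable system in plain Rust as
/// follows; the `max(1.0)` floor on the step size is an extra safeguard for `x_j == 0` and is
/// not part of the formula quoted above.
///
/// ```
/// // f(x) = [1 - x0, x0 - x1^2]; the exact Jacobian at x = [2, 3] is [[-1, 0], [1, -6]].
/// fn f(x: [f64; 2]) -> [f64; 2] {
///     [1.0 - x[0], x[0] - x[1] * x[1]]
/// }
///
/// let x = [2.0_f64, 3.0];
/// let fx = f(x);
/// let mut jac = [[0.0_f64; 2]; 2];
/// for col in 0..2 {
///     // delta_j = sqrt(eps) * |x_j|
///     let delta = f64::EPSILON.sqrt() * x[col].abs().max(1.0);
///     let mut shifted = x;
///     shifted[col] += delta;
///     let f_shifted = f(shifted);
///     for row in 0..2 {
///         jac[row][col] = (f_shifted[row] - fx[row]) / delta;
///     }
/// }
/// assert!((jac[0][0] - (-1.0)).abs() < 1e-6);
/// assert!((jac[1][1] - (-6.0)).abs() < 1e-4);
/// ```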
#[doc(alias = "gsl_multiroot_fsolver_dnewton")]
pub fn dnewton() -> MultiRootFSolverType {
ffi_wrap!(gsl_multiroot_fsolver_dnewton)
}
/// The Broyden algorithm is a version of the discrete Newton algorithm which attempts to
/// avoid the expensive update of the Jacobian matrix on each iteration. The changes to
/// the Jacobian are also approximated, using a rank-1 update,
///
///```text
///J^{-1} \to J^{-1} - (J^{-1} df - dx) dx^T J^{-1} / dx^T J^{-1} df
///```
///
/// where the vectors dx and df are the changes in x and f. On the first iteration the inverse
/// Jacobian is estimated using finite differences, as in the discrete Newton algorithm.
///
/// This approximation gives a fast update but is unreliable if the changes are not small, and
/// the estimate of the inverse Jacobian becomes worse as time passes. The algorithm has a
/// tendency to become unstable unless it starts close to the root. The Jacobian is refreshed
/// if this instability is detected (consult the source for details).
///
/// This algorithm is included only for demonstration purposes, and is not recommended for
/// serious use.
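///
/// As an illustration of the rank-1 update formula above (again, not the library's internal
/// code), a 2x2 version in plain Rust might look like the following, using made-up values for
/// the inverse Jacobian and the steps:
///
/// ```
/// // Approximate inverse Jacobian, step dx and change in function value df.
/// let mut jinv = [[1.0_f64, 0.2], [0.0, 1.0]];
/// let dx = [0.1_f64, -0.2];
/// let df = [0.05_f64, -0.3];
///
/// // u = J^{-1} df - dx
/// let u = [
///     jinv[0][0] * df[0] + jinv[0][1] * df[1] - dx[0],
///     jinv[1][0] * df[0] + jinv[1][1] * df[1] - dx[1],
/// ];
/// // v^T = dx^T J^{-1}
/// let v = [
///     dx[0] * jinv[0][0] + dx[1] * jinv[1][0],
///     dx[0] * jinv[0][1] + dx[1] * jinv[1][1],
/// ];
/// // denominator: dx^T J^{-1} df
/// let denom = v[0] * df[0] + v[1] * df[1];
/// for i in 0..2 {
///     for j in 0..2 {
///         jinv[i][j] -= u[i] * v[j] / denom;
///     }
/// }
///
/// // The updated inverse Jacobian satisfies the secant condition J^{-1} df = dx.
/// let jinv_df_0 = jinv[0][0] * df[0] + jinv[0][1] * df[1];
/// assert!((jinv_df_0 - dx[0]).abs() < 1e-12);
/// ```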
#[doc(alias = "gsl_multiroot_fsolver_broyden")]
pub fn broyden() -> MultiRootFSolverType {
ffi_wrap!(gsl_multiroot_fsolver_broyden)
}
}
ffi_wrapper!(
MultiRootFSolver<'a>,
*mut sys::gsl_multiroot_fsolver,
gsl_multiroot_fsolver_free
;inner_call: sys::gsl_multiroot_function_struct => sys::gsl_multiroot_function_struct{ f: None, n: 0, params: std::ptr::null_mut() };
;inner_closure: Option<Box<dyn Fn(&::VectorF64, &mut ::VectorF64) -> ::Value + 'a>> => None;,
"This is a workspace for multidimensional root-finding without derivatives."
);
impl<'a> MultiRootFSolver<'a> {
/// This function returns a newly allocated instance of a solver of type `t` for a system of
/// `n` unknowns.
///
/// If there is insufficient memory to create the solver then `None` is returned and the error
/// handler is invoked with an error code of `Value::NoMemory`.
#[doc(alias = "gsl_multiroot_fsolver_alloc")]
pub fn new(t: &MultiRootFSolverType, n: usize) -> Option<MultiRootFSolver<'a>> {
let ptr = unsafe { sys::gsl_multiroot_fsolver_alloc(t.unwrap_shared(), n) };
if ptr.is_null() {
None
} else {
Some(MultiRootFSolver::wrap(ptr))
}
}
/// This function initializes, or reinitializes, an existing solver to use the function `f`
/// with `n` unknowns, starting from the initial guess `x`.
#[doc(alias = "gsl_multiroot_fsolver_set")]
pub fn set<F: Fn(&::VectorF64, &mut ::VectorF64) -> ::Value + 'a>(
&mut self,
f: F,
n: usize,
x: &::VectorF64,
) -> ::Value {
unsafe extern "C" fn inner_f<A: Fn(&::VectorF64, &mut ::VectorF64) -> ::Value>(
x: *const sys::gsl_vector,
params: *mut c_void,
f: *mut sys::gsl_vector,
) -> c_int {
let g: &A = &*(params as *const A);
let x_new = ::VectorF64::soft_wrap(x as *const _ as *mut _);
::Value::into(g(&x_new, &mut ::VectorF64::soft_wrap(f)))
}
self.inner_call = sys::gsl_multiroot_function_struct {
f: Some(inner_f::<F>),
n,
params: &f as *const _ as *mut _,
};
self.inner_closure = Some(Box::new(f));
::Value::from(unsafe {
sys::gsl_multiroot_fsolver_set(
self.unwrap_unique(),
&mut self.inner_call,
x.unwrap_shared(),
)
})
}
/// This function performs a single iteration of the solver `s`. If the iteration encounters an
/// unexpected problem then an error code will be returned,
///
/// ::Value::BadFunc
/// the iteration encountered a singular point where the function evaluated to Inf or NaN.
///
/// ::Value::Failure
/// the iteration is not making any progress, preventing the algorithm from continuing.
///
/// The solver maintains a current best estimate of the root and its function value at all
/// times. This information can be accessed with the following auxiliary functions.
#[doc(alias = "gsl_multiroot_fsolver_iterate")]
pub fn iterate(&mut self) -> ::Value {
::Value::from(unsafe { sys::gsl_multiroot_fsolver_iterate(self.unwrap_unique()) })
}
/// This function returns the current estimate of the root for the solver `s`, given by `s->x`.
#[doc(alias = "gsl_multiroot_fsolver_root")]
pub fn root(&self) -> ::VectorF64 {
::VectorF64::soft_wrap(unsafe { sys::gsl_multiroot_fsolver_root(self.unwrap_shared()) })
}
/// This function returns the last step `dx` taken by the solver `s`, given by `s->dx`.
#[doc(alias = "gsl_multiroot_fsolver_dx")]
pub fn dx(&self) -> ::VectorF64 {
::VectorF64::soft_wrap(unsafe { sys::gsl_multiroot_fsolver_dx(self.unwrap_shared()) })
}
/// This function returns the function value `f(x)` at the current estimate of the root for
/// the solver `s`, given by `s->f`.
#[doc(alias = "gsl_multiroot_fsolver_f")]
pub fn f(&self) -> ::VectorF64 {
::VectorF64::soft_wrap(unsafe { sys::gsl_multiroot_fsolver_f(self.unwrap_shared()) })
}
}
#[cfg(any(test, doctest))]
mod tests {
/// This doc block ensures that a closure which does not live long enough can't be passed to
/// `set` (the first example below must fail to compile).
///
/// ```compile_fail
/// use rgsl::*;
/// use rgsl::types::multiroot::{MultiRootFSolver, MultiRootFSolverType};
///
/// fn set(root: &mut MultiRootFSolver) {
/// let dummy = "lalal".to_owned();
/// root.set(|x, y| {
/// println!("==> {:?}", dummy);
/// y.set(0, 1.0 - x.get(0));
/// y.set(1, x.get(0) - x.get(1));
/// rgsl::Value::Success}, 2, &rgsl::VectorF64::from_slice(&[-10.0, 1.0]).unwrap());
/// }
///
/// let mut root = MultiRootFSolver::new(&MultiRootFSolverType::hybrid(), 2).unwrap();
/// set(&mut root);
/// let status = root.iterate();
/// ```
///
/// Same but a working version:
///
/// ```
/// use rgsl::types::multiroot::{MultiRootFSolver, MultiRootFSolverType};
/// use rgsl::*;
///
/// fn set(root: &mut MultiRootFSolver) {
/// root.set(|x, y| {
/// y.set(0, 1.0 - x.get(0));
/// y.set(1, x.get(0) - x.get(1));
/// rgsl::Value::Success}, 2, &rgsl::VectorF64::from_slice(&[-10.0, 1.0]).unwrap());
/// }
///
/// let mut root = MultiRootFSolver::new(&MultiRootFSolverType::hybrid(), 2).unwrap();
/// set(&mut root);
/// let status = root.iterate();
/// ```
///
use super::*;
use multiroot::test_residual;
use VectorF64;
/// Test system of equations for the solver; the callback must return a success
/// (or failure) code.
fn rosenbrock_f(x: &VectorF64, f: &mut VectorF64) -> ::Value {
f.set(0, 1.0 - x.get(0));
f.set(1, x.get(0) - x.get(1).powf(2.0));
::Value::Success
}
fn print_state(solver: &mut MultiRootFSolver, iteration: usize) {
let f = solver.f();
let x = solver.root();
println!(
"iter: {}, f = [{:+.2e}, {:+.2e}], x = [{:+.5}, {:+.5}]",
iteration,
f.get(0),
f.get(1),
x.get(0),
x.get(1)
)
}
#[test]
fn test_multiroot_fsolver() {
// setup workspace
let mut multi_root = MultiRootFSolver::new(&MultiRootFSolverType::hybrid(), 2).unwrap();
let array_size: usize = 2;
let guess_value = VectorF64::from_slice(&[-10.0, -5.0]).unwrap();
multi_root.set(&rosenbrock_f, array_size, &guess_value);
// iteration counters
let max_iter: usize = 100;
let mut iter = 0;
// convergence checks
let mut status = ::Value::Continue;
let epsabs = 1e-6;
print_state(&mut multi_root, 0);
while matches!(status, ::Value::Continue) && iter < max_iter {
// iterate solver
status = multi_root.iterate();
// check the iteration for failure (e.g. GSL_EBADFUNC: f evaluated to Inf or NaN,
// or GSL_ENOPROG: the solver is not making any progress)
if !matches!(status, ::Value::Success) {
break;
}
// print current iteration
print_state(&mut multi_root, iter);
// test for convergence
let f_value = multi_root.f();
status = test_residual(&f_value, epsabs);
// check if iteration succeeded
if matches!(status, ::Value::Success) {
println!("Converged");
}
iter += 1;
}
assert!(matches!(status, ::Value::Success))
}
}