1 Introduction
    1.1 What is ALGLIB
    1.2 ALGLIB license
    1.3 Documentation license
    1.4 Reference Manual and User Guide
    1.5 Acknowledgements
2 ALGLIB structure
    2.1 Packages
    2.2 Subpackages
    2.3 Open Source and Commercial versions
3 Compatibility
    3.1 CPU
    3.2 OS
    3.3 Compiler
    3.4 Optimization settings
4 Compiling ALGLIB
    4.1 Adding to your project
    4.2 Configuring for your compiler
    4.3 Utilizing SIMD
        4.3.1 Overview
        4.3.2 Achieving best performance
        4.3.3 Achieving best portability
        4.3.4 Disabling SIMD instruction sets at compile time
    4.4 Utilizing SMP
    4.5 Examples (free and commercial editions)
        4.5.1 Introduction
        4.5.2 Compiling under Windows
        4.5.3 Compiling under Linux
5 Using ALGLIB
    5.1 Thread-safety
    5.2 Global definitions
    5.3 Datatypes
    5.4 Constants
    5.5 Functions
    5.6 Working with vectors and matrices
    5.7 Using functions: 'expert' and 'friendly' interfaces
    5.8 Handling errors
    5.9 Working with Level 1 BLAS functions
    5.10 Reading data from CSV files
6 Working with commercial version
    6.1 Benefits of commercial version
    6.2 Working with SIMD support (Intel/AMD users)
    6.3 Using multithreading
        6.3.1 General information
        6.3.2 Compiling ALGLIB in the multithreaded mode
        6.3.3 Two kinds of parallelism
        6.3.4 Activating parallelism
        6.3.5 Controlling cores count
        6.3.6 More on parallel callbacks
        6.3.7 SMT (CMT/hyper-threading) issues
    6.4 Linking with Intel MKL
        6.4.1 Using lightweight Intel MKL supplied by ALGLIB Project
        6.4.2 Using your own installation of Intel MKL
7 Advanced topics
    7.1 Using Red Zones to find memory access violations
    7.2 Replacing stdlib rand() as an entropy source with OpenSSL implementation
    7.3 Exception-free mode
    7.4 Partial compilation
    7.5 Testing ALGLIB
8 ALGLIB packages and subpackages
    8.1 AlglibMisc package
    8.2 DataAnalysis package
    8.3 DiffEquations package
    8.4 FastTransforms package
    8.5 Integration package
    8.6 Interpolation package
    8.7 LinAlg package
    8.8 Optimization package
    8.9 Solvers package
    8.10 SpecialFunctions package
    8.11 Statistics package

1 Introduction

1.1 What is ALGLIB

ALGLIB is a cross-platform numerical analysis and data mining library. It supports several programming languages (C++, C#, Java, Delphi, VB.NET, Python) and several operating systems (Windows, *nix family).

ALGLIB features include:

ALGLIB Project (the company behind ALGLIB) delivers to you several editions of ALGLIB:

Free Edition is a serial version without multithreading support and with a limited set of SIMD optimizations. Commercial Edition is a heavily optimized version of ALGLIB: it supports multithreading, it is extensively optimized, and on Intel platforms commercial users may enjoy a precompiled version of ALGLIB which internally calls Intel MKL to accelerate low-level tasks. We have obtained a license from Intel Corp. which allows us to integrate Intel MKL into ALGLIB, so you do not have to get a separate license from Intel.

1.2 ALGLIB license

ALGLIB Free Edition is distributed under a license which favors non-commercial usage, but is not well suited for commercial applications:

ALGLIB Commercial Edition is distributed under a license which is friendly to commercial users. A copy of the commercial license can be found at http://www.alglib.net/commercial.php.

1.3 Documentation license

This reference manual is licensed under a BSD-like documentation license:

Copyright 1994-2017 Sergey Bochkanov, ALGLIB Project. All rights reserved.

Redistribution and use of this document (ALGLIB Reference Manual) with or without modification, are permitted provided that such redistributions will retain the above copyright notice, this condition and the following disclaimer as the first (or last) lines of this file.

THIS DOCUMENTATION IS PROVIDED BY THE ALGLIB PROJECT "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE ALGLIB PROJECT BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS DOCUMENTATION, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

1.4 Reference Manual and User Guide

ALGLIB Project provides two sources of information: ALGLIB Reference Manual (this document) and ALGLIB User Guide.

ALGLIB Reference Manual contains a full description of all publicly accessible ALGLIB units, accompanied by examples. The Reference Manual is focused on the source code: it documents units, functions, structures and so on. If you want to know what unit YYY can do or what subroutines unit ZZZ contains, the Reference Manual is the place to go. Free software needs free documentation - that's why the ALGLIB Reference Manual is licensed under a BSD-like documentation license.

In addition to the Reference Manual, we provide the ALGLIB User Guide. The User Guide is focused on more general questions: how fast is ALGLIB? How reliable is it? What are the strong and weak sides of the algorithms used? We aim to make the ALGLIB User Guide an important source of information both about ALGLIB and about numerical analysis algorithms in general. We want it to be a book about algorithms, not just software documentation. And we want it to be unique - that's why the ALGLIB User Guide is distributed under a less permissive personal-use-only license.

1.5 Acknowledgements

ALGLIB would not have been possible without the contributions of the following open source projects:

We also want to thank the developers at Intel's development center for their help during the MKL integration.

2 ALGLIB structure

2.1 Packages

ALGLIB is a C++ interface to the computational core written in C. Both the C library and the C++ wrapper are automatically generated by code generation tools developed within the ALGLIB project. ALGLIB includes 11 packages and 5 support units:

The packages are:

The support units are:

One package may rely on others, but we have tried to reduce the number of dependencies. Every package relies on ap.cpp, and many packages rely on alglibinternal.cpp and the kernel files. Many packages require only these five support units to work, and most others need significantly fewer than the full set of packages. For example, statistics.cpp requires the five files mentioned above and only one additional package, specialfunctions.cpp.

2.2 Subpackages

There is one more concept to learn - subpackages. If you look at the list of ALGLIB packages, you will see that each package includes several subpackages. For example, linalg.cpp includes the trfac, svd, evd and other subpackages. These subpackages do not exist as separate files, namespaces or other entities. They are just subsets of one large unit which provide significantly different functionality. They have separate documentation sections, but if you want to use the svd subpackage, you have to include linalg.h, not svd.h.

2.3 Open Source and Commercial versions

ALGLIB comes in two versions - an open source (GPL-licensed) one and a commercial (closed source) one. Both versions have the same functionality, i.e. they solve the same set of problems. However, the commercial version differs from the open source one in the following aspects:

This documentation applies to both versions of ALGLIB. A detailed description of the commercial version can be found below.

3 Compatibility

3.1 CPU

ALGLIB is compatible with any CPU which:

Most mainstream CPUs (in particular, x86, x86_64 and ARM) satisfy these requirements.

As for the Intel architecture, ALGLIB works with both FPU-based and SIMD-based implementations of floating point math.

3.2 OS

ALGLIB for C++ (both open source and commercial versions) can be compiled in OS-agnostic mode (no OS-specific preprocessor definitions), in which case it is compatible with any OS which supports the C++98 standard library. In particular, it will work under any POSIX-compatible OS and under Windows.

If you want to use the multithreading capabilities of the commercial version of ALGLIB, you should compile it in OS-specific mode by #defining either AE_OS=AE_WINDOWS or AE_OS=AE_POSIX at compile time, depending on the OS being used. The former corresponds to any modern OS from the Windows family (32/64-bit Windows XP and later), while the latter means almost any POSIX-compatible OS. This applies only to the commercial version of ALGLIB. The open source version is always OS-agnostic, even in the presence of OS-specific definitions.

3.3 Compiler

ALGLIB is compatible with any C++ compiler which:

All modern compilers (MSVC, GCC, Clang, ICC) satisfy these requirements.

However, some very old compilers (a ten-year-old version of Borland C++ Builder, for example) may emit code which does not correctly handle IEEE special values. If you use one of these old compilers, we recommend running the ALGLIB test suite to ensure that the library works correctly.

3.4 Optimization settings

ALGLIB is compatible with any kind of optimizing compiler as long as:

Generally, all optimizations marked as "safe" by the compiler vendor are allowed. For example, ALGLIB can be compiled:

On the other hand, the following "unsafe" optimizations will break ALGLIB:

4 Compiling ALGLIB

4.1 Adding to your project

Adding ALGLIB to your project is easy - just pick the packages you need and... add them to your project! With the most widely used compilers (GCC, MSVC) it will work without any additional settings. In other cases you will need to define several preprocessor definitions (this topic is detailed below), but everything will still be simple.

By "adding to your project" we mean that you should a) compile .cpp files with the rest of your project, and b) include .h files you need. Do not include .cpp files - these files must be compiled separately, not as part of some larger source file. The only files you should include are .h files, stored in the /src folder of the ALGLIB distribution.

As you see, ALGLIB has no project files or makefiles. Why? There are several reasons:

In any case, compiling ALGLIB is so simple that even without a project file you can do it in several minutes.

4.2 Configuring for your compiler

If you use modern versions of MSVC, GCC, Clang or ICC, you don't need to configure ALGLIB at all. But if you use outdated versions of these compilers (or something else), you may need to tune the definitions of several data types:

ALGLIB tries to autodetect your compiler and define these types in a compiler-specific manner:

In most cases this is enough. But if anything goes wrong, you have several options:

4.3 Utilizing SIMD

4.3.1 Overview

ALGLIB has a two-layered structure: a set of basic performance-critical primitives is implemented using optimized code (kernels_sse2.cpp, kernels_avx2.cpp and kernels_fma.cpp), and the rest of the library is built on top of these primitives. By default, ALGLIB uses generic C code to implement these primitives (matrix multiplication, decompositions, etc.). This code works on any CPU architecture that has IEEE-754 floating point support (including exotic DSPs).

However, much better performance may be achieved by utilizing the SIMD-capable high-performance kernels included in ALGLIB. These kernels are implemented using C/C++ SIMD intrinsics (as opposed to assembly language) and provide good performance combined with the benefits of using a high-level language like C/C++.

As of ALGLIB 3.18, the same set of SIMD kernels is included in both the Free and Commercial editions.

The Commercial Edition may be linked with Intel MKL, which replaces some of ALGLIB's own SIMD kernels. Intel MKL is written in assembly language and has much better performance than the C/C++ SIMD code included in ALGLIB. This feature is absent in the Free Edition.

Presently ALGLIB supports only x86/x64 SIMD, including the SSE2, AVX2 and FMA instruction sets. This support can be activated by compiling ALGLIB with the AE_CPU=AE_INTEL macro symbol #defined at the global level. The sections below discuss how to compile ALGLIB in order to get the fastest executable possible or, alternatively, the most portable one.

4.3.2 Achieving best performance

Obviously, the fastest executable can be obtained by compiling the entire ALGLIB (both the low-level kernels and the code relying on them) using the best SIMD instruction set available, e.g. FMA. In this case the compiler may use SIMD to accelerate both the SIMD-capable kernels and the generic C code, e.g. use SIMD registers to pass parameters to functions, perform fast memcpy() and so on.

In order to do so, two steps are necessary:

The former tells ALGLIB that it may use C/C++ SIMD intrinsics to accelerate computations. Depending on the compiler being used, the latter step can be done as follows:

The downside of this approach is that your entire application becomes less portable. Of course, ALGLIB includes a CPU dispatcher which checks at runtime which SIMD instruction sets are available on your CPU. If your CPU lacks FMA, ALGLIB won't call functions from kernels_fma.cpp. Similarly, if you don't have AVX2, kernels_avx2.cpp will be ignored. The problem is that because the entire ALGLIB was compiled with SIMD support, the compiler may insert SIMD instructions on its own to accelerate even the generic C parts.

The point above applies to some degree to all modern compilers, although the behavior may change from version to version and may depend on the optimization settings. In some cases (usually with optimization turned off) compilers generate completely portable code even in the presence of SIMD support. As a counterexample, some MSVC versions are known to replace generic C code like a+b*c with fused multiply-adds when the code is compiled with /arch:AVX2 - which, by the way, violates the language specification.

4.3.3 Achieving best portability

If one needs a SIMD-capable application with portability guarantees, one has to:

ALGLIB guarantees that it won't call functions from kernels_fma.cpp without checking for FMA support first, won't access kernels_avx2.cpp without checking for AVX2 and so on. Thus, when the library is compiled this way, it will have guaranteed portability across x86/x64 systems.

4.3.4 Disabling SIMD instruction sets at compile time

By default, ALGLIB includes SIMD kernels for all supported x64 SIMD instruction sets: SSE2, AVX2 and FMA. Of course, it has a CPU dispatcher which detects CPU support for SIMD and chooses the proper SIMD kernel or the generic C fallback. However, in some cases SIMD instruction sets have to be deactivated at compile time. Most often this happens when one has to compile ALGLIB with a really outdated compiler which does not support post-SSE SIMD intrinsics.

It is possible to selectively disable a SIMD instruction set of your choice by combining the AE_CPU=AE_INTEL #definition with one of the following #defines:

4.4 Utilizing SMP

If you want to use the multithreading capabilities of the Commercial Edition, you should compile it in OS-specific mode by #defining either AE_OS=AE_WINDOWS, AE_OS=AE_POSIX or AE_OS=AE_LINUX (POSIX with Linux-specific extensions) at compile time, depending on the OS being used. The former corresponds to any modern OS from the Windows family (32/64-bit Windows XP and later), while the latter two cover almost any POSIX-compatible OS (or any OS from the Linux family). Only threading-related functions are used by ALGLIB: pthreads under POSIX systems, and the threading subset of WinAPI under Windows.

The paragraph above applies only to the Commercial Edition. The open source version is always OS-agnostic (it does not use functions beyond the C/C++ standard library), even in the presence of OS-specific definitions.

4.5 Examples (free and commercial editions)

4.5.1 Introduction

In this section we'll consider different compilation scenarios for free and commercial versions of ALGLIB - from simple platform-agnostic compilation to compiling/linking with MKL extensions.

We assume that you unpacked the ALGLIB distribution into the current directory and saved the demo.cpp file, whose code is given below, there as well. Thus, in the current directory you should have exactly one file (demo.cpp) and exactly one subdirectory (the alglib-cpp folder with the ALGLIB distribution).

4.5.2 Compiling under Windows

The listing below contains a very basic program which uses ALGLIB to perform matrix-matrix multiplication. The program then evaluates the performance of GEMM (the function being called) and prints the result to the console. We'll show how the performance of this program steadily increases as we add more and more sophisticated compiler options.

demo.cpp (WINDOWS EXAMPLE)
#include <stdio.h>
#include <math.h>
#include <windows.h>
#include "linalg.h"

double counter()
{
    return 0.001*GetTickCount();
}

int main()
{
    alglib::real_2d_array a, b, c;
    int n = 2000;
    int i, j;
    double timeneeded, flops;
    
    // Initialize arrays
    a.setlength(n, n);
    b.setlength(n, n);
    c.setlength(n, n);
    for(i=0; i<n; i++)
        for(j=0; j<n; j++)
        {
            a[i][j] = alglib::randomreal()-0.5;
            b[i][j] = alglib::randomreal()-0.5;
            c[i][j] = 0.0;
        }
    
    // Set global threading settings (applied to all ALGLIB functions);
    // default is to perform serial computations, unless parallel execution
    // is activated. Parallel execution tries to utilize all cores; this
    // behavior can be changed with alglib::setnworkers() call.
    alglib::setglobalthreading(alglib::parallel);
    
    // Perform matrix-matrix product.
    flops = 2*pow((double)n, (double)3);
    timeneeded = counter();
    alglib::rmatrixgemm(
        n, n, n,
        1.0,
        a, 0, 0, 0,
        b, 0, 0, 1,
        0.0,
        c, 0, 0);
    timeneeded = counter()-timeneeded;
    
    // Evaluate performance
    printf("Performance is %.1f GFLOPS\n", (double)(1.0E-9*flops/timeneeded));
    
    return 0;
}

The examples below cover Windows compilation from the command line with MSVC. It is very straightforward to adapt them to compilation from the MSVC IDE - or to other compilers. We assume that you have already called the %VCINSTALLDIR%\bin\amd64\vcvars64.bat batch file which loads the 64-bit build environment (or its 32-bit counterpart). We also assume that the current directory is clean before the example is executed (i.e. it has ONLY the demo.cpp file and the alglib-cpp folder). We used a 3.2 GHz 4-core CPU for this test.

The first example covers platform-agnostic compilation without optimization settings - the simplest way to compile ALGLIB. This step is the same in both the open source and commercial editions. However, in platform-agnostic mode ALGLIB is unable to use all the performance-related features present in the commercial edition.

We start by copying all the .cpp and .h files to the current directory, then compile them along with demo.cpp. In this and the following examples we omit the compiler output for the sake of simplicity.

OS-agnostic mode, no compiler optimizations
> copy alglib-cpp\src\*.* .
> cl /I. /EHsc /Fedemo.exe *.cpp
> demo.exe
Performance is 0.7 GFLOPS

Well, 0.7 GFLOPS is not very impressive for a 3.2 GHz CPU... Let's add /Ox to the compiler parameters.

OS-agnostic mode, /Ox optimization
> cl /I. /EHsc /Fedemo.exe /Ox *.cpp
> demo.exe
Performance is 0.9 GFLOPS

Still not impressive. Let's turn on optimizations for the x86 architecture: define AE_CPU=AE_INTEL and add /arch:AVX2. This option provides some speed-up in both the free and commercial editions of ALGLIB.

OS-agnostic mode, ALGLIB knows it is x86/x64
> cl /I. /EHsc /Fedemo.exe /Ox /DAE_CPU=AE_INTEL /arch:AVX2 *.cpp
> demo.exe
Performance is 4.5 GFLOPS

That's good, but we have 4 cores - and only one of them was used. Defining AE_OS=AE_WINDOWS allows ALGLIB to use Windows threads to parallelize the execution of some functions. From this moment on, our example applies only to the Commercial Edition.

ALGLIB knows it is Windows on x86/x64 CPU (COMMERCIAL EDITION)
> cl /I. /EHsc /Fedemo.exe /Ox /DAE_CPU=AE_INTEL /arch:AVX2 /DAE_OS=AE_WINDOWS *.cpp
> demo.exe
Performance is 16.0 GFLOPS

Not bad. And now we are ready for the final test - linking with the MKL extensions.

Linking with the MKL extensions differs a bit from the standard way of linking with ALGLIB. ALGLIB itself is compiled with one more preprocessor definition: we define the AE_MKL symbol. We also link ALGLIB with the appropriate (32-bit or 64-bit) alglib???_??mkl.lib static library, which is an import library for the special lightweight MKL distribution shipped with ALGLIB. We should also copy the appropriate alglib???_??mkl.dll binary, which contains Intel MKL, to the current directory.

Linking with MKL extensions (COMMERCIAL EDITION)
> copy alglib-cpp\addons-mkl\alglib*64mkl.lib .
> copy alglib-cpp\addons-mkl\alglib*64mkl.dll .
> cl /I. /EHsc /Fedemo.exe /Ox /DAE_CPU=AE_INTEL /DAE_OS=AE_WINDOWS /DAE_MKL *.cpp alglib*64mkl.lib
> demo.exe
Performance is 33.1 GFLOPS

From 0.7 GFLOPS to 33.1 GFLOPS - you can see that the commercial version of ALGLIB is really worth it!

4.5.3 Compiling under Linux

The listing below contains a very basic program which uses ALGLIB to perform matrix-matrix multiplication. The program then evaluates the performance of GEMM (the function being called) and prints the result to the console. We'll show how the performance of this program steadily increases as we add more and more sophisticated compiler options.

demo.cpp (LINUX EXAMPLE)
#include <stdio.h>
#include <math.h>
#include <sys/time.h>
#include "linalg.h"

double counter()
{
    struct timeval now;
    alglib_impl::ae_int64_t r, v;
    gettimeofday(&now, NULL);
    v = now.tv_sec;
    r = v*1000;
    v = now.tv_usec/1000;
    r = r+v;
    return 0.001*r;
}

int main()
{
    alglib::real_2d_array a, b, c;
    int n = 2000;
    int i, j;
    double timeneeded, flops;
    
    // Initialize arrays
    a.setlength(n, n);
    b.setlength(n, n);
    c.setlength(n, n);
    for(i=0; i<n; i++)
        for(j=0; j<n; j++)
        {
            a[i][j] = alglib::randomreal()-0.5;
            b[i][j] = alglib::randomreal()-0.5;
            c[i][j] = 0.0;
        }
    
    // Set global threading settings (applied to all ALGLIB functions);
    // default is to perform serial computations, unless parallel execution
    // is activated. Parallel execution tries to utilize all cores; this
    // behavior can be changed with alglib::setnworkers() call.
    alglib::setglobalthreading(alglib::parallel);
    
    // Perform matrix-matrix product.
    flops = 2*pow((double)n, (double)3);
    timeneeded = counter();
    alglib::rmatrixgemm(
        n, n, n,
        1.0,
        a, 0, 0, 0,
        b, 0, 0, 1,
        0.0,
        c, 0, 0);
    timeneeded = counter()-timeneeded;
    
    // Evaluate performance
    printf("Performance is %.1f GFLOPS\n", (double)(1.0E-9*flops/timeneeded));
    
    return 0;
}

The examples below cover x64 Linux compilation from the command line with GCC. We assume that the current directory is clean before the example is executed (i.e. it has ONLY the demo.cpp file and the alglib-cpp folder). We used a 2.3 GHz 2-core Skylake CPU with 2x Hyper-Threading enabled for this test.

The first example covers platform-agnostic compilation without optimization settings - the simplest way to compile ALGLIB. This step is the same in both the open source and commercial editions. However, in platform-agnostic mode ALGLIB is unable to use all the performance-related features present in the commercial edition.

We start by copying all the .cpp and .h files to the current directory, then compile them along with demo.cpp. In this and the following examples we omit the compiler output for the sake of simplicity.

OS-agnostic mode, no compiler optimizations
> cp alglib-cpp/src/* .
> g++ -I. -o demo.out *.cpp
> ./demo.out
Performance is 0.9 GFLOPS

Let's add -O3 to the compiler parameters.

OS-agnostic mode, -O3 optimization
> g++ -I. -o demo.out -O3 *.cpp
> ./demo.out
Performance is 2.8 GFLOPS

Better, but still not impressive. Let's turn on optimizations for the x86 architecture: define AE_CPU=AE_INTEL and add -mavx2 -mfma. This option provides some speed-up in both the free and commercial editions of ALGLIB.

OS-agnostic mode, ALGLIB knows it is x86/x64
> g++ -I. -o demo.out -O3 -DAE_CPU=AE_INTEL -mavx2 -mfma *.cpp
> ./demo.out
Performance is 5.0 GFLOPS

That's good, but we have 4 logical cores (in fact only 2 physical cores - it is a 2-way hyperthreaded system) and only one of them was used. Defining AE_OS=AE_POSIX allows ALGLIB to use POSIX threads to parallelize the execution of some functions. You should also specify the -pthread flag to link with the pthreads standard library. From this moment on, our example applies only to the Commercial Edition.

ALGLIB knows it is POSIX OS on x86/x64 CPU (COMMERCIAL EDITION)
> g++ -I. -o demo.out -O3 -DAE_CPU=AE_INTEL -mavx2 -mfma -DAE_OS=AE_POSIX -pthread *.cpp
> ./demo.out
Performance is 9.0 GFLOPS

Not bad. You may notice that the performance growth was ~2x, not 4x. The reason is that we tested ALGLIB on a hyperthreaded system: although we have 4 logical cores, they share the computational resources of just 2 physical cores. And now we are ready for the final test - linking with the MKL extensions.

Linking with the MKL extensions differs a bit from the standard way of linking with ALGLIB. ALGLIB itself is compiled with one more preprocessor definition: we define the AE_MKL symbol. We also link ALGLIB with the appropriate alglib???_??mkl.so shared library, which contains the special lightweight MKL distribution shipped with ALGLIB.

We should note that on a typical Linux system, shared libraries are not loaded from the current directory by default. Either you install them into one of the system directories, or you use some way to tell the linker/loader that you want to load the shared library from a specific directory. For our example we chose to update the LD_LIBRARY_PATH environment variable.

Linking with MKL extensions (COMMERCIAL EDITION, relevant for ALGLIB 3.13)
> cp alglib-cpp/addons-mkl/libalglib*64mkl.so .
> ls *.so
libalglib313_64mkl.so
> g++ -I. -o demo.out -O3 -DAE_CPU=AE_INTEL -DAE_OS=AE_POSIX -pthread -DAE_MKL -L. *.cpp -lalglib313_64mkl
> LD_LIBRARY_PATH=.
> export LD_LIBRARY_PATH
> ./demo.out
Performance is 33.8 GFLOPS

Final result: from 0.9 GFLOPS to 33.8 GFLOPS!

5 Using ALGLIB

5.1 Thread-safety

Both open source and commercial versions of ALGLIB are 100% thread-safe as long as different user threads work with different instances of objects/arrays. Thread-safety is guaranteed by having no global shared variables.

However, any kind of sharing of ALGLIB objects/arrays between different threads is potentially hazardous - even when the object is seemingly used in read-only mode!

Say you use an ALGLIB neural network NET to process two input vectors X0 and X1, and get two output vectors Y0 and Y1. You may decide that the neural network is used in a read-only mode which does not change the state of NET, because the output is written to the distinct arrays Y0 and Y1. Thus, you may want to process these vectors from parallel threads.

But it is not a read-only operation, even if it looks like one! The neural network object NET allocates internal temporary buffers which are modified by the neural processing functions. Thus, sharing one instance of a neural network between two threads is thread-unsafe!

5.2 Global definitions

ALGLIB defines several conditional symbols (all start with "AE_" which means "ALGLIB environment") and two namespaces: alglib_impl (contains computational core) and alglib (contains C++ interface).

Although this manual mentions both the alglib_impl and alglib namespaces, only the alglib namespace should be used by you. It contains a user-friendly C++ interface with automatic memory management, exception handling and all other nice features. alglib_impl is less user-friendly and less documented, and it is too easy to crash your application or cause a memory leak if you use it directly.

5.3 Datatypes

ALGLIB (ap.h header) defines several "basic" datatypes (types which are used by all packages) and many package-specific datatypes. "Basic" datatypes are:

Package-specific datatypes are classes which can be divided into two distinct groups:

5.4 Constants

The most important constants (defined in the ap.h header) from ALGLIB namespace are:

5.5 Functions

The most important "basic" functions from ALGLIB namespace (ap.h header) are:

5.6 Working with vectors and matrices

ALGLIB (ap.h header) supports matrices and vectors (one-dimensional and two-dimensional arrays) of variable size, with indexing starting from zero.

Everything starts with array creation. You should distinguish between creating the array class instance and allocating memory for the array elements. When creating the class instance, you can use the parameterless constructor, which creates an empty array without any elements. An attempt to address these elements may cause a program failure.

You can use the copy and assignment constructors, which copy one array into another. If, during the copy operation, the source array has no memory allocated for its elements, the destination array will contain no elements either. If the source array has memory allocated for its elements, the destination array will allocate the same amount of memory and copy the elements there. That is, the copy operation results in two independent arrays with identical contents.

You can also create an array from a formatted string like "[]", "[true,FALSE,tRUe]", "[[]]" or "[[1,2],[3.2,4],[5.2]]" (note: '.' is used as the decimal point independently of the locale settings).

alglib::boolean_1d_array b1;
b1 = "[true]";

alglib::real_2d_array r2("[[2,3],[3,4]]");
alglib::real_2d_array r2_1("[[]]");
alglib::real_2d_array r2_2(r2);
r2_1 = r2;

alglib::complex_1d_array c2;
c2 = "[]";
c2 = "[0]";
c2 = "[1,2i]";
c2 = "[+1-2i,-1+5i]";
c2 = "[ 4i-2,  8i+2]";
c2 = "[+4i-2, +8i+2]";
c2 = "[-4i-2, -8i+2]";

After an empty array has been created, you can allocate memory for its elements using the setlength() method. The contents of the newly created elements are not defined. If the setlength() method is called for an array with already allocated memory, then after resizing the newly allocated elements are likewise undefined and the old contents are destroyed.

alglib::boolean_1d_array b1;
b1.setlength(2);

alglib::integer_2d_array r2;
r2.setlength(4,3);

Another way to initialize an array is to call the setcontent() method. This method accepts a pointer to data which is copied into the newly allocated array. Vectors are stored contiguously; matrices are stored row by row.

alglib::real_1d_array r1;
double _r1[] = {2, 3};
r1.setcontent(2,_r1);

alglib::real_2d_array r2;
double _r2[] = {11, 12, 13, 21, 22, 23};
r2.setcontent(2,3,_r2);

You can also attach a real vector/matrix object to an already allocated double precision array (attaching to boolean/integer/complex arrays is not supported). In this case no data is actually copied, and the attached vector/matrix object becomes a read/write proxy for the external array.

alglib::real_1d_array r1;
double a1[] = {2, 3};
r1.attach_to_ptr(2,a1);

alglib::real_2d_array r2;
double a2[] = {11, 12, 13, 21, 22, 23};
r2.attach_to_ptr(2,3,a2);

To access the array elements, an overloaded operator() or operator[] can be used. That is, the code addressing the element of array a with indexes [i,j] can look like a(i,j) or a[i][j].

alglib::integer_1d_array a("[1,2,3]");
alglib::integer_1d_array b("[3,9,27]");
a[0] = b(0);

alglib::integer_2d_array c("[[1,2,3],[9,9,9]]");
alglib::integer_2d_array d("[[3,9,27],[8,8,8]]");
d[1][1] = c(0,0);

You can access the contents of a 1-dimensional array by calling the getcontent() method, which returns a pointer to the array memory. For historical reasons 2-dimensional arrays do not provide a getcontent() method, but you can create a reference to any element of the array. 2-dimensional arrays store data in row-major order with aligned rows (i.e. in general the distance between rows is not equal to the number of columns). You can get the stride (the distance between the starts of consecutive rows) with the getstride() call.

alglib::integer_1d_array a("[1,2]");
alglib::real_2d_array b("[[0,1],[10,11]]");

alglib::ae_int_t *a_row = a.getcontent();

// all three pointers point to the same location
double *b_row0 = &b[0][0];
double *b_row0_2 = &b(0,0);
double *b_row0_3 = b[0];

// advancing to the next row of 2-dimensional array
double *b_row1 = b_row0 + b.getstride();

Finally, you can get array size with length(), rows() or cols() methods:

alglib::integer_1d_array a("[1,2]");
alglib::real_2d_array b("[[0,1],[10,11]]");

printf("%ld\n", (long)a.length());
printf("%ld\n", (long)b.rows());
printf("%ld\n", (long)b.cols());

5.7 Using functions: 'expert' and 'friendly' interfaces

Most ALGLIB functions provide two interfaces: 'expert' and 'friendly'. What is the difference between the two? When you use the 'friendly' interface, ALGLIB automatically determines the sizes of the input arguments from the arrays you pass and checks that these sizes are consistent with each other.

When you use the 'expert' interface, ALGLIB requires the caller to explicitly specify the size of the input arguments. If a vector/matrix is larger than the specified size (say, N), only the N leading elements are used.

Here are several examples of 'friendly' and 'expert' interfaces:

#include "interpolation.h"

...

alglib::real_1d_array    x("[0,1,2,3]");
alglib::real_1d_array    y("[1,5,3,9]");
alglib::real_1d_array   y2("[1,5,3,9,0]");
alglib::spline1dinterpolant s;

alglib::spline1dbuildlinear(x, y, 4, s);  // 'expert' interface is used
alglib::spline1dbuildlinear(x, y, s);     // 'friendly' interface - input size is
                                          // automatically determined

alglib::spline1dbuildlinear(x, y2, 4, s); // y2.length() is 5, but it will work

alglib::spline1dbuildlinear(x, y2, s);    // it won't work because sizes of x and y2
                                          // are inconsistent

5.8 Handling errors

ALGLIB uses two error handling strategies: error (completion) codes and exceptions.

What is actually done depends on the function being used and the error being reported:

  1. if the function returns an error code and has a corresponding value for this kind of error, ALGLIB returns the error code
  2. if the function does not return an error code (or returns one, but there is no code for the error being reported), ALGLIB throws an alglib::ap_error exception. The exception object has a msg field which contains a short description of the error.

To make things clear, let us consider several examples of error handling.

Example 1. The mincgcreate function creates a nonlinear CG optimizer. It accepts the problem size N and the initial point X. Several things can go wrong: you may pass an array which is too short, filled with NANs, or otherwise incorrect. However, this function returns no error code, so it throws an exception in case something goes wrong. There is no other way to tell the caller that something went wrong.

Example 2. The rmatrixinverse function calculates the inverse of a matrix. It returns an error code, which is set to +1 when the problem is solved and to -3 if a singular matrix was passed to the function. However, there is no error code for a matrix which is non-square or contains infinities. We could have created corresponding error codes, but we didn't. So if you pass a singular matrix to rmatrixinverse, you will get completion code -3. But if you pass a matrix which contains INF in one of its elements, alglib::ap_error will be thrown.

The first error handling strategy (error codes) is used to report "frequent" errors which can occur during normal execution of a user program. The second strategy (exceptions) is used to report "rare" errors which are the result of serious flaws in your program (or in ALGLIB): infinities/NANs in the inputs, inconsistent inputs, etc.

5.9 Working with Level 1 BLAS functions

ALGLIB (ap.h header) includes the following Level 1 BLAS functions: vdotproduct, vmove, vmoveneg, vadd, vsub and vmul (full signatures are listed at the end of this section).

Each Level 1 BLAS function accepts an input stride and an output stride, which are expected to be positive. Input and output vectors should not overlap. Functions operating on complex vectors accept an additional parameter, conj_src, which specifies whether the input vector is conjugated or not.

For each real/complex function there exists a "simple" companion which accepts no stride or conjugation modifier. A "simple" function assumes that the input/output stride is +1 and that no input conjugation is required.

alglib::real_1d_array    rvec("[0,1,2,3]");
alglib::real_2d_array    rmat("[[1,2],[3,4]]");
alglib::complex_1d_array cvec("[0+1i,1+2i,2-1i,3-2i]");
alglib::complex_2d_array cmat("[[3i,1],[9,2i]]");

alglib::vmove(&rvec[0],  1, &rmat[0][0], rmat.getstride(), 2); // now rvec is [1,3,2,3]

alglib::vmove(&cvec[0],  1, &cmat[0][0], cmat.getstride(), "No conj", 2); // now cvec is [3i, 9, 2-1i, 3-2i]
alglib::vmove(&cvec[2],  1, &cmat[0][0], 1,                "Conj", 2);    // now cvec is [3i, 9, -3i,  1]

Here is full list of Level 1 BLAS functions implemented in ALGLIB:

double vdotproduct(
    const double *v0,
     ae_int_t stride0,
     const double *v1,
     ae_int_t stride1,
     ae_int_t n);
double vdotproduct(
    const double *v1,
     const double *v2,
     ae_int_t N);

alglib::complex vdotproduct(
    const alglib::complex *v0,
     ae_int_t stride0,
     const char *conj0,
     const alglib::complex *v1,
     ae_int_t stride1,
     const char *conj1,
     ae_int_t n);
alglib::complex vdotproduct(
    const alglib::complex *v1,
     const alglib::complex *v2,
     ae_int_t N);

void vmove(
    double *vdst,
      ae_int_t stride_dst,
     const double* vsrc,
      ae_int_t stride_src,
     ae_int_t n);
void vmove(
    double *vdst,
     const double* vsrc,
     ae_int_t N);

void vmove(
    alglib::complex *vdst,
     ae_int_t stride_dst,
     const alglib::complex* vsrc,
     ae_int_t stride_src,
     const char *conj_src,
     ae_int_t n);
void vmove(
    alglib::complex *vdst,
     const alglib::complex* vsrc,
     ae_int_t N);

void vmoveneg(
    double *vdst,
      ae_int_t stride_dst,
     const double* vsrc,
      ae_int_t stride_src,
     ae_int_t n);
void vmoveneg(
    double *vdst,
     const double *vsrc,
     ae_int_t N);

void vmoveneg(
    alglib::complex *vdst,
     ae_int_t stride_dst,
     const alglib::complex* vsrc,
     ae_int_t stride_src,
     const char *conj_src,
     ae_int_t n);
void vmoveneg(
    alglib::complex *vdst,
     const alglib::complex *vsrc,
     ae_int_t N);

void vmove(
    double *vdst,
      ae_int_t stride_dst,
     const double* vsrc,
      ae_int_t stride_src,
     ae_int_t n,
     double alpha);
void vmove(
    double *vdst,
     const double *vsrc,
     ae_int_t N,
     double alpha);

void vmove(
    alglib::complex *vdst,
     ae_int_t stride_dst,
     const alglib::complex* vsrc,
     ae_int_t stride_src,
     const char *conj_src,
     ae_int_t n,
     double alpha);
void vmove(
    alglib::complex *vdst,
     const alglib::complex *vsrc,
     ae_int_t N,
     double alpha);

void vmove(
    alglib::complex *vdst,
     ae_int_t stride_dst,
     const alglib::complex* vsrc,
     ae_int_t stride_src,
     const char *conj_src,
     ae_int_t n,
     alglib::complex alpha);
void vmove(
    alglib::complex *vdst,
     const alglib::complex *vsrc,
     ae_int_t N,
     alglib::complex alpha);

void vadd(
    double *vdst,
      ae_int_t stride_dst,
     const double *vsrc,
      ae_int_t stride_src,
     ae_int_t n);
void vadd(
    double *vdst,
     const double *vsrc,
     ae_int_t N);

void vadd(
    alglib::complex *vdst,
     ae_int_t stride_dst,
     const alglib::complex *vsrc,
     ae_int_t stride_src,
     const char *conj_src,
     ae_int_t n);
void vadd(
    alglib::complex *vdst,
     const alglib::complex *vsrc,
     ae_int_t N);

void vadd(
    double *vdst,
      ae_int_t stride_dst,
     const double *vsrc,
      ae_int_t stride_src,
     ae_int_t n,
     double alpha);
void vadd(
    double *vdst,
     const double *vsrc,
     ae_int_t N,
     double alpha);

void vadd(
    alglib::complex *vdst,
     ae_int_t stride_dst,
     const alglib::complex *vsrc,
     ae_int_t stride_src,
     const char *conj_src,
     ae_int_t n,
     double alpha);
void vadd(
    alglib::complex *vdst,
     const alglib::complex *vsrc,
     ae_int_t N,
     double alpha);

void vadd(
    alglib::complex *vdst,
     ae_int_t stride_dst,
     const alglib::complex *vsrc,
     ae_int_t stride_src,
     const char *conj_src,
     ae_int_t n,
     alglib::complex alpha);
void vadd(
    alglib::complex *vdst,
     const alglib::complex *vsrc,
     ae_int_t N,
     alglib::complex alpha);

void vsub(
    double *vdst,
      ae_int_t stride_dst,
     const double *vsrc,
      ae_int_t stride_src,
     ae_int_t n);
void vsub(
    double *vdst,
     const double *vsrc,
     ae_int_t N);

void vsub(
    alglib::complex *vdst,
     ae_int_t stride_dst,
     const alglib::complex *vsrc,
     ae_int_t stride_src,
     const char *conj_src,
     ae_int_t n);
void vsub(
    alglib::complex *vdst,
     const alglib::complex *vsrc,
     ae_int_t N);

void vsub(
    double *vdst,
      ae_int_t stride_dst,
     const double *vsrc,
      ae_int_t stride_src,
     ae_int_t n,
     double alpha);
void vsub(
    double *vdst,
     const double *vsrc,
     ae_int_t N,
     double alpha);

void vsub(
    alglib::complex *vdst,
     ae_int_t stride_dst,
     const alglib::complex *vsrc,
     ae_int_t stride_src,
     const char *conj_src,
     ae_int_t n,
     double alpha);
void vsub(
    alglib::complex *vdst,
     const alglib::complex *vsrc,
     ae_int_t N,
     double alpha);

void vsub(
    alglib::complex *vdst,
     ae_int_t stride_dst,
     const alglib::complex *vsrc,
     ae_int_t stride_src,
     const char *conj_src,
     ae_int_t n,
     alglib::complex alpha);
void vsub(
    alglib::complex *vdst,
     const alglib::complex *vsrc,
     ae_int_t N,
     alglib::complex alpha);

void vmul(
    double *vdst,
      ae_int_t stride_dst,
     ae_int_t n,
     double alpha);
void vmul(
    double *vdst,
     ae_int_t N,
     double alpha);

void vmul(
    alglib::complex *vdst,
     ae_int_t stride_dst,
     ae_int_t n,
     double alpha);
void vmul(
    alglib::complex *vdst,
     ae_int_t N,
     double alpha);

void vmul(
    alglib::complex *vdst,
     ae_int_t stride_dst,
     ae_int_t n,
     alglib::complex alpha);
void vmul(
    alglib::complex *vdst,
     ae_int_t N,
     alglib::complex alpha);

5.10 Reading data from CSV files

ALGLIB (ap.h header) provides the alglib::read_csv() function, which reads data from a CSV file. The entire file is loaded into memory as a double precision 2D array (an alglib::real_2d_array object). See the comments on the alglib::read_csv() function for more information about its functionality and supported options.

6 Working with commercial version

6.1 Benefits of commercial version

The commercial version of ALGLIB for C++ features four important improvements over the open source one; the most prominent of them (SIMD support, multithreading and linking with Intel MKL) are discussed in the sections below.

6.2 Working with SIMD support (Intel/AMD users)

ALGLIB for C++ can utilize SIMD instructions supported by Intel and AMD processors. This feature is optional and must be explicitly turned on at compile time. If you do not activate it, ALGLIB will use generic C code, without any processor-specific assembly/intrinsics.

Thus, if you turn on this feature, your code will run faster on x86_32 and x86_64 processors, but will not be portable to non-x86 platforms (or to the Intel MIC platform, which is not exactly x86!). On the other hand, if you do not activate this feature, your code will be portable to almost any modern CPU (SPARC, ARM, ...).

In order to turn on x86-specific optimizations, you should add the AE_CPU=AE_INTEL preprocessor definition at the global level. It tells ALGLIB to use SIMD intrinsics supported by the GCC, MSVC and Intel compilers. Additionally, you should tell the compiler to generate SIMD-capable code. This can be done in the project settings of your IDE or on the command line:


GCC example:
> g++ -msse2 -I. -DAE_CPU=AE_INTEL *.cpp -lm

MSVC example:
> cl /I. /EHsc /DAE_CPU=AE_INTEL *.cpp

6.3 Using multithreading

6.3.1 General information

The commercial version of ALGLIB includes out-of-the-box support for multithreading. Many (though not all) computationally intensive problems can be solved in multithreaded mode. You should read the comments on specific ALGLIB functions to determine what can be multithreaded and what cannot.

ALGLIB does not depend on vendor/compiler support for technologies like OpenMP/MPI/... Under Windows, ALGLIB uses OS threads and a custom synchronization framework. Under POSIX-compatible OSes (Solaris, Linux, FreeBSD, NetBSD, OpenBSD, ...) ALGLIB uses POSIX Threads (the standard *nix threading library shipped with any POSIX system) and its synchronization primitives. This gives ALGLIB unprecedented portability across operating systems and compilers: it does not depend on the presence of any third-party multithreading library or on compiler support for any multithreading technology.

6.3.2 Compiling ALGLIB in the multithreaded mode

You should compile ALGLIB in OS-specific mode by #defining either AE_OS=AE_WINDOWS or AE_OS=AE_POSIX (or AE_OS=AE_LINUX, which means "POSIX with Linux extensions") at compile time, depending on the OS being used. The former corresponds to any modern OS from the Windows family (32/64-bit Windows XP and later), while the latter two cover almost any POSIX-compatible OS and the Linux family respectively. When compiling on POSIX/Linux, do not forget to link ALGLIB with the libpthread library.
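Assuming GCC on a POSIX/Linux system, a multithreaded build (analogous to the SIMD examples above) could look like this - the exact flags depend on your setup:

```shell
# build an ALGLIB-based program in OS-aware (multithreaded) mode; note -lpthread
g++ -I. -DAE_OS=AE_POSIX *.cpp -lpthread -lm
```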

When compiled without OS-specific switches, ALGLIB runs in OS-agnostic mode and ignores all kinds of parallelism settings.

6.3.3 Two kinds of parallelism

ALGLIB supports two types of parallelism: internal parallelism and callback parallelism.

Internal parallelism refers to ALGLIB's capability to speed up operations that are embedded within its own functions. When this feature is active, its presence is mostly noticeable through the reduced execution time of computation-heavy functions. To use internal parallelism, you only need to enable it in ALGLIB, without any modifications to your existing code. This parallelism is available in a range of ALGLIB functions, covering areas from linear algebra to data analysis.

Callback parallelism, on the other hand, allows certain ALGLIB optimizers to send multiple parallel requests to user-defined callbacks. For example, numerical differentiation is inherently parallel as each component of a gradient can be computed independently. Similarly, optimizers like differential evolution can issue several requests simultaneously, which can be processed in parallel.

While it is ALGLIB who manages the creation and termination of computing threads to parallelize these batch requests, callback parallelism still requires you to provide a thread-safe callback. This callback must be capable of handling multiple simultaneous calls from multiple threads. Unlike internal parallelism, callback parallelism is not automatically managed by ALGLIB; it requires active cooperation from your side to ensure that your code is re-entrant and thread-safe.

Both types of parallelism can be used in various combinations. You might choose to employ only internal parallelism, only callback parallelism, both, or neither.

6.3.4 Activating parallelism

ALGLIB provides the flexibility to define parallelism settings at both global and function-specific levels. Global settings can be configured using the alglib::setglobalthreading function. For function-local settings, you specify them by passing an instance of the alglib::xparams type to the particular computational function you are using.

It's important to note that these function-local settings are actually call-local. This means that making simultaneous calls to the same function with different settings will result in the use of different parallelism types for each call.

Another important point is that function-local settings do not propagate to other (even connected) functions. E.g., if you create an instance of MINNLC optimizer with minnlccreate and then start optimization with minnlcoptimize, then it is the latter function which actually does the job. Any parallelism-related settings should be passed directly to minnlcoptimize and not to minnlccreate, because minnlccreate does nothing except for allocating data structures.

The alglib::xparams structure, which stores parallelism settings, can be set to the following values: alglib::serial or alglib::parallel to control internal parallelism, and alglib::serial_callbacks or alglib::parallel_callbacks to control callback parallelism.

You can combine values pertaining to different types of parallelism (internal and callback) using the OR operator, such as alglib::serial|alglib::parallel_callbacks. However, attempting to combine conflicting values like alglib::parallel_callbacks|alglib::serial_callbacks will result in an invalid setting.

If a value for a specific type of parallelism is not provided at the function-local level — such as alglib::serial, which defines settings for internal parallelism but omits callback parallelism — then that specific setting defaults to the global settings as determined by alglib::setglobalthreading. In cases where no global settings are specified for a particular parallelism type, serial execution is the default choice.

IMPORTANT: Activating internal parallelism in ALGLIB does not automatically mean that computations will be parallelized. ALGLIB assesses whether the problem at hand is sufficiently large to warrant efficient parallelization; this decision is made automatically by the library. In contrast, enabling callback parallelism always leads to ALGLIB creating worker threads for handling parallel requests, even if the requests are too lightweight to benefit from multithreading. Be aware that this could potentially slow down your application due to the overhead of parallel synchronization, rather than speeding it up. The reason is that ALGLIB treats user-defined callbacks as black boxes; therefore, when using callback parallelism, the decision on its effectiveness rests with you.

Another important point is that only some ALGLIB optimizers support callback parallelism. This feature was introduced in ALGLIB 4.01.0, and some optimizers still have to be refactored to support it.

6.3.5 Controlling cores count

ALGLIB automatically determines the number of cores on application startup. On Windows this is done using the GetSystemInfo() call. On POSIX systems ALGLIB performs the sysconf(_SC_NPROCESSORS_ONLN) system call, which is supported by all modern POSIX-compatible systems: Solaris, Linux, FreeBSD, NetBSD, OpenBSD.

By default, ALGLIB uses all available cores (when told to parallelize calculations). This behavior may be changed with the setnworkers() call.

You may want to specify the maximum number of worker threads at compile time by means of the preprocessor definition AE_NWORKERS=N. You can add this definition to the compiler command line or change the corresponding project settings in your IDE. Here N can be any positive number. ALGLIB will use exactly N worker threads, unless told to use fewer by a setnworkers() call.

Some old POSIX-compatible operating systems do not support the sysconf(_SC_NPROCESSORS_ONLN) system call, which is required to automatically determine the number of active cores. On these systems you should specify the number of cores manually at compile time; otherwise ALGLIB will run in single-threaded mode.

6.3.6 More on parallel callbacks

As mentioned earlier, callback parallelism is a powerful feature that enables faster numerical differentiation and faster highly parallel optimization methods like differential evolution. To effectively use callback parallelism, it is essential that evaluating your callback is costly enough to justify the parallelization overhead, and that your callback is thread-safe.

Achieving the latter can be challenging if your callback uses temporary data structures (one instance per invocation). Generally, it is inefficient to allocate temporary arrays on the heap every time the callback is invoked: memory allocation and deallocation require synchronization, which hampers parallelism. The only exception is when evaluating your objective function is so costly that memory allocation becomes a negligible factor.

Similarly, storing preallocated temporaries in a lock-protected pool can also be inefficient. Lock-free data structures relying on interlocked operations offer a somewhat better option, but even lock-free code still involves some synchronization, which can become a bottleneck.

The ideal strategy is to design your callback code to avoid synchronization entirely. This can be achieved by preallocating an array of alglib::getmaxnworkers() temporaries at the outset and using alglib::getcallbackworkeridx() to identify the worker thread that invoked the callback. This index allows retrieval of a temporary structure from the array. Since different worker threads have distinct indexes, this approach achieves thread safety with no synchronization required.

This option needs C++11 or later because it relies on thread-local storage being properly supported by the language. When compiled under earlier versions of the standard, alglib::getcallbackworkeridx() is not available.

6.3.7 SMT (CMT/hyper-threading) issues

Simultaneous multithreading (SMT), also known as Hyper-Threading (Intel) and Cluster-based Multithreading (AMD), is a CPU design where several (usually two) logical cores share the resources of one physical core. Say, on a dual-core system with a 2x SMT factor you will see 4 logical cores. Each pair of these 4 cores, however, shares the same hardware resources. Thus, you may get only a marginal speedup when running highly optimized software which fully utilizes CPU resources.

Say, if one thread occupies the floating-point unit, another thread on the same physical core may work with integer numbers at the same time without any performance penalty. In this case you may get some speedup from the additional logical cores. But if both threads keep the FPU 100% busy, they won't get any multithreaded speedup.

So, if 2 math-intensive threads are dispatched by the OS scheduler to different physical cores, you will get a 2x speedup from multithreading. But if these threads are dispatched to different logical cores on the same physical core, you won't get any speedup at all! One physical core will be 100% busy, and the other one will be 100% idle. On the other hand, if you start four threads instead of two, your system will be 100% utilized independently of thread scheduling details.

Let us stress this one more time: multithreading speedup on SMT systems is highly dependent on the number of threads you are running and on decisions made by the OS scheduler. It is not 100% deterministic! With "true SMP", when you run 2 threads you get a 2x speedup (or 1.95x, or 1.80x - it depends on the algorithm, but the factor is always the same). With SMT, when you run 2 threads you may get your 2x speedup - or no speedup at all. Modern OS schedulers do a good job on single-socket hardware, but even in this "simple" case they give no guarantees of fair distribution of hardware resources, and things become trickier on multi-socket hardware. On SMT systems the only guaranteed way to 100% utilize your CPU is to create as many worker threads as there are logical cores. In this case the OS scheduler has no chance to do its work incorrectly.

6.4 Linking with Intel MKL

6.4.1 Using lightweight Intel MKL supplied by ALGLIB Project

The commercial edition of ALGLIB includes MKL extensions - a special lightweight distribution of Intel MKL, the highly optimized numerical library from Intel - together with precompiled ALGLIB-MKL interface libraries. Linking your programs with MKL extensions allows you to run ALGLIB with maximum performance. MKL binaries are provided for x86/x64 Windows and x64 Linux platforms.

Unlike the rest of the library, MKL extensions are distributed in binary-only form. ALGLIB itself is still distributed in source code form, but Intel MKL and ALGLIB-MKL interface are distributed as precompiled dynamic/static libraries. We can not distribute them in source because of license restrictions associated with Intel MKL. Also due to license restrictions we can not give you direct access to MKL functionality. You may use MKL to accelerate ALGLIB - without paying for MKL license - but you may not call its functions directly. It is technically possible, but strictly prohibited by both MKL's EULA and ALGLIB License Agreement. If you want to work with MKL, you should obtain separate license from Intel (as of 2018, free licenses are available).

MKL extensions are located in the /alglib-cpp/addons-mkl subdirectory of the ALGLIB distribution. This directory includes the following files:

Here ??? stands for specific ALGLIB version: 313 for ALGLIB 3.13, and so on. Files above are just MKL extensions - ALGLIB itself is not included in these binaries, and you still have to compile primary ALGLIB distribution.

In order to activate MKL extensions, you should compile ALGLIB with the AE_MKL preprocessor definition (in addition to the usual AE_OS=... and AE_CPU=AE_INTEL definitions) and link your program with the appropriate MKL extensions binaries:

Several examples of ALGLIB+MKL usage are given in the 'compiling ALGLIB: examples' section.

6.4.2 Using your own installation of Intel MKL

If you bought a separate license for Intel MKL and want to use your own installation of MKL - and not our lightweight distribution - then you should compile ALGLIB as described in the previous section, with all necessary preprocessor definitions (AE_OS=AE_WINDOWS or AE_OS=AE_POSIX, AE_CPU=AE_INTEL and AE_MKL defined). But instead of linking with the MKL extensions binary, you should add the alglib2mkl.c file from the addons-mkl directory to your project and compile it (as a C file) along with the rest of ALGLIB.

This C file implements the interface between MKL and ALGLIB. Having this file in your project and defining the AE_MKL preprocessor symbol results in ALGLIB using MKL functions.

However, this C file is just an interface! It is your responsibility to make sure that the C/C++ compiler can find the MKL headers and that the appropriate MKL static/dynamic libraries are linked to your application.

7 Advanced topics

7.1 Using Red Zones to find memory access violations

A Red Zone is a fixed-size area added before and after each dynamically allocated block. The Red Zone is filled with a special control value during allocation. When the dynamically allocated block is freed, its control value is checked: any change means that someone (either ALGLIB or user code that works with ALGLIB-allocated arrays) performed an out-of-bounds write. Red Zones are essential for finding memory access errors that silently corrupt your data and/or crash your program.

ALGLIB for C++ comes with Red Zone support disabled by default. It can be enabled by #defining ALGLIB_REDZONE=512 (or some other multiple of 64 bytes) at the global level. Larger values add a larger buffer and give a better chance of detecting writes that land far from the array. E.g., if your write is just 8 bytes past the end of the array, a 64-byte red zone is enough to detect it; a write 1024 bytes away needs a red zone at least that large.
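For example, with GCC the red zones could be enabled like this (a sketch; adapt to your build system):

```shell
# compile with 512-byte red zones around each dynamically allocated block
g++ -I. -DALGLIB_REDZONE=512 *.cpp -lm
```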

ALGLIB checks the red zones of a dynamic array when it is deallocated by its destructor (or reallocated). If a red zone is damaged, ALGLIB prints a message to stderr and terminates the program. Two kinds of errors are distinguished: writes past the end, and writes prior to the beginning. You can find the exact variable being damaged by looking at the stack trace: it is the one currently being deallocated or reallocated.

Red zones add some memory overhead (insignificant for large arrays) and some modest performance overhead. Your program may become a few percent slower (in the very worst case - a few tens of percent).

7.2 Replacing stdlib rand() as an entropy source with OpenSSL implementation

ALGLIB uses, and provides to its users, its own random number generator. While it is not cryptographically secure, it is good enough for scientific purposes. However, this generator has to be primed with an external entropy source - one which provides random or pseudorandom seeds.

In its default configuration ALGLIB obtains these seeds with the stdlib rand() function, which was chosen as the default entropy source for portability reasons. In most cases this is acceptable: all we need is just two integers which are used to prime the high-quality generator implemented in ALGLIB. However, some users may prefer other sources of entropy. The entropy source used by ALGLIB can be changed by #defining the ALGLIB_ENTROPY_SRC preprocessor symbol, in particular to select the OpenSSL-based implementation.

7.3 Exception-free mode

ALGLIB for C++ can be compiled in exception-free mode, with exceptions (throw/try/catch constructs) disabled at the compiler level. This feature is sometimes used by developers of embedded software.

ALGLIB uses two-level model of errors: "expected" errors (like degeneracy of linear system or inconsistency of linear constraints) are reported with dedicated completion codes, and "critical" errors (like malloc failures, unexpected NANs/INFs in the input data and so on) are reported with exceptions. The idea is that it is hard to put (and handle) completion codes in every ALGLIB function, so we use exceptions to signal errors which should never happen under normal circumstances.

Internally, ALGLIB for C++ is implemented as a C++ wrapper around a computational core written in pure C. Thus, the internals of the ALGLIB core use C-specific methods of error handling: completion codes and the setjmp/longjmp functions. These error handling strategies are combined with sophisticated C memory management machinery which makes sure that not even a byte of dynamic memory is lost when we longjmp to the error handler. So, the only point where C++ exceptions are actually used is the boundary between the C core and the C++ interface.

If you choose to use exceptions (the default mode), ALGLIB will throw an exception with a short textual description of the situation. If you choose to work without exceptions, ALGLIB will set a global error flag and silently return from the current function/constructor/... instead of throwing an exception. Due to portability issues this error flag is a non-TLS variable, i.e. it is shared between threads. So, you can use exception-free error handling only in single-threaded programs: although multithreaded programs won't break, there is no way to determine which thread caused an "exception without exceptions".

Exception-free method of reporting critical errors can be activated by #defining two preprocessor symbols at global level:

We must also note that exception-free mode is incompatible with OS-aware compiling: you can not have AE_OS=??? defined together with AE_NO_EXCEPTIONS.

After you #define all the necessary preprocessor symbols, two functions will appear in alglib namespace:

You must check error flag after EVERY operation with ALGLIB objects and functions. In addition to calling computational ALGLIB functions, following kinds of operations may result in "exception":

7.4 Partial compilation

Due to ALGLIB's modular structure, it is possible to selectively enable/disable some of its subpackages along with their dependencies. Deactivation of ALGLIB source code is performed at the preprocessor level - the compiler does not even see the disabled code. Partial compilation can be used for two purposes:

You can activate partial compilation by #defining the following symbols at the global level:

7.5 Testing ALGLIB

There are three test suites in ALGLIB: computational tests, interface tests and extended tests. Computational tests are located in /tests/test_c.cpp. They are focused on the numerical properties of the algorithms, stress testing and "deep" tests (large automatically generated problems). They require a significant amount of time to finish (tens of minutes).

Interface tests are located in /tests/test_i.cpp. These tests are focused on the ability to correctly pass data between the computational core and the caller, the ability to detect simple problems in the inputs, and the ability to at least compile ALGLIB with your compiler. They are very fast (about a minute to finish, including compilation time).

Extended tests are located in /tests/test_x.cpp. These tests are focused on special properties (say, checking that cloning an object indeed results in a 100% independent copy being created) and on the performance of several chosen algorithms.

Running a test suite is easy:

  1. compile one of these files (test_c.cpp, test_i.cpp or test_x.cpp) along with the rest of the library
  2. launch the executable you get. It may take from several seconds (interface tests) to several minutes (computational tests) to obtain the final results
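On a Unix-like system, the two steps above might look as follows. The compiler invocation is a sketch: the source directory layout and the -lpthread flag are assumptions about your setup, not something this manual prescribes.

```shell
# Step 1: compile the chosen test driver together with the library sources
# (paths and flags are illustrative; adjust to your ALGLIB layout)
g++ -O2 -I./src ./src/*.cpp ./tests/test_i.cpp -o test_i -lpthread

# Step 2: run it and wait for the final report
./test_i
```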

If you want to be sure that ALGLIB will work with some sophisticated optimization settings, set the corresponding flags at compile time. If your compiler/system is not in the list of supported ones, we recommend running the test suites. If you are short on time, run at least test_i.cpp.

8 ALGLIB packages and subpackages

8.1 AlglibMisc package

hqrnd High quality random numbers generator
nearestneighbor Nearest neighbor search: approximate and exact
xdebug Debug functions to test ALGLIB interface generator
 

8.2 DataAnalysis package

bdss Basic dataset functions
clustering Clustering functions (hierarchical, k-means, k-means++)
datacomp Backward compatibility functions
dforest Decision forest classifier (regression model)
filters Different filters used in data analysis
knn K Nearest Neighbors classification/regression
lda Linear discriminant analysis
linreg Linear models
logit Logit models
mcpd Markov Chains for Population/proportional Data
mlpbase Basic functions for neural networks
mlpe Basic functions for neural ensemble models
mlptrain Neural network training
pca Principal component analysis
ssa Singular Spectrum Analysis
 

8.3 DiffEquations package

odesolver Ordinary differential equation solver
 

8.4 FastTransforms package

conv Fast real/complex convolution
corr Fast real/complex cross-correlation
fft Real/complex FFT
fht Real Fast Hartley Transform
 

8.5 Integration package

autogk Adaptive 1-dimensional integration
gkq Gauss-Kronrod quadrature generator
gq Gaussian quadrature generator
 

8.6 Interpolation package

fitsphere Fitting circle/sphere to data (least squares, minimum circumscribed, maximum inscribed, minimum zone)
idw Inverse distance weighting: interpolation/fitting with improved Shepard-like algorithm
intcomp Backward compatibility functions
lsfit Fitting with least squares target function (linear and nonlinear least-squares)
parametric Parametric curves
polint Polynomial interpolation/fitting
ratint Rational interpolation/fitting
rbf Scattered N-dimensional interpolation with RBF models
spline1d 1D spline interpolation/fitting
spline2d 2D spline interpolation and fitting
spline3d 3D spline interpolation
 

8.7 LinAlg package

ablas Level 2 and Level 3 BLAS operations
bdsvd Bidiagonal SVD
evd Direct and iterative eigensolvers
inverseupdate Sherman-Morrison update of the inverse matrix
matdet Determinant calculation
matgen Random matrix generation
matinv Matrix inverse
normestimator Estimates norm of the sparse matrix (from below)
ortfac Real/complex QR/LQ, bi(tri)diagonal, Hessenberg decompositions
rcond Condition number estimates
schur Schur decomposition
sparse Sparse matrices
spdgevd Generalized symmetric eigensolver
svd Singular value decomposition
trfac LU and Cholesky decompositions (dense and sparse)
 

8.8 Optimization package

minbc Box constrained optimizer with fast activation of multiple constraints per step
minbleic Bound constrained optimizer with additional linear equality/inequality constraints
mincg Conjugate gradient optimizer
mincomp Backward compatibility functions
mindf Derivative-free and global optimization
minlbfgs Limited memory BFGS optimizer
minlm Improved Levenberg-Marquardt optimizer
minlp Linear programming suite
minmo Multi-objective optimizer
minnlc Nonlinear programming solver (analytic gradient, numdiff, model-based DFO)
minns Nonsmooth constrained optimizer
minqp Quadratic optimization with linear, quadratic and conic constraints
nls Nonlinear least squares (derivative-free)
optguardapi OptGuard integrity checking for nonlinear models
opts Internal service functions
 

8.9 Solvers package

directdensesolvers Direct dense linear solvers
directsparsesolvers Direct sparse linear solvers
iterativesparse Sparse linear iterative solvers (GMRES)
lincg Sparse linear CG solver
linlsqr Sparse linear LSQR solver
nleq Solvers for nonlinear equations
polynomialsolver Polynomial solver
 

8.10 SpecialFunctions package

airyf Airy functions
bessel Bessel functions
betaf Beta function
binomialdistr Binomial distribution
chebyshev Chebyshev polynomials
chisquaredistr Chi-Square distribution
dawson Dawson integral
elliptic Elliptic integrals
expintegrals Exponential integrals
fdistr F-distribution
fresnel Fresnel integrals
gammafunc Gamma function
hermite Hermite polynomials
ibetaf Incomplete beta function
igammaf Incomplete gamma function
jacobianelliptic Jacobian elliptic functions
laguerre Laguerre polynomials
legendre Legendre polynomials
normaldistr Univariate and bivariate normal distribution PDF and CDF
poissondistr Poisson distribution
psif Psi function
studenttdistr Student's t-distribution
trigintegrals Trigonometric integrals
 

8.11 Statistics package

basestat Mean, variance, covariance, correlation, etc.
correlationtests Hypothesis testing: correlation tests
jarquebera Hypothesis testing: Jarque-Bera test
mannwhitneyu Hypothesis testing: Mann-Whitney-U test
stest Hypothesis testing: sign test
studentttests Hypothesis testing: Student's t-test
variancetests Hypothesis testing: F-test and one-sample variance test
wsr Hypothesis testing: Wilcoxon signed rank test
 
cmatrixcopy
cmatrixgemm
cmatrixherk
cmatrixlefttrsm
cmatrixmv
cmatrixrank1
cmatrixrighttrsm
cmatrixsyrk
cmatrixtranspose
rmatrixcopy
rmatrixenforcesymmetricity
rmatrixgemm
rmatrixgemv
rmatrixgencopy
rmatrixger
rmatrixlefttrsm
rmatrixmv
rmatrixrank1
rmatrixrighttrsm
rmatrixsymv
rmatrixsyrk
rmatrixsyvmv
rmatrixtranspose
rmatrixtrsv
rvectorcopy
ablas_d_gemm Matrix multiplication (single-threaded)
ablas_d_syrk Symmetric rank-K update (single-threaded)
/************************************************************************* Copy Input parameters: M - number of rows N - number of columns A - source matrix, MxN submatrix is copied IA - submatrix offset (row index) JA - submatrix offset (column index) B - destination matrix, must be large enough to store result IB - submatrix offset (row index) JB - submatrix offset (column index) *************************************************************************/
void cmatrixcopy(const ae_int_t m, const ae_int_t n, const complex_2d_array &a, const ae_int_t ia, const ae_int_t ja, complex_2d_array &b, const ae_int_t ib, const ae_int_t jb, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine calculates C = alpha*op1(A)*op2(B) +beta*C where: * C is MxN general matrix * op1(A) is MxK matrix * op2(B) is KxN matrix * "op" may be identity transformation, transposition, conjugate transposition Additional info: * cache-oblivious algorithm is used. * multiplication result replaces C. If Beta=0, C elements are not used in calculations (not multiplied by zero - just not referenced) * if Alpha=0, A is not used (not multiplied by zero - just not referenced) * if both Beta and Alpha are zero, C is filled by zeros. IMPORTANT: This function does NOT preallocate output matrix C, it MUST be preallocated by caller prior to calling this function. In case C does not have enough space to store result, exception will be generated. INPUT PARAMETERS M - matrix size, M>0 N - matrix size, N>0 K - matrix size, K>0 Alpha - coefficient A - matrix IA - submatrix offset JA - submatrix offset OpTypeA - transformation type: * 0 - no transformation * 1 - transposition * 2 - conjugate transposition B - matrix IB - submatrix offset JB - submatrix offset OpTypeB - transformation type: * 0 - no transformation * 1 - transposition * 2 - conjugate transposition Beta - coefficient C - matrix (PREALLOCATED, large enough to store result) IC - submatrix offset JC - submatrix offset ! FREE EDITION OF ALGLIB: ! ! Free Edition of ALGLIB supports following important features for this ! function: ! * C++ version: x64 SIMD support using C++ intrinsics ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics ! ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB ! Reference Manual in order to find out how to activate SIMD support ! in ALGLIB. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. -- ALGLIB routine -- 2009-2019 Bochkanov Sergey *************************************************************************/
void cmatrixgemm(const ae_int_t m, const ae_int_t n, const ae_int_t k, const alglib::complex alpha, const complex_2d_array &a, const ae_int_t ia, const ae_int_t ja, const ae_int_t optypea, const complex_2d_array &b, const ae_int_t ib, const ae_int_t jb, const ae_int_t optypeb, const alglib::complex beta, complex_2d_array &c, const ae_int_t ic, const ae_int_t jc, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This subroutine calculates C=alpha*A*A^H+beta*C or C=alpha*A^H*A+beta*C where: * C is NxN Hermitian matrix given by its upper/lower triangle * A is NxK matrix when A*A^H is calculated, KxN matrix otherwise Additional info: * multiplication result replaces C. If Beta=0, C elements are not used in calculations (not multiplied by zero - just not referenced) * if Alpha=0, A is not used (not multiplied by zero - just not referenced) * if both Beta and Alpha are zero, C is filled by zeros. INPUT PARAMETERS N - matrix size, N>=0 K - matrix size, K>=0 Alpha - coefficient A - matrix IA - submatrix offset (row index) JA - submatrix offset (column index) OpTypeA - multiplication type: * 0 - A*A^H is calculated * 2 - A^H*A is calculated Beta - coefficient C - preallocated input/output matrix IC - submatrix offset (row index) JC - submatrix offset (column index) IsUpper - whether upper or lower triangle of C is updated; this function updates only one half of C, leaving other half unchanged (not referenced at all). ! FREE EDITION OF ALGLIB: ! ! Free Edition of ALGLIB supports following important features for this ! function: ! * C++ version: x64 SIMD support using C++ intrinsics ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics ! ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB ! Reference Manual in order to find out how to activate SIMD support ! in ALGLIB. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. -- ALGLIB routine -- 16.12.2009-22.01.2018 Bochkanov Sergey *************************************************************************/
void cmatrixherk(const ae_int_t n, const ae_int_t k, const double alpha, const complex_2d_array &a, const ae_int_t ia, const ae_int_t ja, const ae_int_t optypea, const double beta, complex_2d_array &c, const ae_int_t ic, const ae_int_t jc, const bool isupper, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This subroutine calculates op(A^-1)*X where: * X is MxN general matrix * A is MxM upper/lower triangular/unitriangular matrix * "op" may be identity transformation, transposition, conjugate transposition Multiplication result replaces X. INPUT PARAMETERS N - matrix size, N>=0 M - matrix size, M>=0 A - matrix, actual matrix is stored in A[I1:I1+M-1,J1:J1+M-1] I1 - submatrix offset J1 - submatrix offset IsUpper - whether matrix is upper triangular IsUnit - whether matrix is unitriangular OpType - transformation type: * 0 - no transformation * 1 - transposition * 2 - conjugate transposition X - matrix, actual matrix is stored in X[I2:I2+M-1,J2:J2+N-1] I2 - submatrix offset J2 - submatrix offset ! FREE EDITION OF ALGLIB: ! ! Free Edition of ALGLIB supports following important features for this ! function: ! * C++ version: x64 SIMD support using C++ intrinsics ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics ! ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB ! Reference Manual in order to find out how to activate SIMD support ! in ALGLIB. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. -- ALGLIB routine -- 15.12.2009-22.01.2018 Bochkanov Sergey *************************************************************************/
void cmatrixlefttrsm(const ae_int_t m, const ae_int_t n, const complex_2d_array &a, const ae_int_t i1, const ae_int_t j1, const bool isupper, const bool isunit, const ae_int_t optype, complex_2d_array &x, const ae_int_t i2, const ae_int_t j2, const xparams _xparams = alglib::xdefault);
/************************************************************************* Matrix-vector product: y := op(A)*x INPUT PARAMETERS: M - number of rows of op(A) M>=0 N - number of columns of op(A) N>=0 A - target matrix IA - submatrix offset (row index) JA - submatrix offset (column index) OpA - operation type: * OpA=0 => op(A) = A * OpA=1 => op(A) = A^T * OpA=2 => op(A) = A^H X - input vector IX - subvector offset IY - subvector offset Y - preallocated vector, must be large enough to store result OUTPUT PARAMETERS: Y - vector which stores result if M=0, then subroutine does nothing. if N=0, Y is filled by zeros. -- ALGLIB routine -- 28.01.2010 Bochkanov Sergey *************************************************************************/
void cmatrixmv(const ae_int_t m, const ae_int_t n, const complex_2d_array &a, const ae_int_t ia, const ae_int_t ja, const ae_int_t opa, const complex_1d_array &x, const ae_int_t ix, complex_1d_array &y, const ae_int_t iy, const xparams _xparams = alglib::xdefault);
/************************************************************************* Rank-1 correction: A := A + u*v' INPUT PARAMETERS: M - number of rows N - number of columns A - target matrix, MxN submatrix is updated IA - submatrix offset (row index) JA - submatrix offset (column index) U - vector #1 IU - subvector offset V - vector #2 IV - subvector offset *************************************************************************/
void cmatrixrank1(const ae_int_t m, const ae_int_t n, complex_2d_array &a, const ae_int_t ia, const ae_int_t ja, const complex_1d_array &u, const ae_int_t iu, const complex_1d_array &v, const ae_int_t iv, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine calculates X*op(A^-1) where: * X is MxN general matrix * A is NxN upper/lower triangular/unitriangular matrix * "op" may be identity transformation, transposition, conjugate transposition Multiplication result replaces X. INPUT PARAMETERS N - matrix size, N>=0 M - matrix size, M>=0 A - matrix, actual matrix is stored in A[I1:I1+N-1,J1:J1+N-1] I1 - submatrix offset J1 - submatrix offset IsUpper - whether matrix is upper triangular IsUnit - whether matrix is unitriangular OpType - transformation type: * 0 - no transformation * 1 - transposition * 2 - conjugate transposition X - matrix, actual matrix is stored in X[I2:I2+M-1,J2:J2+N-1] I2 - submatrix offset J2 - submatrix offset ! FREE EDITION OF ALGLIB: ! ! Free Edition of ALGLIB supports following important features for this ! function: ! * C++ version: x64 SIMD support using C++ intrinsics ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics ! ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB ! Reference Manual in order to find out how to activate SIMD support ! in ALGLIB. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. -- ALGLIB routine -- 20.01.2018 Bochkanov Sergey *************************************************************************/
void cmatrixrighttrsm(const ae_int_t m, const ae_int_t n, const complex_2d_array &a, const ae_int_t i1, const ae_int_t j1, const bool isupper, const bool isunit, const ae_int_t optype, complex_2d_array &x, const ae_int_t i2, const ae_int_t j2, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine is an older version of CMatrixHERK(), one with the wrong name (it is a HErmitian update, not a SYmmetric one). It is left here for backward compatibility. -- ALGLIB routine -- 16.12.2009 Bochkanov Sergey *************************************************************************/
void cmatrixsyrk(const ae_int_t n, const ae_int_t k, const double alpha, const complex_2d_array &a, const ae_int_t ia, const ae_int_t ja, const ae_int_t optypea, const double beta, complex_2d_array &c, const ae_int_t ic, const ae_int_t jc, const bool isupper, const xparams _xparams = alglib::xdefault);
/************************************************************************* Cache-oblivious complex "copy-and-transpose" Input parameters: M - number of rows N - number of columns A - source matrix, MxN submatrix is copied and transposed IA - submatrix offset (row index) JA - submatrix offset (column index) B - destination matrix, must be large enough to store result IB - submatrix offset (row index) JB - submatrix offset (column index) *************************************************************************/
void cmatrixtranspose(const ae_int_t m, const ae_int_t n, const complex_2d_array &a, const ae_int_t ia, const ae_int_t ja, complex_2d_array &b, const ae_int_t ib, const ae_int_t jb, const xparams _xparams = alglib::xdefault);
/************************************************************************* Copy Input parameters: M - number of rows N - number of columns A - source matrix, MxN submatrix is copied IA - submatrix offset (row index) JA - submatrix offset (column index) B - destination matrix, must be large enough to store result IB - submatrix offset (row index) JB - submatrix offset (column index) *************************************************************************/
void rmatrixcopy(const ae_int_t m, const ae_int_t n, const real_2d_array &a, const ae_int_t ia, const ae_int_t ja, real_2d_array &b, const ae_int_t ib, const ae_int_t jb, const xparams _xparams = alglib::xdefault);
/************************************************************************* This code enforces symmetry of the matrix by copying the upper part to the lower one (or vice versa). INPUT PARAMETERS: A - matrix N - number of rows/columns IsUpper - whether we want to copy the upper triangle to the lower one (True) or vice versa (False). *************************************************************************/
void rmatrixenforcesymmetricity(real_2d_array &a, const ae_int_t n, const bool isupper, const xparams _xparams = alglib::xdefault);
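As a reading aid, the copy-one-triangle-over-the-other semantics can be sketched with a short self-contained snippet. This is not ALGLIB code: it uses a flat row-major std::vector instead of real_2d_array, purely to illustrate what IsUpper selects.

```cpp
#include <cassert>
#include <vector>

// Illustrative sketch (not ALGLIB code) of the symmetry enforcement
// described above: for an NxN row-major matrix, copy the upper triangle
// over the lower one (isupper=true) or the lower over the upper (false).
void ref_enforce_symmetry(std::vector<double>& a, int n, bool isupper)
{
    for(int i = 0; i < n; i++)
        for(int j = i + 1; j < n; j++)
            if(isupper)
                a[j*n + i] = a[i*n + j]; // lower element := upper element
            else
                a[i*n + j] = a[j*n + i]; // upper element := lower element
}
```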
/************************************************************************* This subroutine calculates C = alpha*op1(A)*op2(B) +beta*C where: * C is MxN general matrix * op1(A) is MxK matrix * op2(B) is KxN matrix * "op" may be identity transformation, transposition Additional info: * cache-oblivious algorithm is used. * multiplication result replaces C. If Beta=0, C elements are not used in calculations (not multiplied by zero - just not referenced) * if Alpha=0, A is not used (not multiplied by zero - just not referenced) * if both Beta and Alpha are zero, C is filled by zeros. IMPORTANT: This function does NOT preallocate output matrix C, it MUST be preallocated by caller prior to calling this function. In case C does not have enough space to store result, exception will be generated. INPUT PARAMETERS M - matrix size, M>0 N - matrix size, N>0 K - matrix size, K>0 Alpha - coefficient A - matrix IA - submatrix offset JA - submatrix offset OpTypeA - transformation type: * 0 - no transformation * 1 - transposition B - matrix IB - submatrix offset JB - submatrix offset OpTypeB - transformation type: * 0 - no transformation * 1 - transposition Beta - coefficient C - PREALLOCATED output matrix, large enough to store result IC - submatrix offset JC - submatrix offset ! FREE EDITION OF ALGLIB: ! ! Free Edition of ALGLIB supports following important features for this ! function: ! * C++ version: x64 SIMD support using C++ intrinsics ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics ! ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB ! Reference Manual in order to find out how to activate SIMD support ! in ALGLIB. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. -- ALGLIB routine -- 2009-2019 Bochkanov Sergey *************************************************************************/
void rmatrixgemm(const ae_int_t m, const ae_int_t n, const ae_int_t k, const double alpha, const real_2d_array &a, const ae_int_t ia, const ae_int_t ja, const ae_int_t optypea, const real_2d_array &b, const ae_int_t ib, const ae_int_t jb, const ae_int_t optypeb, const double beta, real_2d_array &c, const ae_int_t ic, const ae_int_t jc, const xparams _xparams = alglib::xdefault);

Examples:   [1]  
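The GEMM semantics documented above can be spelled out with a naive, self-contained reference (not ALGLIB code): flat row-major storage, submatrix offsets omitted, and only the real op types 0 (identity) and 1 (transposition) handled. Note how Beta=0 leaves the previous contents of C unreferenced, as the comment block promises.

```cpp
#include <cassert>
#include <vector>

// Element (i,j) of op(M), where M is stored row-major with `cols` columns;
// op==0 is the identity, op==1 is transposition.
static double at(const std::vector<double>& m, int cols, int i, int j, int op)
{
    return op == 0 ? m[i*cols + j] : m[j*cols + i];
}

// Naive reference for C := alpha*op1(A)*op2(B) + beta*C with C of size MxN,
// op1(A) MxK and op2(B) KxN (a sketch of the semantics, not ALGLIB's
// cache-oblivious implementation).
void ref_gemm(int m, int n, int k, double alpha,
              const std::vector<double>& a, int acols, int optypea,
              const std::vector<double>& b, int bcols, int optypeb,
              double beta, std::vector<double>& c)
{
    for(int i = 0; i < m; i++)
        for(int j = 0; j < n; j++)
        {
            double s = 0;
            for(int t = 0; t < k; t++)
                s += at(a, acols, i, t, optypea)*at(b, bcols, t, j, optypeb);
            // Beta==0: old C elements are not referenced at all
            c[i*n + j] = alpha*s + (beta == 0 ? 0.0 : beta*c[i*n + j]);
        }
}
```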

/************************************************************************* *************************************************************************/
void rmatrixgemv(const ae_int_t m, const ae_int_t n, const double alpha, const real_2d_array &a, const ae_int_t ia, const ae_int_t ja, const ae_int_t opa, const real_1d_array &x, const ae_int_t ix, const double beta, real_1d_array &y, const ae_int_t iy, const xparams _xparams = alglib::xdefault);
/************************************************************************* Performs generalized copy: B := Beta*B + Alpha*A. If Beta=0, then the previous contents of B are simply ignored. If Alpha=0, then A is ignored and not referenced. If both Alpha and Beta are zero, B is filled by zeros. Input parameters: M - number of rows N - number of columns Alpha- coefficient A - source matrix, MxN submatrix is copied IA - submatrix offset (row index) JA - submatrix offset (column index) Beta- coefficient B - destination matrix, must be large enough to store result IB - submatrix offset (row index) JB - submatrix offset (column index) *************************************************************************/
void rmatrixgencopy(const ae_int_t m, const ae_int_t n, const double alpha, const real_2d_array &a, const ae_int_t ia, const ae_int_t ja, const double beta, real_2d_array &b, const ae_int_t ib, const ae_int_t jb, const xparams _xparams = alglib::xdefault);
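The generalized copy above is simple enough to state as a one-loop reference (again a sketch, not ALGLIB code, with offsets omitted and flat storage):

```cpp
#include <cassert>
#include <vector>

// Reference for the generalized copy described above, applied elementwise
// over an MxN block: B := beta*B + alpha*A. beta==0 ignores the previous
// contents of B; alpha==0 does not reference A.
void ref_gencopy(int m, int n, double alpha, const std::vector<double>& a,
                 double beta, std::vector<double>& b)
{
    for(int i = 0; i < m*n; i++)
        b[i] = (alpha == 0 ? 0.0 : alpha*a[i])
             + (beta  == 0 ? 0.0 : beta*b[i]);
}
```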
/************************************************************************* Rank-1 correction: A := A + alpha*u*v' NOTE: this function expects A to be large enough to store result. No automatic preallocation happens for smaller arrays. No integrity checks are performed on the sizes of A, u, v. INPUT PARAMETERS: M - number of rows N - number of columns A - target matrix, MxN submatrix is updated IA - submatrix offset (row index) JA - submatrix offset (column index) Alpha- coefficient U - vector #1 IU - subvector offset V - vector #2 IV - subvector offset -- ALGLIB routine -- 16.10.2017 Bochkanov Sergey *************************************************************************/
void rmatrixger(const ae_int_t m, const ae_int_t n, real_2d_array &a, const ae_int_t ia, const ae_int_t ja, const double alpha, const real_1d_array &u, const ae_int_t iu, const real_1d_array &v, const ae_int_t iv, const xparams _xparams = alglib::xdefault);
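The rank-1 update A := A + alpha*u*v' amounts to adding alpha*u[i]*v[j] to each element, which the following self-contained sketch (not ALGLIB code; flat row-major storage, offsets omitted) makes explicit:

```cpp
#include <cassert>
#include <vector>

// Reference for the rank-1 correction described above: A := A + alpha*u*v',
// where A is an MxN row-major matrix, u has M elements and v has N elements.
void ref_ger(int m, int n, std::vector<double>& a, double alpha,
             const std::vector<double>& u, const std::vector<double>& v)
{
    for(int i = 0; i < m; i++)
        for(int j = 0; j < n; j++)
            a[i*n + j] += alpha*u[i]*v[j];
}
```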
/************************************************************************* This subroutine calculates op(A^-1)*X where: * X is MxN general matrix * A is MxM upper/lower triangular/unitriangular matrix * "op" may be identity transformation, transposition Multiplication result replaces X. INPUT PARAMETERS N - matrix size, N>=0 M - matrix size, M>=0 A - matrix, actual matrix is stored in A[I1:I1+M-1,J1:J1+M-1] I1 - submatrix offset J1 - submatrix offset IsUpper - whether matrix is upper triangular IsUnit - whether matrix is unitriangular OpType - transformation type: * 0 - no transformation * 1 - transposition X - matrix, actual matrix is stored in X[I2:I2+M-1,J2:J2+N-1] I2 - submatrix offset J2 - submatrix offset ! FREE EDITION OF ALGLIB: ! ! Free Edition of ALGLIB supports following important features for this ! function: ! * C++ version: x64 SIMD support using C++ intrinsics ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics ! ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB ! Reference Manual in order to find out how to activate SIMD support ! in ALGLIB. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. -- ALGLIB routine -- 15.12.2009-22.01.2018 Bochkanov Sergey *************************************************************************/
void rmatrixlefttrsm(const ae_int_t m, const ae_int_t n, const real_2d_array &a, const ae_int_t i1, const ae_int_t j1, const bool isupper, const bool isunit, const ae_int_t optype, real_2d_array &x, const ae_int_t i2, const ae_int_t j2, const xparams _xparams = alglib::xdefault);
/************************************************************************* IMPORTANT: this function is deprecated since ALGLIB 3.13. Use RMatrixGEMV() which is a more generic version of this function. Matrix-vector product: y := op(A)*x INPUT PARAMETERS: M - number of rows of op(A) N - number of columns of op(A) A - target matrix IA - submatrix offset (row index) JA - submatrix offset (column index) OpA - operation type: * OpA=0 => op(A) = A * OpA=1 => op(A) = A^T X - input vector IX - subvector offset IY - subvector offset Y - preallocated vector, must be large enough to store result OUTPUT PARAMETERS: Y - vector which stores result if M=0, then subroutine does nothing. if N=0, Y is filled by zeros. -- ALGLIB routine -- 28.01.2010 Bochkanov Sergey *************************************************************************/
void rmatrixmv(const ae_int_t m, const ae_int_t n, const real_2d_array &a, const ae_int_t ia, const ae_int_t ja, const ae_int_t opa, const real_1d_array &x, const ae_int_t ix, real_1d_array &y, const ae_int_t iy, const xparams _xparams = alglib::xdefault);
/************************************************************************* IMPORTANT: this function is deprecated since ALGLIB 3.13. Use RMatrixGER() which is a more generic version of this function. Rank-1 correction: A := A + u*v' INPUT PARAMETERS: M - number of rows N - number of columns A - target matrix, MxN submatrix is updated IA - submatrix offset (row index) JA - submatrix offset (column index) U - vector #1 IU - subvector offset V - vector #2 IV - subvector offset *************************************************************************/
void rmatrixrank1(const ae_int_t m, const ae_int_t n, real_2d_array &a, const ae_int_t ia, const ae_int_t ja, const real_1d_array &u, const ae_int_t iu, const real_1d_array &v, const ae_int_t iv, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine calculates X*op(A^-1) where: * X is MxN general matrix * A is NxN upper/lower triangular/unitriangular matrix * "op" may be identity transformation, transposition Multiplication result replaces X. INPUT PARAMETERS N - matrix size, N>=0 M - matrix size, M>=0 A - matrix, actual matrix is stored in A[I1:I1+N-1,J1:J1+N-1] I1 - submatrix offset J1 - submatrix offset IsUpper - whether matrix is upper triangular IsUnit - whether matrix is unitriangular OpType - transformation type: * 0 - no transformation * 1 - transposition X - matrix, actual matrix is stored in X[I2:I2+M-1,J2:J2+N-1] I2 - submatrix offset J2 - submatrix offset ! FREE EDITION OF ALGLIB: ! ! Free Edition of ALGLIB supports following important features for this ! function: ! * C++ version: x64 SIMD support using C++ intrinsics ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics ! ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB ! Reference Manual in order to find out how to activate SIMD support ! in ALGLIB. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. -- ALGLIB routine -- 15.12.2009-22.01.2018 Bochkanov Sergey *************************************************************************/
void rmatrixrighttrsm(const ae_int_t m, const ae_int_t n, const real_2d_array &a, const ae_int_t i1, const ae_int_t j1, const bool isupper, const bool isunit, const ae_int_t optype, real_2d_array &x, const ae_int_t i2, const ae_int_t j2, const xparams _xparams = alglib::xdefault);
/************************************************************************* *************************************************************************/
void rmatrixsymv(const ae_int_t n, const double alpha, const real_2d_array &a, const ae_int_t ia, const ae_int_t ja, const bool isupper, const real_1d_array &x, const ae_int_t ix, const double beta, real_1d_array &y, const ae_int_t iy, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This subroutine calculates C=alpha*A*A^T+beta*C or C=alpha*A^T*A+beta*C
where:
* C is NxN symmetric matrix given by its upper/lower triangle
* A is NxK matrix when A*A^T is calculated, KxN matrix otherwise

Additional info:
* multiplication result replaces C. If Beta=0, C elements are not used in
  calculations (not multiplied by zero - just not referenced)
* if Alpha=0, A is not used (not multiplied by zero - just not referenced)
* if both Beta and Alpha are zero, C is filled by zeros.

INPUT PARAMETERS
    N       -   matrix size, N>=0
    K       -   matrix size, K>=0
    Alpha   -   coefficient
    A       -   matrix
    IA      -   submatrix offset (row index)
    JA      -   submatrix offset (column index)
    OpTypeA -   multiplication type:
                * 0 - A*A^T is calculated
                * 2 - A^T*A is calculated
    Beta    -   coefficient
    C       -   preallocated input/output matrix
    IC      -   submatrix offset (row index)
    JC      -   submatrix offset (column index)
    IsUpper -   whether C is upper triangular or lower triangular

  ! FREE EDITION OF ALGLIB:
  !
  ! Free Edition of ALGLIB supports following important features for this
  ! function:
  ! * C++ version: x64 SIMD support using C++ intrinsics
  ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
  !
  ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB
  ! Reference Manual in order to find out how to activate SIMD support
  ! in ALGLIB.

  ! COMMERCIAL EDITION OF ALGLIB:
  !
  ! Commercial Edition of ALGLIB includes following important improvements
  ! of this function:
  ! * high-performance native backend with same C# interface (C# version)
  ! * multithreading support (C++ and C# versions)
  ! * hardware vendor (Intel) implementations of linear algebra primitives
  !   (C++ and C# versions, x86/x64 platform)
  !
  ! We recommend you to read 'Working with commercial version' section of
  ! ALGLIB Reference Manual in order to find out how to use performance-
  ! related features provided by commercial edition of ALGLIB.

  -- ALGLIB routine --
     16.12.2009-22.01.2018
     Bochkanov Sergey
*************************************************************************/
void rmatrixsyrk(const ae_int_t n, const ae_int_t k, const double alpha, const real_2d_array &a, const ae_int_t ia, const ae_int_t ja, const ae_int_t optypea, const double beta, real_2d_array &c, const ae_int_t ic, const ae_int_t jc, const bool isupper, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* *************************************************************************/
double rmatrixsyvmv(const ae_int_t n, const real_2d_array &a, const ae_int_t ia, const ae_int_t ja, const bool isupper, const real_1d_array &x, const ae_int_t ix, real_1d_array &tmp, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Cache-oblivious real "copy-and-transpose"

Input parameters:
    M   -   number of rows
    N   -   number of columns
    A   -   source matrix, MxN submatrix is copied and transposed
    IA  -   submatrix offset (row index)
    JA  -   submatrix offset (column index)
    B   -   destination matrix, must be large enough to store result
    IB  -   submatrix offset (row index)
    JB  -   submatrix offset (column index)
*************************************************************************/
void rmatrixtranspose(const ae_int_t m, const ae_int_t n, const real_2d_array &a, const ae_int_t ia, const ae_int_t ja, real_2d_array &b, const ae_int_t ib, const ae_int_t jb, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This subroutine solves linear system op(A)*x=b where:
* A is NxN upper/lower triangular/unitriangular matrix
* X and B are Nx1 vectors
* "op" may be identity transformation or transposition

Solution replaces X.

IMPORTANT:
* no overflow/underflow/degeneracy tests are performed.
* no integrity checks for operand sizes, out-of-bounds accesses and so on
  are performed

INPUT PARAMETERS
    N       -   matrix size, N>=0
    A       -   matrix, actual matrix is stored in A[IA:IA+N-1,JA:JA+N-1]
    IA      -   submatrix offset
    JA      -   submatrix offset
    IsUpper -   whether matrix is upper triangular
    IsUnit  -   whether matrix is unitriangular
    OpType  -   transformation type:
                * 0 - no transformation
                * 1 - transposition
    X       -   right part, actual vector is stored in X[IX:IX+N-1]
    IX      -   offset

OUTPUT PARAMETERS
    X       -   solution replaces elements X[IX:IX+N-1]

  -- ALGLIB routine / remastering of LAPACK's DTRSV --
     (c) 2017 Bochkanov Sergey - converted to ALGLIB
     (c) 2016 Reference BLAS level1 routine (LAPACK version 3.7.0)
     Reference BLAS is a software package provided by Univ. of Tennessee,
     Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd.
*************************************************************************/
void rmatrixtrsv(const ae_int_t n, const real_2d_array &a, const ae_int_t ia, const ae_int_t ja, const bool isupper, const bool isunit, const ae_int_t optype, real_1d_array &x, const ae_int_t ix, const xparams _xparams = alglib::xdefault);
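For intuition, the triangular solve performed by rmatrixtrsv() in the upper triangular, non-unit, non-transposed case is plain back substitution. The sketch below is an ALGLIB-independent plain-C++ illustration (solve_upper is a hypothetical helper, not part of the library); like the real routine, it performs no degeneracy checks:

```cpp
#include <cstddef>
#include <vector>

// Back substitution for an upper triangular system U*x = b.
// Mirrors what rmatrixtrsv does for IsUpper=true, IsUnit=false, OpType=0;
// as in the ALGLIB routine, no overflow/degeneracy tests are performed.
std::vector<double> solve_upper(const std::vector<std::vector<double>> &u,
                                std::vector<double> b)
{
    const std::size_t n = b.size();
    for (std::size_t i = n; i-- > 0; )
    {
        for (std::size_t j = i + 1; j < n; j++)
            b[i] -= u[i][j] * b[j];
        b[i] /= u[i][i];   // this division is skipped when IsUnit=true
    }
    return b;              // solution overwrites the right-hand side
}
```

For example, solve_upper({{2,1},{0,1}}, {5,3}) solves the last equation first (x2=3), then substitutes upward (x1=(5-3)/2=1), just as the in-place routine overwrites X[IX:IX+N-1].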
/*************************************************************************
Copy

Input parameters:
    N   -   subvector size
    A   -   source vector, N elements are copied
    IA  -   source offset (first element index)
    B   -   destination vector, must be large enough to store result
    IB  -   destination offset (first element index)
*************************************************************************/
void rvectorcopy(const ae_int_t n, const real_1d_array &a, const ae_int_t ia, real_1d_array &b, const ae_int_t ib, const xparams _xparams = alglib::xdefault);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "linalg.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        real_2d_array a = "[[2,1],[1,3]]";
        real_2d_array b = "[[2,1],[0,1]]";
        real_2d_array c = "[[0,0],[0,0]]";

        //
        // rmatrixgemm() function allows us to calculate matrix product C:=A*B or
        // to perform more general operation, C:=alpha*op1(A)*op2(B)+beta*C,
        // where A, B, C are rectangular matrices, op(X) can be X or X^T,
        // alpha and beta are scalars.
        //
        // This function:
        // * can apply transposition and/or multiplication by scalar to operands
        // * can use arbitrary part of matrices A/B (given by submatrix offset)
        // * can store result into arbitrary part of C
        // * for performance reasons requires C to be preallocated
        //
        // Parameters of this function are:
        // * M, N, K            -   sizes of op1(A) (which is MxK), op2(B) (which
        //                          is KxN) and C (which is MxN)
        // * Alpha              -   coefficient before A*B
        // * A, IA, JA          -   matrix A and offset of the submatrix
        // * OpTypeA            -   transformation type:
        //                          0 - no transformation
        //                          1 - transposition
        // * B, IB, JB          -   matrix B and offset of the submatrix
        // * OpTypeB            -   transformation type:
        //                          0 - no transformation
        //                          1 - transposition
        // * Beta               -   coefficient before C
        // * C, IC, JC          -   preallocated matrix C and offset of the submatrix
        //
        // Below we perform simple product C:=A*B (alpha=1, beta=0)
        //
        // IMPORTANT: this function works with preallocated C, which must be large
        //            enough to store multiplication result.
        //
        ae_int_t m = 2;
        ae_int_t n = 2;
        ae_int_t k = 2;
        double alpha = 1.0;
        ae_int_t ia = 0;
        ae_int_t ja = 0;
        ae_int_t optypea = 0;
        ae_int_t ib = 0;
        ae_int_t jb = 0;
        ae_int_t optypeb = 0;
        double beta = 0.0;
        ae_int_t ic = 0;
        ae_int_t jc = 0;
        rmatrixgemm(m, n, k, alpha, a, ia, ja, optypea, b, ib, jb, optypeb, beta, c, ic, jc);
        printf("%s\n", c.tostring(3).c_str()); // EXPECTED: [[4,3],[2,4]]

        //
        // Now we try to apply some simple transformation to operands: C:=A*B^T
        //
        optypeb = 1;
        rmatrixgemm(m, n, k, alpha, a, ia, ja, optypea, b, ib, jb, optypeb, beta, c, ic, jc);
        printf("%s\n", c.tostring(3).c_str()); // EXPECTED: [[5,1],[5,3]]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}
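As an ALGLIB-independent cross-check of the two products in the example above, the same results can be reproduced with a naive triple loop; the mul() helper below is purely illustrative and covers only the alpha=1, beta=0 case used in the example:

```cpp
#include <array>

// Naive reference for C = A*op2(B) on 2x2 matrices, where op2(B) is
// either B (transpose_b=false) or B^T (transpose_b=true). Illustrative
// only; rmatrixgemm() computes the same thing for alpha=1, beta=0.
using mat2 = std::array<std::array<double, 2>, 2>;

mat2 mul(const mat2 &a, const mat2 &b, bool transpose_b)
{
    mat2 c{};   // zero-initialized, i.e. beta*C with beta=0
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            for (int p = 0; p < 2; p++)
                c[i][j] += a[i][p] * (transpose_b ? b[j][p] : b[p][j]);
    return c;
}
```

With a=[[2,1],[1,3]] and b=[[2,1],[0,1]], mul(a,b,false) yields [[4,3],[2,4]] and mul(a,b,true) yields [[5,1],[5,3]], matching the EXPECTED comments in the example.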

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "linalg.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // rmatrixsyrk() function allows us to calculate symmetric rank-K update
        // C := beta*C + alpha*A'*A, where C is a square N*N matrix, A is a K*N
        // matrix, and alpha and beta are scalars. It is also possible to update by
        // adding A*A' instead of A'*A.
        //
        // Parameters of this function are:
        // * N, K       -   matrix size
        // * Alpha      -   coefficient before A
        // * A, IA, JA  -   matrix and submatrix offsets
        // * OpTypeA    -   multiplication type:
        //                  * 0 - A*A^T is calculated
        //                  * 2 - A^T*A is calculated
        // * Beta       -   coefficient before C
        // * C, IC, JC  -   preallocated input/output matrix and submatrix offsets
        // * IsUpper    -   whether upper or lower triangle of C is updated;
        //                  this function updates only one half of C, leaving
        //                  other half unchanged (not referenced at all).
        //
        // Below we will show how to calculate simple product C:=A'*A
        //
        // NOTE: beta=0, so the previous value of C is not used, but C still
        //       MUST be preallocated.
        //
        ae_int_t n = 2;
        ae_int_t k = 1;
        double alpha = 1.0;
        ae_int_t ia = 0;
        ae_int_t ja = 0;
        ae_int_t optypea = 2;
        double beta = 0.0;
        ae_int_t ic = 0;
        ae_int_t jc = 0;
        bool isupper = true;
        real_2d_array a = "[[1,2]]";

        // preallocate space to store result
        real_2d_array c = "[[0,0],[0,0]]";

        // calculate product, store result into upper part of c
        rmatrixsyrk(n, k, alpha, a, ia, ja, optypea, beta, c, ic, jc, isupper);

        // output result.
        // IMPORTANT: lower triangle of C was NOT updated!
        printf("%s\n", c.tostring(3).c_str()); // EXPECTED: [[1,2],[0,4]]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}
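As an ALGLIB-independent cross-check of the rank-K update above (alpha=1, beta=0, OpTypeA=2, IsUpper=true), the same result can be reproduced with a naive loop that fills only the upper triangle; syrk_upper() below is purely illustrative:

```cpp
#include <vector>

// Reference for a symmetric rank-K update with alpha=1, beta=0:
// C := A^T*A, writing only the upper triangle and leaving the lower
// triangle untouched, which is what the rmatrixsyrk call above produces.
std::vector<std::vector<double>>
syrk_upper(const std::vector<std::vector<double>> &a)
{
    const std::size_t k = a.size();       // rows of A
    const std::size_t n = a[0].size();    // columns of A, size of C
    std::vector<std::vector<double>> c(n, std::vector<double>(n, 0.0));
    for (std::size_t i = 0; i < n; i++)
        for (std::size_t j = i; j < n; j++)      // upper triangle only
            for (std::size_t p = 0; p < k; p++)
                c[i][j] += a[p][i] * a[p][j];
    return c;
}
```

For A=[[1,2]] this gives C=[[1,2],[0,4]]: the lower-left element stays at its initial value, mirroring the "lower triangle of C was NOT updated" note in the example.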

airy
/*************************************************************************
Airy function

Solution of the differential equation y''(x) = x*y.

The function returns the two independent solutions Ai, Bi and their first
derivatives Ai'(x), Bi'(x).

Evaluation is by power series summation for small x, by rational minimax
approximations for large x.

ACCURACY:
Error criterion is absolute when function <= 1, relative when function > 1,
except * denotes relative error criterion. For large negative x, the
absolute error increases as x^1.5. For large positive x, the relative
error increases as x^1.5.

Arithmetic  domain    function  # trials      peak         rms
IEEE        -10, 0      Ai       10000       1.6e-15     2.7e-16
IEEE          0, 10     Ai       10000       2.3e-14*    1.8e-15*
IEEE        -10, 0      Ai'      10000       4.6e-15     7.6e-16
IEEE          0, 10     Ai'      10000       1.8e-14*    1.5e-15*
IEEE        -10, 10     Bi       30000       4.2e-15     5.3e-16
IEEE        -10, 10     Bi'      30000       4.9e-15     7.3e-16

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier
*************************************************************************/
void airy(const double x, double &ai, double &aip, double &bi, double &bip, const xparams _xparams = alglib::xdefault);
autogkreport
autogkstate
autogkintegrate
autogkiteration
autogkresults
autogksingular
autogksmooth
autogksmoothw
autogk_d1 Integrating f=exp(x) by adaptive integrator
/*************************************************************************
Integration report:
* TerminationType = completion code:
    * -5    non-convergence of Gauss-Kronrod nodes calculation subroutine.
    * -1    incorrect parameters were specified
    *  1    OK
* Rep.NFEV contains number of function calculations
* Rep.NIntervals contains number of intervals [a,b] was partitioned into.
*************************************************************************/
class autogkreport { public: autogkreport(); autogkreport(const autogkreport &rhs); autogkreport& operator=(const autogkreport &rhs); virtual ~autogkreport(); ae_int_t terminationtype; ae_int_t nfev; ae_int_t nintervals; };
/*************************************************************************
This structure stores state of the integration algorithm.

Although this class has public fields, they are not intended for external
use. You should use ALGLIB functions to work with this class:
* autogksmooth()/AutoGKSmoothW()/... to create objects
* autogkintegrate() to begin integration
* autogkresults() to get results
*************************************************************************/
class autogkstate { public: autogkstate(); autogkstate(const autogkstate &rhs); autogkstate& operator=(const autogkstate &rhs); virtual ~autogkstate(); };
/*************************************************************************
This function is used to start iterations of the 1-dimensional integrator.

It accepts following parameters:
    func    -   callback which calculates f(x) for given x
    ptr     -   optional pointer which is passed to func; can be NULL

  -- ALGLIB --
     Copyright 07.05.2009 by Bochkanov Sergey
*************************************************************************/
void autogkintegrate(autogkstate &state, void (*func)(double x, double xminusa, double bminusx, double &y, void *ptr), void *ptr = NULL, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This function provides a reverse communication interface.

The reverse communication interface is neither documented nor recommended
for use. See below for functions which provide a better documented API.
*************************************************************************/
bool autogkiteration(autogkstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Adaptive integration results

Called after AutoGKIteration returned False.

Input parameters:
    State   -   algorithm state (used by AutoGKIteration).

Output parameters:
    V       -   integral(f(x)dx,a,b)
    Rep     -   integration report (see AutoGKReport description)

  -- ALGLIB --
     Copyright 14.11.2007 by Bochkanov Sergey
*************************************************************************/
void autogkresults(const autogkstate &state, double &v, autogkreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Integration on a finite interval [A,B].

The integrand has integrable singularities at A/B.

F(X) must diverge as "(x-A)^alpha" at A, as "(B-x)^beta" at B, with known
alpha/beta (alpha>-1, beta>-1). If alpha/beta are not known, estimates
from below can be used (but these estimates should be greater than -1
too).

One of the alpha/beta variables (or even both) may be equal to 0, which
means that the function F(x) is non-singular at A/B. In any case (singular
at the bounds or not), the function F(x) is supposed to be continuous on
(A,B).

A fast-convergent algorithm based on a Gauss-Kronrod formula is used.
Result is calculated with accuracy close to the machine precision.

INPUT PARAMETERS:
    A, B    -   interval boundaries (A<B, A=B or A>B)
    Alpha   -   power-law coefficient of the F(x) at A, Alpha>-1
    Beta    -   power-law coefficient of the F(x) at B, Beta>-1

OUTPUT PARAMETERS
    State   -   structure which stores algorithm state

SEE ALSO
AutoGKSmooth, AutoGKSmoothW, AutoGKResults.

  -- ALGLIB --
     Copyright 06.05.2009 by Bochkanov Sergey
*************************************************************************/
void autogksingular(const double a, const double b, const double alpha, const double beta, autogkstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Integration of a smooth function F(x) on a finite interval [a,b].

A fast-convergent algorithm based on a Gauss-Kronrod formula is used.
Result is calculated with accuracy close to the machine precision.

The algorithm works well only with smooth integrands. It may be used with
continuous non-smooth integrands, but with lower performance. It should
never be used with integrands which have integrable singularities at the
lower or upper limits - the algorithm may crash. Use AutoGKSingular in
such cases.

INPUT PARAMETERS:
    A, B    -   interval boundaries (A<B, A=B or A>B)

OUTPUT PARAMETERS
    State   -   structure which stores algorithm state

SEE ALSO
AutoGKSmoothW, AutoGKSingular, AutoGKResults.

  -- ALGLIB --
     Copyright 06.05.2009 by Bochkanov Sergey
*************************************************************************/
void autogksmooth(const double a, const double b, autogkstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Integration of a smooth function F(x) on a finite interval [a,b].

This subroutine is the same as AutoGKSmooth(), but it guarantees that the
interval [a,b] is partitioned into subintervals which have width at most
XWidth.

This subroutine can be used when integrating a nearly-constant function
with narrow "bumps" (about XWidth wide). If the "bumps" are too narrow,
the AutoGKSmooth subroutine can overlook them.

INPUT PARAMETERS:
    A, B    -   interval boundaries (A<B, A=B or A>B)

OUTPUT PARAMETERS
    State   -   structure which stores algorithm state

SEE ALSO
AutoGKSmooth, AutoGKSingular, AutoGKResults.

  -- ALGLIB --
     Copyright 06.05.2009 by Bochkanov Sergey
*************************************************************************/
void autogksmoothw(const double a, const double b, const double xwidth, autogkstate &state, const xparams _xparams = alglib::xdefault);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "integration.h"

using namespace alglib;
void int_function_1_func(double x, double xminusa, double bminusx, double &y, void *ptr) 
{
    // this callback calculates f(x)=exp(x)
    y = exp(x);
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates integration of f=exp(x) on [0,1]:
        // * first, autogkstate is initialized
        // * then we call integration function
        // * and finally we obtain results with autogkresults() call
        //
        double a = 0;
        double b = 1;
        autogkstate s;
        double v;
        autogkreport rep;

        autogksmooth(a, b, s);
        alglib::autogkintegrate(s, int_function_1_func);
        autogkresults(s, v, rep);

        printf("%.4f\n", double(v)); // EXPECTED: 1.7183
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}
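As an ALGLIB-independent sanity check of the result above, the same integral can be approximated with composite Simpson's rule, a much cruder scheme than the adaptive Gauss-Kronrod integrator behind autogksmooth(); the simpson() helper below is purely illustrative:

```cpp
#include <cmath>

// Integrand from the example: f(x) = exp(x).
double f_exp(double x) { return std::exp(x); }

// Composite Simpson's rule on [a,b] with n subintervals (n must be even).
// Enough to confirm integral(exp(x), 0, 1) = exp(1)-1 ~ 1.7183, although
// far less accurate per function evaluation than Gauss-Kronrod.
double simpson(double (*f)(double), double a, double b, int n)
{
    double h = (b - a) / n;
    double s = f(a) + f(b);
    for (int i = 1; i < n; i++)
        s += f(a + i * h) * (i % 2 ? 4.0 : 2.0);
    return s * h / 3.0;
}
```

simpson(f_exp, 0.0, 1.0, 100) agrees with exp(1)-1 to roughly 1e-10, while autogkresults() returns a value accurate to near machine precision.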

cov2
covm
covm2
pearsoncorr2
pearsoncorrelation
pearsoncorrm
pearsoncorrm2
rankdata
rankdatacentered
sampleadev
samplekurtosis
samplemean
samplemedian
samplemoments
samplepercentile
sampleskewness
samplevariance
spearmancorr2
spearmancorrm
spearmancorrm2
spearmanrankcorrelation
basestat_d_base Basic functionality (moments, adev, median, percentile)
basestat_d_c2 Correlation (covariance) between two random variables
basestat_d_cm Correlation (covariance) between components of random vector
basestat_d_cm2 Correlation (covariance) between two random vectors
/*************************************************************************
2-sample covariance

Input parameters:
    X   -   sample 1 (array indexes: [0..N-1])
    Y   -   sample 2 (array indexes: [0..N-1])
    N   -   N>=0, sample size:
            * if given, only N leading elements of X/Y are processed
            * if not given, automatically determined from input sizes

Result:
    covariance (zero for N=0 or N=1)

  -- ALGLIB --
     Copyright 28.10.2010 by Bochkanov Sergey
*************************************************************************/
double cov2(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, const xparams _xparams = alglib::xdefault); double cov2(const real_1d_array &x, const real_1d_array &y, const xparams _xparams = alglib::xdefault);
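For intuition, a two-sample covariance can be sketched as follows. This is an ALGLIB-independent reference using the usual unbiased N-1 normalization; the assumption that cov2() uses the same normalization should be checked against your ALGLIB version:

```cpp
#include <vector>

// Two-sample covariance with the unbiased N-1 divisor (assumption: cov2
// uses the same normalization). Samples must have equal length.
double cov2_ref(const std::vector<double> &x, const std::vector<double> &y)
{
    const std::size_t n = x.size();
    if (n < 2)
        return 0.0;                  // matches "zero for N=0 or N=1"
    double mx = 0.0, my = 0.0;
    for (std::size_t i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
    mx /= n;
    my /= n;
    double s = 0.0;
    for (std::size_t i = 0; i < n; i++)
        s += (x[i] - mx) * (y[i] - my);
    return s / (n - 1);
}
```

For example, cov2_ref({1,2,3}, {2,4,6}) gives 2.0.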

Examples:   [1]  

/*************************************************************************
Covariance matrix

  ! COMMERCIAL EDITION OF ALGLIB:
  !
  ! Commercial Edition of ALGLIB includes following important improvements
  ! of this function:
  ! * high-performance native backend with same C# interface (C# version)
  ! * multithreading support (C++ and C# versions)
  ! * hardware vendor (Intel) implementations of linear algebra primitives
  !   (C++ and C# versions, x86/x64 platform)
  !
  ! We recommend you to read 'Working with commercial version' section of
  ! ALGLIB Reference Manual in order to find out how to use performance-
  ! related features provided by commercial edition of ALGLIB.

INPUT PARAMETERS:
    X   -   array[N,M], sample matrix:
            * J-th column corresponds to J-th variable
            * I-th row corresponds to I-th observation
    N   -   N>=0, number of observations:
            * if given, only leading N rows of X are used
            * if not given, automatically determined from input size
    M   -   M>0, number of variables:
            * if given, only leading M columns of X are used
            * if not given, automatically determined from input size

OUTPUT PARAMETERS:
    C   -   array[M,M], covariance matrix (zero if N=0 or N=1)

  -- ALGLIB --
     Copyright 28.10.2010 by Bochkanov Sergey
*************************************************************************/
void covm(const real_2d_array &x, const ae_int_t n, const ae_int_t m, real_2d_array &c, const xparams _xparams = alglib::xdefault); void covm(const real_2d_array &x, real_2d_array &c, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Cross-covariance matrix

  ! COMMERCIAL EDITION OF ALGLIB:
  !
  ! Commercial Edition of ALGLIB includes following important improvements
  ! of this function:
  ! * high-performance native backend with same C# interface (C# version)
  ! * multithreading support (C++ and C# versions)
  ! * hardware vendor (Intel) implementations of linear algebra primitives
  !   (C++ and C# versions, x86/x64 platform)
  !
  ! We recommend you to read 'Working with commercial version' section of
  ! ALGLIB Reference Manual in order to find out how to use performance-
  ! related features provided by commercial edition of ALGLIB.

INPUT PARAMETERS:
    X   -   array[N,M1], sample matrix:
            * J-th column corresponds to J-th variable
            * I-th row corresponds to I-th observation
    Y   -   array[N,M2], sample matrix:
            * J-th column corresponds to J-th variable
            * I-th row corresponds to I-th observation
    N   -   N>=0, number of observations:
            * if given, only leading N rows of X/Y are used
            * if not given, automatically determined from input sizes
    M1  -   M1>0, number of variables in X:
            * if given, only leading M1 columns of X are used
            * if not given, automatically determined from input size
    M2  -   M2>0, number of variables in Y:
            * if given, only leading M2 columns of Y are used
            * if not given, automatically determined from input size

OUTPUT PARAMETERS:
    C   -   array[M1,M2], cross-covariance matrix (zero if N=0 or N=1)

  -- ALGLIB --
     Copyright 28.10.2010 by Bochkanov Sergey
*************************************************************************/
void covm2(const real_2d_array &x, const real_2d_array &y, const ae_int_t n, const ae_int_t m1, const ae_int_t m2, real_2d_array &c, const xparams _xparams = alglib::xdefault); void covm2(const real_2d_array &x, const real_2d_array &y, real_2d_array &c, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Pearson product-moment correlation coefficient

Input parameters:
    X   -   sample 1 (array indexes: [0..N-1])
    Y   -   sample 2 (array indexes: [0..N-1])
    N   -   N>=0, sample size:
            * if given, only N leading elements of X/Y are processed
            * if not given, automatically determined from input sizes

Result:
    Pearson product-moment correlation coefficient
    (zero for N=0 or N=1)

  -- ALGLIB --
     Copyright 28.10.2010 by Bochkanov Sergey
*************************************************************************/
double pearsoncorr2(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, const xparams _xparams = alglib::xdefault); double pearsoncorr2(const real_1d_array &x, const real_1d_array &y, const xparams _xparams = alglib::xdefault);
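The Pearson coefficient itself is straightforward to compute from centered sums; the sketch below is an ALGLIB-independent illustration (pearson_ref is a hypothetical helper, not the library implementation):

```cpp
#include <cmath>
#include <vector>

// Pearson product-moment correlation of two equal-length samples:
// r = sum((x-mx)*(y-my)) / sqrt(sum((x-mx)^2) * sum((y-my)^2)).
double pearson_ref(const std::vector<double> &x, const std::vector<double> &y)
{
    const std::size_t n = x.size();
    if (n < 2)
        return 0.0;                  // matches "zero for N=0 or N=1"
    double mx = 0.0, my = 0.0;
    for (std::size_t i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
    mx /= n;
    my /= n;
    double sxy = 0.0, sxx = 0.0, syy = 0.0;
    for (std::size_t i = 0; i < n; i++)
    {
        sxy += (x[i] - mx) * (y[i] - my);
        sxx += (x[i] - mx) * (x[i] - mx);
        syy += (y[i] - my) * (y[i] - my);
    }
    return sxy / std::sqrt(sxx * syy);
}
```

For perfectly linearly related samples such as {1,2,3} and {2,4,6}, the coefficient is 1.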

Examples:   [1]  

/*************************************************************************
Obsolete function; we recommend using PearsonCorr2().

  -- ALGLIB --
     Copyright 09.04.2007 by Bochkanov Sergey
*************************************************************************/
double pearsoncorrelation(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Pearson product-moment correlation matrix

  ! COMMERCIAL EDITION OF ALGLIB:
  !
  ! Commercial Edition of ALGLIB includes following important improvements
  ! of this function:
  ! * high-performance native backend with same C# interface (C# version)
  ! * multithreading support (C++ and C# versions)
  ! * hardware vendor (Intel) implementations of linear algebra primitives
  !   (C++ and C# versions, x86/x64 platform)
  !
  ! We recommend you to read 'Working with commercial version' section of
  ! ALGLIB Reference Manual in order to find out how to use performance-
  ! related features provided by commercial edition of ALGLIB.

INPUT PARAMETERS:
    X   -   array[N,M], sample matrix:
            * J-th column corresponds to J-th variable
            * I-th row corresponds to I-th observation
    N   -   N>=0, number of observations:
            * if given, only leading N rows of X are used
            * if not given, automatically determined from input size
    M   -   M>0, number of variables:
            * if given, only leading M columns of X are used
            * if not given, automatically determined from input size

OUTPUT PARAMETERS:
    C   -   array[M,M], correlation matrix (zero if N=0 or N=1)

  -- ALGLIB --
     Copyright 28.10.2010 by Bochkanov Sergey
*************************************************************************/
void pearsoncorrm(const real_2d_array &x, const ae_int_t n, const ae_int_t m, real_2d_array &c, const xparams _xparams = alglib::xdefault); void pearsoncorrm(const real_2d_array &x, real_2d_array &c, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Pearson product-moment cross-correlation matrix

  ! COMMERCIAL EDITION OF ALGLIB:
  !
  ! Commercial Edition of ALGLIB includes following important improvements
  ! of this function:
  ! * high-performance native backend with same C# interface (C# version)
  ! * multithreading support (C++ and C# versions)
  ! * hardware vendor (Intel) implementations of linear algebra primitives
  !   (C++ and C# versions, x86/x64 platform)
  !
  ! We recommend you to read 'Working with commercial version' section of
  ! ALGLIB Reference Manual in order to find out how to use performance-
  ! related features provided by commercial edition of ALGLIB.

INPUT PARAMETERS:
    X   -   array[N,M1], sample matrix:
            * J-th column corresponds to J-th variable
            * I-th row corresponds to I-th observation
    Y   -   array[N,M2], sample matrix:
            * J-th column corresponds to J-th variable
            * I-th row corresponds to I-th observation
    N   -   N>=0, number of observations:
            * if given, only leading N rows of X/Y are used
            * if not given, automatically determined from input sizes
    M1  -   M1>0, number of variables in X:
            * if given, only leading M1 columns of X are used
            * if not given, automatically determined from input size
    M2  -   M2>0, number of variables in Y:
            * if given, only leading M2 columns of Y are used
            * if not given, automatically determined from input size

OUTPUT PARAMETERS:
    C   -   array[M1,M2], cross-correlation matrix (zero if N=0 or N=1)

  -- ALGLIB --
     Copyright 28.10.2010 by Bochkanov Sergey
*************************************************************************/
void pearsoncorrm2(const real_2d_array &x, const real_2d_array &y, const ae_int_t n, const ae_int_t m1, const ae_int_t m2, real_2d_array &c, const xparams _xparams = alglib::xdefault); void pearsoncorrm2(const real_2d_array &x, const real_2d_array &y, real_2d_array &c, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This function replaces data in XY by their ranks:
* XY is processed row-by-row
* rows are processed separately
* tied data are correctly handled (tied ranks are calculated)
* ranking starts from 0, ends at NFeatures-1
* sum of within-row values is equal to (NFeatures-1)*NFeatures/2

  ! COMMERCIAL EDITION OF ALGLIB:
  !
  ! Commercial Edition of ALGLIB includes following important improvements
  ! of this function:
  ! * high-performance native backend with same C# interface (C# version)
  ! * multithreading support (C++ and C# versions)
  !
  ! We recommend you to read 'Working with commercial version' section of
  ! ALGLIB Reference Manual in order to find out how to use performance-
  ! related features provided by commercial edition of ALGLIB.

INPUT PARAMETERS:
    XY      -   array[NPoints,NFeatures], dataset
    NPoints -   number of points
    NFeatures-  number of features

OUTPUT PARAMETERS:
    XY      -   data are replaced by their within-row ranks;
                ranking starts from 0, ends at NFeatures-1

  -- ALGLIB --
     Copyright 18.04.2013 by Bochkanov Sergey
*************************************************************************/
void rankdata(real_2d_array &xy, const ae_int_t npoints, const ae_int_t nfeatures, const xparams _xparams = alglib::xdefault); void rankdata(real_2d_array &xy, const xparams _xparams = alglib::xdefault);
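The tied-rank rule described above (each tie group receives the average of the ranks it spans, with ranking starting from 0) can be illustrated for a single row; rank_row() below is a hypothetical ALGLIB-independent helper, not the library implementation:

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Within-row ranking with tied ranks averaged, starting from 0, as in
// the rankdata description. Operates on one row of the dataset.
std::vector<double> rank_row(const std::vector<double> &v)
{
    const std::size_t n = v.size();
    std::vector<std::size_t> idx(n);
    std::iota(idx.begin(), idx.end(), 0);
    std::sort(idx.begin(), idx.end(),
              [&](std::size_t a, std::size_t b) { return v[a] < v[b]; });
    std::vector<double> r(n);
    for (std::size_t i = 0; i < n; )
    {
        std::size_t j = i;
        while (j + 1 < n && v[idx[j + 1]] == v[idx[i]])
            j++;                                 // extend the tie group
        const double avg = (i + j) / 2.0;        // average rank for ties
        for (std::size_t p = i; p <= j; p++)
            r[idx[p]] = avg;
        i = j + 1;
    }
    return r;
}
```

For example, rank_row({10,20,20,30}) yields {0, 1.5, 1.5, 3}; the ranks sum to 6 = (NFeatures-1)*NFeatures/2 with NFeatures=4, as the description states.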
/*************************************************************************
This function replaces data in XY by their CENTERED ranks:
* XY is processed row-by-row
* rows are processed separately
* tied data are correctly handled (tied ranks are calculated)
* centered ranks are just usual ranks, but centered in such a way that
  the sum of within-row values is equal to 0.0.
* centering is performed by subtracting the mean from each row, i.e. it
  changes the mean value, but does NOT change higher moments

  ! COMMERCIAL EDITION OF ALGLIB:
  !
  ! Commercial Edition of ALGLIB includes following important improvements
  ! of this function:
  ! * high-performance native backend with same C# interface (C# version)
  ! * multithreading support (C++ and C# versions)
  !
  ! We recommend you to read 'Working with commercial version' section of
  ! ALGLIB Reference Manual in order to find out how to use performance-
  ! related features provided by commercial edition of ALGLIB.

INPUT PARAMETERS:
    XY      -   array[NPoints,NFeatures], dataset
    NPoints -   number of points
    NFeatures-  number of features

OUTPUT PARAMETERS:
    XY      -   data are replaced by their within-row centered ranks

  -- ALGLIB --
     Copyright 18.04.2013 by Bochkanov Sergey
*************************************************************************/
void rankdatacentered(real_2d_array &xy, const ae_int_t npoints, const ae_int_t nfeatures, const xparams _xparams = alglib::xdefault); void rankdatacentered(real_2d_array &xy, const xparams _xparams = alglib::xdefault);
/*************************************************************************
ADev (average absolute deviation)

Input parameters:
    X   -   sample
    N   -   N>=0, sample size:
            * if given, only leading N elements of X are processed
            * if not given, automatically determined from size of X

Output parameters:
    ADev-   average absolute deviation

  -- ALGLIB --
     Copyright 06.09.2006 by Bochkanov Sergey
*************************************************************************/
void sampleadev(const real_1d_array &x, const ae_int_t n, double &adev, const xparams _xparams = alglib::xdefault); void sampleadev(const real_1d_array &x, double &adev, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* Calculation of the kurtosis. INPUT PARAMETERS: X - sample N - N>=0, sample size: * if given, only leading N elements of X are processed * if not given, automatically determined from size of X NOTE: This function returns the result calculated by the 'SampleMoments' function and stored in the 'Kurtosis' variable. -- ALGLIB -- Copyright 06.09.2006 by Bochkanov Sergey *************************************************************************/
double samplekurtosis(const real_1d_array &x, const ae_int_t n, const xparams _xparams = alglib::xdefault); double samplekurtosis(const real_1d_array &x, const xparams _xparams = alglib::xdefault);
/************************************************************************* Calculation of the mean. INPUT PARAMETERS: X - sample N - N>=0, sample size: * if given, only leading N elements of X are processed * if not given, automatically determined from size of X NOTE: This function returns the result calculated by the 'SampleMoments' function and stored in the 'Mean' variable. -- ALGLIB -- Copyright 06.09.2006 by Bochkanov Sergey *************************************************************************/
double samplemean(const real_1d_array &x, const ae_int_t n, const xparams _xparams = alglib::xdefault); double samplemean(const real_1d_array &x, const xparams _xparams = alglib::xdefault);
/************************************************************************* Median calculation. Input parameters: X - sample (array indexes: [0..N-1]) N - N>=0, sample size: * if given, only leading N elements of X are processed * if not given, automatically determined from size of X Output parameters: Median -- ALGLIB -- Copyright 06.09.2006 by Bochkanov Sergey *************************************************************************/
void samplemedian(const real_1d_array &x, const ae_int_t n, double &median, const xparams _xparams = alglib::xdefault); void samplemedian(const real_1d_array &x, double &median, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* Calculation of the distribution moments: mean, variance, skewness, kurtosis. INPUT PARAMETERS: X - sample N - N>=0, sample size: * if given, only leading N elements of X are processed * if not given, automatically determined from size of X OUTPUT PARAMETERS: Mean - mean. Variance- variance. Skewness- skewness (if variance<>0; zero otherwise). Kurtosis- kurtosis (if variance<>0; zero otherwise). NOTE: variance is calculated by dividing the sum of squares by N-1, not N. -- ALGLIB -- Copyright 06.09.2006 by Bochkanov Sergey *************************************************************************/
void samplemoments(const real_1d_array &x, const ae_int_t n, double &mean, double &variance, double &skewness, double &kurtosis, const xparams _xparams = alglib::xdefault); void samplemoments(const real_1d_array &x, double &mean, double &variance, double &skewness, double &kurtosis, const xparams _xparams = alglib::xdefault);

Examples:   [1]  
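The moment conventions used here can be sketched in standalone C++ (an illustration, not ALGLIB source): variance divides by N-1, while skewness and excess kurtosis use 1/N central moments normalized by the (N-1)-based standard deviation. These conventions reproduce the EXPECTED values in the samplemoments example later in this section:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sample moments: mean, unbiased variance (N-1 divisor), skewness and
// excess kurtosis built from 1/N central moments normalized by the
// (N-1)-based standard deviation. Sketch for illustration only.
void moments(const std::vector<double>& x,
             double& mean, double& variance,
             double& skewness, double& kurtosis)
{
    const double n = (double)x.size();
    mean = 0;
    for (double v : x) mean += v;
    mean /= n;
    double m2 = 0, m3 = 0, m4 = 0;
    for (double v : x)
    {
        double d = v - mean;
        m2 += d * d;
        m3 += d * d * d;
        m4 += d * d * d * d;
    }
    variance = m2 / (n - 1);                            // unbiased variance
    double s = std::sqrt(variance);
    skewness = (m3 / n) / (s * s * s);
    kurtosis = (m4 / n) / (variance * variance) - 3.0;  // excess kurtosis
}
```

For the sample [0,1,4,9,16,25,36,49,64,81] this gives mean 28.5, variance 801.1667, skewness 0.5751 and kurtosis -1.2666, matching the example below.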

/************************************************************************* Percentile calculation. Input parameters: X - sample (array indexes: [0..N-1]) N - N>=0, sample size: * if given, only leading N elements of X are processed * if not given, automatically determined from size of X P - percentile (0<=P<=1) Output parameters: V - percentile -- ALGLIB -- Copyright 01.03.2008 by Bochkanov Sergey *************************************************************************/
void samplepercentile(const real_1d_array &x, const ae_int_t n, const double p, double &v, const xparams _xparams = alglib::xdefault); void samplepercentile(const real_1d_array &x, const double p, double &v, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* Calculation of the skewness. INPUT PARAMETERS: X - sample N - N>=0, sample size: * if given, only leading N elements of X are processed * if not given, automatically determined from size of X NOTE: This function returns the result calculated by the 'SampleMoments' function and stored in the 'Skewness' variable. -- ALGLIB -- Copyright 06.09.2006 by Bochkanov Sergey *************************************************************************/
double sampleskewness(const real_1d_array &x, const ae_int_t n, const xparams _xparams = alglib::xdefault); double sampleskewness(const real_1d_array &x, const xparams _xparams = alglib::xdefault);
/************************************************************************* Calculation of the variance. INPUT PARAMETERS: X - sample N - N>=0, sample size: * if given, only leading N elements of X are processed * if not given, automatically determined from size of X NOTE: This function returns the result calculated by the 'SampleMoments' function and stored in the 'Variance' variable. -- ALGLIB -- Copyright 06.09.2006 by Bochkanov Sergey *************************************************************************/
double samplevariance(const real_1d_array &x, const ae_int_t n, const xparams _xparams = alglib::xdefault); double samplevariance(const real_1d_array &x, const xparams _xparams = alglib::xdefault);
/************************************************************************* Spearman's rank correlation coefficient Input parameters: X - sample 1 (array indexes: [0..N-1]) Y - sample 2 (array indexes: [0..N-1]) N - N>=0, sample size: * if given, only N leading elements of X/Y are processed * if not given, automatically determined from input sizes Result: Spearman's rank correlation coefficient (zero for N=0 or N=1) -- ALGLIB -- Copyright 09.04.2007 by Bochkanov Sergey *************************************************************************/
double spearmancorr2(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, const xparams _xparams = alglib::xdefault); double spearmancorr2(const real_1d_array &x, const real_1d_array &y, const xparams _xparams = alglib::xdefault);

Examples:   [1]  
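Spearman's coefficient is, by definition, the Pearson correlation of the rank-transformed samples (with ties averaged). A standalone sketch of that definition, not ALGLIB source:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <numeric>
#include <vector>

// Tie-averaged 0-based ranks of a sample (illustrative helper).
static std::vector<double> ranks(const std::vector<double>& x)
{
    const int n = (int)x.size();
    std::vector<int> idx(n);
    std::iota(idx.begin(), idx.end(), 0);
    std::sort(idx.begin(), idx.end(),
              [&](int a, int b) { return x[a] < x[b]; });
    std::vector<double> r(n);
    for (int i = 0; i < n; )
    {
        int j = i;
        while (j + 1 < n && x[idx[j + 1]] == x[idx[i]]) j++;
        for (int k = i; k <= j; k++) r[idx[k]] = 0.5 * (i + j);
        i = j + 1;
    }
    return r;
}

// Spearman correlation = Pearson correlation of the rank vectors.
double spearman(const std::vector<double>& x, const std::vector<double>& y)
{
    std::vector<double> rx = ranks(x), ry = ranks(y);
    const int n = (int)x.size();
    double mx = 0, my = 0;
    for (int i = 0; i < n; i++) { mx += rx[i]; my += ry[i]; }
    mx /= n; my /= n;
    double sxy = 0, sxx = 0, syy = 0;
    for (int i = 0; i < n; i++)
    {
        sxy += (rx[i] - mx) * (ry[i] - my);
        sxx += (rx[i] - mx) * (rx[i] - mx);
        syy += (ry[i] - my) * (ry[i] - my);
    }
    if (sxx == 0 || syy == 0) return 0.0;     // degenerate sample
    return sxy / std::sqrt(sxx * syy);
}
```

Because any strictly monotone relation yields identical rank vectors, the example samples x = squares and y = 0..9 give a Spearman coefficient of exactly 1.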

/************************************************************************* Spearman's rank correlation matrix ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: X - array[N,M], sample matrix: * J-th column corresponds to J-th variable * I-th row corresponds to I-th observation N - N>=0, number of observations: * if given, only leading N rows of X are used * if not given, automatically determined from input size M - M>0, number of variables: * if given, only leading M columns of X are used * if not given, automatically determined from input size OUTPUT PARAMETERS: C - array[M,M], correlation matrix (zero if N=0 or N=1) -- ALGLIB -- Copyright 28.10.2010 by Bochkanov Sergey *************************************************************************/
void spearmancorrm(const real_2d_array &x, const ae_int_t n, const ae_int_t m, real_2d_array &c, const xparams _xparams = alglib::xdefault); void spearmancorrm(const real_2d_array &x, real_2d_array &c, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* Spearman's rank cross-correlation matrix ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: X - array[N,M1], sample matrix: * J-th column corresponds to J-th variable * I-th row corresponds to I-th observation Y - array[N,M2], sample matrix: * J-th column corresponds to J-th variable * I-th row corresponds to I-th observation N - N>=0, number of observations: * if given, only leading N rows of X/Y are used * if not given, automatically determined from input sizes M1 - M1>0, number of variables in X: * if given, only leading M1 columns of X are used * if not given, automatically determined from input size M2 - M2>0, number of variables in Y: * if given, only leading M2 columns of Y are used * if not given, automatically determined from input size OUTPUT PARAMETERS: C - array[M1,M2], cross-correlation matrix (zero if N=0 or N=1) -- ALGLIB -- Copyright 28.10.2010 by Bochkanov Sergey *************************************************************************/
void spearmancorrm2(const real_2d_array &x, const real_2d_array &y, const ae_int_t n, const ae_int_t m1, const ae_int_t m2, real_2d_array &c, const xparams _xparams = alglib::xdefault); void spearmancorrm2(const real_2d_array &x, const real_2d_array &y, real_2d_array &c, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* Obsolete function; we recommend using SpearmanCorr2() instead. -- ALGLIB -- Copyright 09.04.2007 by Bochkanov Sergey *************************************************************************/
double spearmanrankcorrelation(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, const xparams _xparams = alglib::xdefault);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "statistics.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        real_1d_array x = "[0,1,4,9,16,25,36,49,64,81]";
        double mean;
        double variance;
        double skewness;
        double kurtosis;
        double adev;
        double p;
        double v;

        //
        // Here we demonstrate calculation of sample moments
        // (mean, variance, skewness, kurtosis)
        //
        samplemoments(x, mean, variance, skewness, kurtosis);
        printf("%.1f\n", double(mean)); // EXPECTED: 28.5
        printf("%.4f\n", double(variance)); // EXPECTED: 801.1667
        printf("%.4f\n", double(skewness)); // EXPECTED: 0.5751
        printf("%.4f\n", double(kurtosis)); // EXPECTED: -1.2666

        //
        // Average deviation
        //
        sampleadev(x, adev);
        printf("%.1f\n", double(adev)); // EXPECTED: 23.2

        //
        // Median and percentile
        //
        samplemedian(x, v);
        printf("%.1f\n", double(v)); // EXPECTED: 20.5
        p = 0.5;
        samplepercentile(x, p, v);
        printf("%.1f\n", double(v)); // EXPECTED: 20.5
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "statistics.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // We have two samples - x and y, and want to measure dependency between them
        //
        real_1d_array x = "[0,1,4,9,16,25,36,49,64,81]";
        real_1d_array y = "[0,1,2,3,4,5,6,7,8,9]";
        double v;

        //
        // Three dependency measures are calculated:
        // * covariation
        // * Pearson correlation
        // * Spearman rank correlation
        //
        v = cov2(x, y);
        printf("%.1f\n", double(v)); // EXPECTED: 82.5
        v = pearsoncorr2(x, y);
        printf("%.4f\n", double(v)); // EXPECTED: 0.9627
        v = spearmancorr2(x, y);
        printf("%.3f\n", double(v)); // EXPECTED: 1.000
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "statistics.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // X is a sample matrix:
        // * I-th row corresponds to I-th observation
        // * J-th column corresponds to J-th variable
        //
        real_2d_array x = "[[1,0,1],[1,1,0],[-1,1,0],[-2,-1,1],[-1,0,9]]";
        real_2d_array c;

        //
        // Three dependency measures are calculated:
        // * covariation
        // * Pearson correlation
        // * Spearman rank correlation
        //
        // Result is stored into C, with C[i,j] equal to correlation
        // (covariance) between I-th and J-th variables of X.
        //
        covm(x, c);
        printf("%s\n", c.tostring(1).c_str()); // EXPECTED: [[1.80,0.60,-1.40],[0.60,0.70,-0.80],[-1.40,-0.80,14.70]]
        pearsoncorrm(x, c);
        printf("%s\n", c.tostring(1).c_str()); // EXPECTED: [[1.000,0.535,-0.272],[0.535,1.000,-0.249],[-0.272,-0.249,1.000]]
        spearmancorrm(x, c);
        printf("%s\n", c.tostring(1).c_str()); // EXPECTED: [[1.000,0.556,-0.306],[0.556,1.000,-0.750],[-0.306,-0.750,1.000]]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "statistics.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // X and Y are sample matrices:
        // * I-th row corresponds to I-th observation
        // * J-th column corresponds to J-th variable
        //
        real_2d_array x = "[[1,0,1],[1,1,0],[-1,1,0],[-2,-1,1],[-1,0,9]]";
        real_2d_array y = "[[2,3],[2,1],[-1,6],[-9,9],[7,1]]";
        real_2d_array c;

        //
        // Three dependency measures are calculated:
        // * covariation
        // * Pearson correlation
        // * Spearman rank correlation
        //
        // Result is stored into C, with C[i,j] equal to correlation
        // (covariance) between I-th variable of X and J-th variable of Y.
        //
        covm2(x, y, c);
        printf("%s\n", c.tostring(1).c_str()); // EXPECTED: [[4.100,-3.250],[2.450,-1.500],[13.450,-5.750]]
        pearsoncorrm2(x, y, c);
        printf("%s\n", c.tostring(1).c_str()); // EXPECTED: [[0.519,-0.699],[0.497,-0.518],[0.596,-0.433]]
        spearmancorrm2(x, y, c);
        printf("%s\n", c.tostring(1).c_str()); // EXPECTED: [[0.541,-0.649],[0.216,-0.433],[0.433,-0.135]]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

dsoptimalsplit2
dsoptimalsplit2fast
/************************************************************************* Optimal binary classification. The algorithm finds the optimal (i.e. with minimal cross-entropy) binary partition. Internal subroutine. INPUT PARAMETERS: A - array[0..N-1], variable C - array[0..N-1], class numbers (0 or 1). N - array size OUTPUT PARAMETERS: Info - completion code: * -3, all values of A[] are the same (partition is impossible) * -2, one of C[] is incorrect (<0, >1) * -1, incorrect parameters were passed (N<=0). * 1, OK Threshold- partition boundary. Left part contains values which are strictly less than Threshold. Right part contains values which are greater than or equal to Threshold. PAL, PBL- probabilities P(0|v<Threshold) and P(1|v<Threshold) PAR, PBR- probabilities P(0|v>=Threshold) and P(1|v>=Threshold) CVE - cross-validation estimate of cross-entropy -- ALGLIB -- Copyright 22.05.2008 by Bochkanov Sergey *************************************************************************/
void dsoptimalsplit2(const real_1d_array &a, const integer_1d_array &c, const ae_int_t n, ae_int_t &info, double &threshold, double &pal, double &pbl, double &par, double &pbr, double &cve, const xparams _xparams = alglib::xdefault);
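The minimal-cross-entropy split described above can be sketched in standalone C++: sort the attribute values, scan every boundary between distinct neighbors, and keep the threshold minimizing the total binary cross-entropy of the two parts. This is an illustration only; the midpoint threshold is one possible choice, and ALGLIB's dsoptimalsplit2 additionally reports per-side class probabilities and a cross-validation estimate:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <numeric>
#include <vector>

// Scan all candidate thresholds (midpoints between distinct sorted values)
// and pick the one minimizing total binary cross-entropy of the two parts.
// Illustrative sketch, not the ALGLIB implementation.
double optimal_split2(std::vector<double> a, std::vector<int> c)
{
    const int n = (int)a.size();
    std::vector<int> idx(n);
    std::iota(idx.begin(), idx.end(), 0);
    std::sort(idx.begin(), idx.end(), [&](int i, int j){ return a[i] < a[j]; });

    // total * H(p) in nats for a part with n1 samples of class 1
    auto entropy = [](int n1, int total) {
        if (total == 0) return 0.0;
        double e = 0.0;
        for (int k : {n1, total - n1})
            if (k > 0) e -= k * std::log((double)k / total);
        return e;
    };

    int total1 = std::accumulate(c.begin(), c.end(), 0); // count of class 1
    int left = 0, left1 = 0;
    double best = 1e300, threshold = a[idx[0]];
    for (int i = 0; i + 1 < n; i++)
    {
        left++;
        left1 += c[idx[i]];
        if (a[idx[i]] == a[idx[i + 1]]) continue;        // tie: not a valid boundary
        double e = entropy(left1, left) + entropy(total1 - left1, n - left);
        if (e < best)
        {
            best = e;
            threshold = 0.5 * (a[idx[i]] + a[idx[i + 1]]);
        }
    }
    return threshold;
}
```

For a = {1,2,3,10,11,12} with classes {0,0,0,1,1,1}, the perfectly separating boundary between 3 and 10 yields zero cross-entropy, so the sketch returns 6.5.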
/************************************************************************* Optimal partition, internal subroutine. Fast version. Accepts: A array[0..N-1] array of attributes C array[0..N-1] array of class labels TiesBuf array[0..N] temporaries (ties) CntBuf array[0..2*NC-1] temporaries (counts) Alpha centering factor (0<=alpha<=1, recommended value - 0.05) BufR array[0..N-1] temporaries BufI array[0..N-1] temporaries Output: Info error code (">0"=OK, "<0"=bad) Threshold partition boundary RMS training set RMS error CVRMS leave-one-out RMS error Note: content of all arrays is changed by the subroutine; it doesn't allocate temporaries. -- ALGLIB -- Copyright 11.12.2008 by Bochkanov Sergey *************************************************************************/
void dsoptimalsplit2fast(real_1d_array &a, integer_1d_array &c, integer_1d_array &tiesbuf, integer_1d_array &cntbuf, real_1d_array &bufr, integer_1d_array &bufi, const ae_int_t n, const ae_int_t nc, const double alpha, ae_int_t &info, double &threshold, double &rms, double &cvrms, const xparams _xparams = alglib::xdefault);
rmatrixbdsvd
/************************************************************************* Singular value decomposition of a bidiagonal matrix (extended algorithm) ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial version of ALGLIB includes one important improvement of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Generally, commercial ALGLIB is several times faster than open-source ! generic C edition, and many times faster than open-source C# edition. ! ! Multithreaded acceleration is NOT supported for this function. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. The algorithm performs the singular value decomposition of a bidiagonal matrix B (upper or lower), representing it as B = Q*S*P^T, where Q and P are orthogonal matrices and S is a diagonal matrix with non-negative elements on the main diagonal, in descending order. The algorithm finds singular values. In addition, the algorithm can calculate matrices Q and P (more precisely, not the matrices, but their products with given matrices U and VT: U*Q and (P^T)*VT). Of course, matrices U and VT can be of any type, including identity. Furthermore, the algorithm can calculate Q'*C (this product is calculated more efficiently than U*Q, because this calculation operates with rows instead of matrix columns). A key feature of the algorithm is its ability to find all singular values, including those which are arbitrarily close to 0, with relative accuracy close to machine precision. 
If the parameter IsFractionalAccuracyRequired is set to True, all singular values will have high relative accuracy close to machine precision. If the parameter is set to False, only the biggest singular value will have relative accuracy close to machine precision. The absolute error of other singular values is equal to the absolute error of the biggest singular value. Input parameters: D - main diagonal of matrix B. Array whose index ranges within [0..N-1]. E - superdiagonal (or subdiagonal) of matrix B. Array whose index ranges within [0..N-2]. N - size of matrix B. IsUpper - True, if the matrix is upper bidiagonal. IsFractionalAccuracyRequired - THIS PARAMETER IS IGNORED SINCE ALGLIB 3.5.0; SINGULAR VALUES ARE ALWAYS SEARCHED WITH HIGH ACCURACY. U - matrix to be multiplied by Q. Array whose indexes range within [0..NRU-1, 0..N-1]. The matrix can be bigger, in that case only the submatrix [0..NRU-1, 0..N-1] will be multiplied by Q. NRU - number of rows in matrix U. C - matrix to be multiplied by Q'. Array whose indexes range within [0..N-1, 0..NCC-1]. The matrix can be bigger, in that case only the submatrix [0..N-1, 0..NCC-1] will be multiplied by Q'. NCC - number of columns in matrix C. VT - matrix to be multiplied by P^T. Array whose indexes range within [0..N-1, 0..NCVT-1]. The matrix can be bigger, in that case only the submatrix [0..N-1, 0..NCVT-1] will be multiplied by P^T. NCVT - number of columns in matrix VT. Output parameters: D - singular values of matrix B in descending order. U - if NRU>0, contains matrix U*Q. VT - if NCVT>0, contains matrix (P^T)*VT. C - if NCC>0, contains matrix Q'*C. Result: True, if the algorithm has converged. False, if the algorithm hasn't converged (rare case). NOTE: multiplication U*Q is performed by means of transposition to internal buffer, multiplication and backward transposition. This helps to avoid costly columnwise operations and speeds up the algorithm. 
Additional information: The type of convergence is controlled by the internal parameter TOL. If the parameter is greater than 0, the singular values will have relative accuracy TOL. If TOL<0, the singular values will have absolute accuracy ABS(TOL)*norm(B). By default, |TOL| falls within the range of 10*Epsilon and 100*Epsilon, where Epsilon is the machine precision. It is not recommended to use TOL less than 10*Epsilon, since this will considerably slow down the algorithm and may not reduce the error. History: * 31 March, 2007. Changed MAXITR from 6 to 12. -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University October 31, 1999. *************************************************************************/
bool rmatrixbdsvd(real_1d_array &d, const real_1d_array &e, const ae_int_t n, const bool isupper, const bool isfractionalaccuracyrequired, real_2d_array &u, const ae_int_t nru, real_2d_array &c, const ae_int_t ncc, real_2d_array &vt, const ae_int_t ncvt, const xparams _xparams = alglib::xdefault);
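The singular values produced by a bidiagonal SVD can be verified by hand in small cases, since they are the square roots of the eigenvalues of B^T*B. For a 2x2 upper bidiagonal matrix this has a closed form; the following standalone sketch (not ALGLIB code, helper name hypothetical) is a convenient cross-check:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Singular values of the 2x2 upper bidiagonal matrix
//     B = [ d0  e0 ]
//         [  0  d1 ]
// via the eigenvalues of B^T*B (closed form for the 2x2 case).
// Useful as a hand check of bidiagonal SVD routines; not ALGLIB code.
void bidiag2x2_singular_values(double d0, double e0, double d1,
                               double& smax, double& smin)
{
    // B^T*B = [ d0^2     d0*e0     ]
    //         [ d0*e0    e0^2+d1^2 ]
    double t = d0 * d0 + e0 * e0 + d1 * d1;   // trace of B^T*B
    double det = d0 * d0 * d1 * d1;           // det(B^T*B) = (d0*d1)^2
    double disc = std::sqrt(std::max(0.0, t * t - 4.0 * det));
    smax = std::sqrt(0.5 * (t + disc));
    smin = std::sqrt(std::max(0.0, 0.5 * (t - disc)));
}
```

With d0=3, e0=0, d1=2 the matrix is already diagonal and the sketch returns the singular values 3 and 2, in descending order as rmatrixbdsvd does.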
besseli0
besseli1
besselj0
besselj1
besseljn
besselk0
besselk1
besselkn
bessely0
bessely1
besselyn
/************************************************************************* Modified Bessel function of order zero Returns modified Bessel function of order zero of the argument. The function is defined as i0(x) = j0( ix ). The range is partitioned into the two intervals [0,8] and (8, infinity). Chebyshev polynomial expansions are employed in each interval. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,30 30000 5.8e-16 1.4e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/
double besseli0(const double x, const xparams _xparams = alglib::xdefault);
/************************************************************************* Modified Bessel function of order one Returns modified Bessel function of order one of the argument. The function is defined as i1(x) = -i j1( ix ). The range is partitioned into the two intervals [0,8] and (8, infinity). Chebyshev polynomial expansions are employed in each interval. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0, 30 30000 1.9e-15 2.1e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1985, 1987, 2000 by Stephen L. Moshier *************************************************************************/
double besseli1(const double x, const xparams _xparams = alglib::xdefault);
/************************************************************************* Bessel function of order zero Returns Bessel function of order zero of the argument. The domain is divided into the intervals [0, 5] and (5, infinity). In the first interval the following rational approximation is used: (w - r1^2) * (w - r2^2) * P3(w) / Q8(w), where w = x^2 and the two r's are zeros of the function. In the second interval, the Hankel asymptotic expansion is employed with two rational functions of degree 6/6 and 7/7. ACCURACY: Absolute error: arithmetic domain # trials peak rms IEEE 0, 30 60000 4.2e-16 1.1e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/
double besselj0(const double x, const xparams _xparams = alglib::xdefault);
/************************************************************************* Bessel function of order one Returns Bessel function of order one of the argument. The domain is divided into the intervals [0, 8] and (8, infinity). In the first interval a 24 term Chebyshev expansion is used. In the second, the asymptotic trigonometric representation is employed using two rational functions of degree 5/5. ACCURACY: Absolute error: arithmetic domain # trials peak rms IEEE 0, 30 30000 2.6e-16 1.1e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/
double besselj1(const double x, const xparams _xparams = alglib::xdefault);
/************************************************************************* Bessel function of integer order Returns Bessel function of order n, where n is a (possibly negative) integer. The ratio of jn(x) to j0(x) is computed by backward recurrence. First the ratio jn/jn-1 is found by a continued fraction expansion. Then the recurrence relating successive orders is applied until j0 or j1 is reached. If n = 0 or 1 the routine for j0 or j1 is called directly. ACCURACY: Absolute error: arithmetic range # trials peak rms IEEE 0, 30 5000 4.4e-16 7.9e-17 Not suitable for large n or x. Use jv() (fractional order) instead. Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/
double besseljn(const ae_int_t n, const double x, const xparams _xparams = alglib::xdefault);
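The backward recurrence mentioned above can be sketched with Miller's classical algorithm: start well above the requested order with an arbitrary tiny seed, recur downward with J(k-1) = (2k/x)*J(k) - J(k+1), and normalize using the identity J0(x) + 2*J2(x) + 2*J4(x) + ... = 1. This standalone sketch illustrates the idea for moderate x and n >= 0; it is not ALGLIB's implementation:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Miller's backward-recurrence sketch for J_n(x), n >= 0, moderate x.
// Downward recurrence is numerically stable for J_n, whereas forward
// recurrence is not. Illustration only, not ALGLIB's implementation.
double bessel_jn_miller(int n, double x)
{
    if (x == 0.0) return (n == 0) ? 1.0 : 0.0;
    int m = n + 20 + (int)(2.0 * x);     // start order well above n and x
    if (m % 2) m++;                      // even start keeps bookkeeping simple
    std::vector<double> J(m + 2, 0.0);
    J[m + 1] = 0.0;
    J[m] = 1e-30;                        // arbitrary tiny seed
    for (int k = m; k >= 1; k--)
        J[k - 1] = (2.0 * k / x) * J[k] - J[k + 1];   // backward recurrence
    double norm = J[0];
    for (int k = 2; k <= m; k += 2)
        norm += 2.0 * J[k];              // J0 + 2*J2 + 2*J4 + ... = 1
    return J[n] / norm;
}
```

At x = 1 this reproduces J0(1) = 0.76520, J1(1) = 0.44005 and J2(1) = 0.11490 to high accuracy, because the arbitrary seed's contribution decays rapidly during the downward sweep.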
/************************************************************************* Modified Bessel function, second kind, order zero Returns modified Bessel function of the second kind of order zero of the argument. The range is partitioned into the two intervals [0,8] and (8, infinity). Chebyshev polynomial expansions are employed in each interval. ACCURACY: Tested at 2000 random points between 0 and 8. Peak absolute error (relative when K0 > 1) was 1.46e-14; rms, 4.26e-15. Relative error: arithmetic domain # trials peak rms IEEE 0, 30 30000 1.2e-15 1.6e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/
double besselk0(const double x, const xparams _xparams = alglib::xdefault);
/************************************************************************* Modified Bessel function, second kind, order one Computes the modified Bessel function of the second kind of order one of the argument. The range is partitioned into the two intervals [0,2] and (2, infinity). Chebyshev polynomial expansions are employed in each interval. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0, 30 30000 1.2e-15 1.6e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/
double besselk1(const double x, const xparams _xparams = alglib::xdefault);
/************************************************************************* Modified Bessel function, second kind, integer order Returns modified Bessel function of the second kind of order n of the argument. The range is partitioned into the two intervals [0,9.55] and (9.55, infinity). An ascending power series is used in the low range, and an asymptotic expansion in the high range. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,30 90000 1.8e-8 3.0e-10 Error is high only near the crossover point x = 9.55 between the two expansions used. Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1988, 2000 by Stephen L. Moshier *************************************************************************/
double besselkn(const ae_int_t nn, const double x, const xparams _xparams = alglib::xdefault);
/************************************************************************* Bessel function of the second kind, order zero Returns Bessel function of the second kind, of order zero, of the argument. The domain is divided into the intervals [0, 5] and (5, infinity). In the first interval a rational approximation R(x) is employed to compute y0(x) = R(x) + 2 * log(x) * j0(x) / PI. Thus a call to j0() is required. In the second interval, the Hankel asymptotic expansion is employed with two rational functions of degree 6/6 and 7/7. ACCURACY: Absolute error, when y0(x) < 1; else relative error: arithmetic domain # trials peak rms IEEE 0, 30 30000 1.3e-15 1.6e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/
double bessely0(const double x, const xparams _xparams = alglib::xdefault);
/************************************************************************* Bessel function of second kind of order one Returns Bessel function of the second kind of order one of the argument. The domain is divided into the intervals [0, 8] and (8, infinity). In the first interval a 25 term Chebyshev expansion is used, and a call to j1() is required. In the second, the asymptotic trigonometric representation is employed using two rational functions of degree 5/5. ACCURACY: Absolute error: arithmetic domain # trials peak rms IEEE 0, 30 30000 1.0e-15 1.3e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/
double bessely1(const double x, const xparams _xparams = alglib::xdefault);
/************************************************************************* Bessel function of second kind of integer order Returns Bessel function of order n, where n is a (possibly negative) integer. The function is evaluated by forward recurrence on n, starting with values computed by the routines y0() and y1(). If n = 0 or 1 the routine for y0 or y1 is called directly. ACCURACY: Absolute error, except relative when y > 1: arithmetic domain # trials peak rms IEEE 0, 30 30000 3.4e-15 4.3e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/
double besselyn(const ae_int_t n, const double x, const xparams _xparams = alglib::xdefault);
beta
/************************************************************************* Beta function: beta(a,b) = Gamma(a)*Gamma(b) / Gamma(a+b). For large arguments the logarithm of the function is evaluated using lgam(), then exponentiated. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,30 30000 8.1e-14 1.1e-14 Cephes Math Library Release 2.0: April, 1987 Copyright 1984, 1987 by Stephen L. Moshier *************************************************************************/
double beta(const double a, const double b, const xparams _xparams = alglib::xdefault);
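The log-gamma route described above (evaluate the logarithm, then exponentiate) can be sketched directly with the standard library; this is an illustration for positive a and b, not ALGLIB source:

```cpp
#include <cassert>
#include <cmath>

// beta(a,b) = Gamma(a)*Gamma(b)/Gamma(a+b), evaluated through log-gamma
// so that large arguments do not overflow intermediate Gamma values.
// Sketch for positive a and b only; not ALGLIB source.
double beta_lgamma(double a, double b)
{
    return std::exp(std::lgamma(a) + std::lgamma(b) - std::lgamma(a + b));
}
```

For example, beta(2,3) = Gamma(2)*Gamma(3)/Gamma(5) = 1*2/24 = 1/12.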
binomialcdistribution
binomialdistribution
invbinomialdistribution
/*************************************************************************
Complemented binomial distribution

Returns the sum of the terms k+1 through n of the binomial probability
density:

      n
     --   ( n )   j      n-j
     >    (   )  p  (1-p)
     --   ( j )
    j=k+1

The terms are not summed directly; instead the incomplete beta integral
is employed, according to the formula

    y = bdtrc(k, n, p) = incbet(k+1, n-k, p).

The arguments must be positive, with p ranging from 0 to 1.

ACCURACY:

Tested at random points (a,b,p).

              a,b                     Relative error:
arithmetic  domain     # trials      peak         rms
 For p between 0.001 and 1:
   IEEE     0,100       100000      6.7e-15     8.2e-16
 For p between 0 and .001:
   IEEE     0,100       100000      1.5e-13     2.7e-15

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier
*************************************************************************/
double binomialcdistribution(const ae_int_t k, const ae_int_t n, const double p, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Binomial distribution

Returns the sum of the terms 0 through k of the binomial probability
density:

      k
     --   ( n )   j      n-j
     >    (   )  p  (1-p)
     --   ( j )
     j=0

The terms are not summed directly; instead the incomplete beta integral
is employed, according to the formula

    y = bdtr(k, n, p) = incbet(n-k, k+1, 1-p).

The arguments must be positive, with p ranging from 0 to 1.

ACCURACY:

Tested at random points (a,b,p), with p between 0 and 1.

              a,b                     Relative error:
arithmetic  domain     # trials      peak         rms
 For p between 0.001 and 1:
   IEEE     0,100       100000      4.3e-15     2.6e-16

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier
*************************************************************************/
double binomialdistribution(const ae_int_t k, const ae_int_t n, const double p, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Inverse binomial distribution

Finds the event probability p such that the sum of the terms 0 through k
of the binomial probability density is equal to the given cumulative
probability y.

This is accomplished using the inverse beta integral function and the
relation

    1 - p = incbi(n-k, k+1, y).

ACCURACY:

Tested at random points (a,b,p).

              a,b                     Relative error:
arithmetic  domain     # trials      peak         rms
 For p between 0.001 and 1:
   IEEE     0,100       100000      2.3e-14     6.4e-16
   IEEE     0,10000     100000      6.6e-12     1.2e-13
 For p between 10^-6 and 0.001:
   IEEE     0,100       100000      2.0e-12     1.3e-14
   IEEE     0,10000     100000      1.5e-12     3.2e-14

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier
*************************************************************************/
double invbinomialdistribution(const ae_int_t k, const ae_int_t n, const double y, const xparams _xparams = alglib::xdefault);
chebyshevcalculate
chebyshevcoefficients
chebyshevsum
fromchebyshev
/*************************************************************************
Calculation of the value of the Chebyshev polynomials of the first and
second kinds.

Parameters:
    r   -   polynomial kind, either 1 or 2.
    n   -   degree, n>=0
    x   -   argument, -1 <= x <= 1

Result:
    the value of the Chebyshev polynomial at x
*************************************************************************/
double chebyshevcalculate(const ae_int_t r, const ae_int_t n, const double x, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Representation of Tn as C[0] + C[1]*X + ... + C[N]*X^N

Input parameters:
    N   -   polynomial degree, n>=0

Output parameters:
    C   -   coefficients
*************************************************************************/
void chebyshevcoefficients(const ae_int_t n, real_1d_array &c, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Summation of Chebyshev polynomials using Clenshaw's recurrence formula.

This routine calculates
    c[0]*T0(x) + c[1]*T1(x) + ... + c[N]*TN(x)
or
    c[0]*U0(x) + c[1]*U1(x) + ... + c[N]*UN(x)
depending on r.

Parameters:
    r   -   polynomial kind, either 1 or 2.
    n   -   degree, n>=0
    x   -   argument

Result:
    the value of the Chebyshev sum at x
*************************************************************************/
double chebyshevsum(const real_1d_array &c, const ae_int_t r, const ae_int_t n, const double x, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Conversion of a series of Chebyshev polynomials to a power series.

Represents A[0]*T0(x) + A[1]*T1(x) + ... + A[N]*Tn(x) as
B[0] + B[1]*X + ... + B[N]*X^N.

Input parameters:
    A   -   Chebyshev series coefficients
    N   -   degree, N>=0

Output parameters:
    B   -   power series coefficients
*************************************************************************/
void fromchebyshev(const real_1d_array &a, const ae_int_t n, real_1d_array &b, const xparams _xparams = alglib::xdefault);
chisquarecdistribution
chisquaredistribution
invchisquaredistribution
/*************************************************************************
Complemented Chi-square distribution

Returns the area under the right hand tail (from x to infinity) of the
Chi square probability density function with v degrees of freedom:

    P(x|v) = 1/(2^(v/2)*Gamma(v/2)) * Integral[x..inf] t^(v/2-1) * exp(-t/2) dt

where x is the Chi-square variable.

The incomplete gamma integral is used, according to the formula

    y = chdtr(v, x) = igamc(v/2.0, x/2.0).

The arguments must both be positive.

ACCURACY:

See incomplete gamma function

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1987, 2000 by Stephen L. Moshier
*************************************************************************/
double chisquarecdistribution(const double v, const double x, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Chi-square distribution

Returns the area under the left hand tail (from 0 to x) of the Chi square
probability density function with v degrees of freedom:

    P(x|v) = 1/(2^(v/2)*Gamma(v/2)) * Integral[0..x] t^(v/2-1) * exp(-t/2) dt

where x is the Chi-square variable.

The incomplete gamma integral is used, according to the formula

    y = chdtr(v, x) = igam(v/2.0, x/2.0).

The arguments must both be positive.

ACCURACY:

See incomplete gamma function

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1987, 2000 by Stephen L. Moshier
*************************************************************************/
double chisquaredistribution(const double v, const double x, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Inverse of complemented Chi-square distribution

Finds the Chi-square argument x such that the integral from x to infinity
of the Chi-square density is equal to the given cumulative probability y.

This is accomplished using the inverse gamma integral function and the
relation

    x/2 = igami(df/2, y);

ACCURACY:

See inverse incomplete gamma function

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1987, 2000 by Stephen L. Moshier
*************************************************************************/
double invchisquaredistribution(const double v, const double y, const xparams _xparams = alglib::xdefault);
ahcreport
clusterizerstate
kmeansreport
clusterizercreate
clusterizergetdistances
clusterizergetkclusters
clusterizerrunahc
clusterizerrunkmeans
clusterizerseparatedbycorr
clusterizerseparatedbydist
clusterizersetahcalgo
clusterizersetdistances
clusterizersetkmeansinit
clusterizersetkmeanslimits
clusterizersetpoints
clusterizersetseed
clst_ahc Simple hierarchical clusterization with Euclidean distance function
clst_distance Clusterization with different metric types
clst_kclusters Obtaining K top clusters from clusterization tree
clst_kmeans Simple k-means clusterization
clst_linkage Clusterization with different linkage types
/*************************************************************************
This structure is used to store results of the agglomerative hierarchical
clustering (AHC).

Following information is returned:

* TerminationType - completion code:
  * 1 for successful completion of the algorithm
  * -5 inappropriate combination of clustering algorithm and distance
       function was used. As for now, it is possible only when Ward's
       method is called for a dataset with non-Euclidean distance
       function.
  In case a negative completion code is returned, other fields of the
  report structure are invalid and should not be used.
* NPoints contains the number of points in the original dataset
* Z contains information about merges performed (see below). Z contains
  indexes from the original (unsorted) dataset and it can be used when
  you need to know what points were merged. However, it is not convenient
  when you want to build a dendrogram (see below).
* if you want to build a dendrogram, you can use Z, but it is not a good
  option, because Z contains indexes from the unsorted dataset. A
  dendrogram built from such a dataset is likely to have intersections.
  So, you have to reorder your points before building the dendrogram. The
  permutation which reorders points is returned in P. Another
  representation of merges, which is more convenient for dendrogram
  construction, is returned in PM.
* more information on the format of Z, P and PM can be found below and in
  the examples from ALGLIB Reference Manual.

FORMAL DESCRIPTION OF FIELDS:
    NPoints         number of points
    Z               array[NPoints-1,2], contains indexes of clusters
                    linked in pairs to form the clustering tree. The I-th
                    row corresponds to the I-th merge:
                    * Z[I,0] - index of the first cluster to merge
                    * Z[I,1] - index of the second cluster to merge
                    * Z[I,0]<Z[I,1]
                    * clusters are numbered from 0 to 2*NPoints-2, with
                      indexes from 0 to NPoints-1 corresponding to points
                      of the original dataset, and indexes from NPoints
                      to 2*NPoints-2 corresponding to clusters generated
                      by subsequent merges (the I-th row of Z creates the
                      cluster with index NPoints+I).
                    IMPORTANT: indexes in Z[] are indexes in the
                    ORIGINAL, unsorted dataset. In addition to Z, the
                    algorithm outputs a permutation which rearranges
                    points in such a way that subsequent merges are
                    performed on adjacent points (such an order is needed
                    if you want to build a dendrogram). However, indexes
                    in Z are related to the original, unrearranged
                    sequence of points.
    P               array[NPoints], permutation which reorders points for
                    dendrogram construction. P[i] contains the index of
                    the position where we should move the I-th point of
                    the original dataset in order to apply merges PZ/PM.
    PZ              same as Z, but for the permutation of points given by
                    P. The only thing which changed are the indexes of
                    the original points; the indexes of clusters remained
                    the same.
    MergeDist       array[NPoints-1], contains distances between clusters
                    being merged (MergeDist[i] corresponds to the merge
                    stored in Z[i,...]):
                    * CLINK, SLINK and average linkage algorithms report
                      the "raw", unmodified distance metric.
                    * Ward's method reports the weighted intra-cluster
                      variance, which is equal to
                      ||Ca-Cb||^2 * Sa*Sb/(Sa+Sb). Here A and B are the
                      clusters being merged, Ca is the center of A, Cb is
                      the center of B, Sa is the size of A, Sb is the
                      size of B.
    PM              array[NPoints-1,6], another representation of merges,
                    which is suited for dendrogram construction. It deals
                    with the rearranged points (permutation P is applied)
                    and represents merges in a form which differs from
                    the one used by Z. For each I from 0 to NPoints-2,
                    the I-th row of PM represents a merge performed on
                    two clusters C0 and C1. Here:
                    * C0 contains points with indexes PM[I,0]...PM[I,1]
                    * C1 contains points with indexes PM[I,2]...PM[I,3]
                    * indexes stored in PM are given for the dataset
                      sorted according to permutation P
                    * PM[I,1]=PM[I,2]-1 (only adjacent clusters are
                      merged)
                    * PM[I,0]<=PM[I,1], PM[I,2]<=PM[I,3], i.e. both
                      clusters contain at least one point
                    * heights of the "subdendrograms" corresponding to
                      C0/C1 are stored in PM[I,4] and PM[I,5].
                      Subdendrograms corresponding to single-point
                      clusters have height=0. The dendrogram of the merge
                      result has height H=max(H0,H1)+1.

NOTE: there is one-to-one correspondence between merges described by Z
      and PM. The I-th row of Z describes the same merge of clusters as
      the I-th row of PM, with the "left" cluster from Z corresponding to
      the "left" one from PM.

  -- ALGLIB --
     Copyright 10.07.2012 by Bochkanov Sergey
*************************************************************************/
class ahcreport
{
public:
    ahcreport();
    ahcreport(const ahcreport &rhs);
    ahcreport& operator=(const ahcreport &rhs);
    virtual ~ahcreport();
    ae_int_t terminationtype;
    ae_int_t npoints;
    integer_1d_array p;
    integer_2d_array z;
    integer_2d_array pz;
    integer_2d_array pm;
    real_1d_array mergedist;
};
/*************************************************************************
This structure is a clusterization engine.

You should not try to access its fields directly.
Use ALGLIB functions in order to work with this object.

  -- ALGLIB --
     Copyright 10.07.2012 by Bochkanov Sergey
*************************************************************************/
class clusterizerstate
{
public:
    clusterizerstate();
    clusterizerstate(const clusterizerstate &rhs);
    clusterizerstate& operator=(const clusterizerstate &rhs);
    virtual ~clusterizerstate();
};
/*************************************************************************
This structure is used to store results of the k-means clustering
algorithm.

Following information is always returned:
* NPoints contains the number of points in the original dataset
* TerminationType contains the completion code, negative on failure,
  positive on success
* K contains the number of clusters

For positive TerminationType we return:
* NFeatures contains the number of variables in the original dataset
* C, which contains centers found by the algorithm
* CIdx, which maps points of the original dataset to clusters

FORMAL DESCRIPTION OF FIELDS:
    NPoints         number of points, >=0
    NFeatures       number of variables, >=1
    TerminationType completion code:
                    * -5 if distance type is anything different from
                         Euclidean metric
                    * -3 for degenerate dataset: a) less than K distinct
                         points, b) K=0 for non-empty dataset.
                    * +1 for successful completion
    K               number of clusters
    C               array[K,NFeatures], rows of the array store centers
    CIdx            array[NPoints], which contains cluster indexes
    IterationsCount actual number of iterations performed by the
                    clusterizer. If the algorithm performed more than one
                    random restart, the total number of iterations is
                    returned.
    Energy          merit function, "energy", sum of squared deviations
                    from cluster centers

  -- ALGLIB --
     Copyright 27.11.2012 by Bochkanov Sergey
*************************************************************************/
class kmeansreport
{
public:
    kmeansreport();
    kmeansreport(const kmeansreport &rhs);
    kmeansreport& operator=(const kmeansreport &rhs);
    virtual ~kmeansreport();
    ae_int_t npoints;
    ae_int_t nfeatures;
    ae_int_t terminationtype;
    ae_int_t iterationscount;
    double energy;
    ae_int_t k;
    real_2d_array c;
    integer_1d_array cidx;
};
/*************************************************************************
This function initializes a clusterizer object. A newly initialized
object is empty, i.e. it does not contain a dataset. You should use it as
follows:
1. creation
2. dataset is added with ClusterizerSetPoints()
3. additional parameters are set
4. clusterization is performed with one of the clustering functions

  -- ALGLIB --
     Copyright 10.07.2012 by Bochkanov Sergey
*************************************************************************/
void clusterizercreate(clusterizerstate &s, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  [5]  

/*************************************************************************
This function returns distance matrix for dataset

INPUT PARAMETERS:
    XY      -   array[NPoints,NFeatures], dataset
    NPoints -   number of points, >=0
    NFeatures-  number of features, >=1
    DistType-   distance function:
                *  0    Chebyshev distance  (L-inf norm)
                *  1    city block distance (L1 norm)
                *  2    Euclidean distance  (L2 norm, non-squared)
                * 10    Pearson correlation:
                        dist(a,b) = 1-corr(a,b)
                * 11    Absolute Pearson correlation:
                        dist(a,b) = 1-|corr(a,b)|
                * 12    Uncentered Pearson correlation (cosine of the
                        angle): dist(a,b) = a'*b/(|a|*|b|)
                * 13    Absolute uncentered Pearson correlation
                        dist(a,b) = |a'*b|/(|a|*|b|)
                * 20    Spearman rank correlation:
                        dist(a,b) = 1-rankcorr(a,b)
                * 21    Absolute Spearman rank correlation
                        dist(a,b) = 1-|rankcorr(a,b)|

OUTPUT PARAMETERS:
    D       -   array[NPoints,NPoints], distance matrix (full matrix is
                returned, with lower and upper triangles)

NOTE:  different distance functions have different performance penalty:
       * Euclidean or Pearson correlation distances are the fastest ones
       * Spearman correlation distance function is a bit slower
       * city block and Chebyshev distances are an order of magnitude
         slower

       The reason behind the difference in performance is that
       correlation-based distance functions are computed using optimized
       linear algebra kernels, while Chebyshev and city block distance
       functions are computed using simple nested loops with two branches
       at each iteration.

! FREE EDITION OF ALGLIB:
!
! Free Edition of ALGLIB supports following important features for this
! function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual in order to find out how to activate SIMD support
! in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes following important improvements
! of this function:
! * high-performance native backend with same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend you to read 'Working with commercial version' section of
! ALGLIB Reference Manual in order to find out how to use performance-
! related features provided by commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 10.07.2012 by Bochkanov Sergey
*************************************************************************/
void clusterizergetdistances(const real_2d_array &xy, const ae_int_t npoints, const ae_int_t nfeatures, const ae_int_t disttype, real_2d_array &d, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function takes as input a clusterization report Rep, desired
clusters count K, and builds the top K clusters from the hierarchical
clusterization tree. It returns the assignment of points to clusters
(array of cluster indexes).

INPUT PARAMETERS:
    Rep     -   report from ClusterizerRunAHC() performed on XY
    K       -   desired number of clusters, 1<=K<=NPoints.
                K can be zero only when NPoints=0.

OUTPUT PARAMETERS:
    CIdx    -   array[NPoints], I-th element contains cluster index (from
                0 to K-1) for I-th point of the dataset.
    CZ      -   array[K]. This array allows to convert cluster indexes
                returned by this function to indexes used by Rep.Z. J-th
                cluster returned by this function corresponds to CZ[J]-th
                cluster stored in Rep.Z/PZ/PM. It is guaranteed that
                CZ[I]<CZ[I+1].

NOTE: K clusters built by this subroutine are assumed to have no
      hierarchy. Although they were obtained by manipulation with the top
      K nodes of the dendrogram (i.e. hierarchical decomposition of the
      dataset), this function does not return information about the
      hierarchy. Each of the clusters stands on its own.

NOTE: Cluster indexes returned by this function do not correspond to
      indexes returned in Rep.Z/PZ/PM. Either you work with the
      hierarchical representation of the dataset (dendrogram), or you
      work with the "flat" representation returned by this function. Each
      of the representations has its own clusters indexing system (the
      former uses [0, 2*NPoints-2], while the latter uses [0..K-1]),
      although it is possible to perform conversion from one system to
      another by means of the CZ array, returned by this function, which
      allows you to convert indexes stored in CIdx to the numeration
      system used by Rep.Z.

NOTE: this subroutine is optimized for moderate values of K. Say, for
      K=5 it will perform many times faster than for K=100. Its
      worst-case performance is O(N*K), although in average case it
      performs better (up to O(N*log(K))).

  -- ALGLIB --
     Copyright 10.07.2012 by Bochkanov Sergey
*************************************************************************/
void clusterizergetkclusters(const ahcreport &rep, const ae_int_t k, integer_1d_array &cidx, integer_1d_array &cz, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/*************************************************************************
This function performs agglomerative hierarchical clustering.

NOTE: Agglomerative hierarchical clustering algorithm has two phases:
      distance matrix calculation and clustering itself. Only the first
      phase (distance matrix calculation) is accelerated by SIMD and SMP.
      Thus, acceleration is significant only for medium or
      high-dimensional problems.

      Although activating multithreading gives some speedup over
      single-threaded execution, you should not expect nearly-linear
      scaling with respect to cores count.

INPUT PARAMETERS:
    S       -   clusterizer state, initialized by ClusterizerCreate()

OUTPUT PARAMETERS:
    Rep     -   clustering results; see description of AHCReport
                structure for more information.

NOTE 1: hierarchical clustering algorithms require large amounts of
        memory. In particular, this implementation needs
        sizeof(double)*NPoints^2 bytes, which are used to store the
        distance matrix. In case we work with a user-supplied matrix,
        this amount is multiplied by 2 (we have to store the original
        matrix and to work with its copy). For example, a problem with
        10000 points would require 800M of RAM, even when working in a
        1-dimensional space.

! FREE EDITION OF ALGLIB:
!
! Free Edition of ALGLIB supports following important features for this
! function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual in order to find out how to activate SIMD support
! in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes following important improvements
! of this function:
! * high-performance native backend with same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend you to read 'Working with commercial version' section of
! ALGLIB Reference Manual in order to find out how to use performance-
! related features provided by commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 10.07.2012 by Bochkanov Sergey
*************************************************************************/
void clusterizerrunahc(clusterizerstate &s, ahcreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  [5]  

/*************************************************************************
This function performs clustering by the k-means++ algorithm.

You may change algorithm properties by calling:
* ClusterizerSetKMeansLimits() to change number of restarts or iterations
* ClusterizerSetKMeansInit() to change the initialization algorithm

By default, one restart and an unlimited number of iterations are used.
The initialization algorithm is chosen automatically.

NOTE: k-means clustering algorithm has two phases: selection of initial
      centers and clustering itself. ALGLIB parallelizes both phases.
      Parallel version is optimized for the following scenario: medium or
      high-dimensional problem (8 or more dimensions) with large number
      of points and clusters. However, some speed-up can be obtained even
      when assumptions above are violated.

INPUT PARAMETERS:
    S       -   clusterizer state, initialized by ClusterizerCreate()
    K       -   number of clusters, K>=0.
                K can be zero only when the algorithm is called for an
                empty dataset, in this case completion code is set to
                success (+1). If K=0 and dataset size is non-zero, we can
                not meaningfully assign points to some center (there are
                no centers because K=0) and return -3 as completion code
                (failure).

OUTPUT PARAMETERS:
    Rep     -   clustering results; see description of KMeansReport
                structure for more information.

NOTE 1: k-means clustering can be performed only for datasets with
        Euclidean distance function. The algorithm will return a negative
        completion code in Rep.TerminationType in case the dataset was
        added to the clusterizer with DistType other than Euclidean (or
        the dataset was specified by distance matrix instead of
        explicitly given points).

NOTE 2: by default, k-means uses a non-deterministic seed to initialize
        the RNG which is used to select initial centers. As a result,
        each run of the algorithm may return different values. If you
        need deterministic behavior, use the ClusterizerSetSeed()
        function.

! FREE EDITION OF ALGLIB:
!
! Free Edition of ALGLIB supports following important features for this
! function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual in order to find out how to activate SIMD support
! in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes following important improvements
! of this function:
! * high-performance native backend with same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend you to read 'Working with commercial version' section of
! ALGLIB Reference Manual in order to find out how to use performance-
! related features provided by commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 10.07.2012 by Bochkanov Sergey
*************************************************************************/
void clusterizerrunkmeans(clusterizerstate &s, const ae_int_t k, kmeansreport &rep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function accepts an AHC report Rep, desired maximum intercluster
correlation, and returns the top clusters from the hierarchical
clusterization tree which are separated by correlation R or LOWER. It
returns the assignment of points to clusters (array of cluster indexes).

There is one more function with similar name - ClusterizerSeparatedByDist,
which returns clusters with intercluster distance equal to R or HIGHER
(note: higher for distance, lower for correlation).

INPUT PARAMETERS:
    Rep     -   report from ClusterizerRunAHC() performed on XY
    R       -   desired maximum intercluster correlation, -1<=R<=+1

OUTPUT PARAMETERS:
    K       -   number of clusters, 1<=K<=NPoints
    CIdx    -   array[NPoints], I-th element contains cluster index (from
                0 to K-1) for I-th point of the dataset.
    CZ      -   array[K]. This array allows to convert cluster indexes
                returned by this function to indexes used by Rep.Z. J-th
                cluster returned by this function corresponds to CZ[J]-th
                cluster stored in Rep.Z/PZ/PM. It is guaranteed that
                CZ[I]<CZ[I+1].

NOTE: K clusters built by this subroutine are assumed to have no
      hierarchy. Although they were obtained by manipulation with the top
      K nodes of the dendrogram (i.e. hierarchical decomposition of the
      dataset), this function does not return information about the
      hierarchy. Each of the clusters stands on its own.

NOTE: Cluster indexes returned by this function do not correspond to
      indexes returned in Rep.Z/PZ/PM. Either you work with the
      hierarchical representation of the dataset (dendrogram), or you
      work with the "flat" representation returned by this function. Each
      of the representations has its own clusters indexing system (the
      former uses [0, 2*NPoints-2], while the latter uses [0..K-1]),
      although it is possible to perform conversion from one system to
      another by means of the CZ array, returned by this function, which
      allows you to convert indexes stored in CIdx to the numeration
      system used by Rep.Z.

NOTE: this subroutine is optimized for moderate values of K. Say, for
      K=5 it will perform many times faster than for K=100. Its
      worst-case performance is O(N*K), although in average case it
      performs better (up to O(N*log(K))).

  -- ALGLIB --
     Copyright 10.07.2012 by Bochkanov Sergey
*************************************************************************/
void clusterizerseparatedbycorr(const ahcreport &rep, const double r, ae_int_t &k, integer_1d_array &cidx, integer_1d_array &cz, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function accepts an AHC report Rep, desired minimum intercluster
distance, and returns the top clusters from the hierarchical
clusterization tree which are separated by distance R or HIGHER. It
returns the assignment of points to clusters (array of cluster indexes).

There is one more function with similar name - ClusterizerSeparatedByCorr,
which returns clusters with intercluster correlation equal to R or LOWER
(note: higher for distance, lower for correlation).

INPUT PARAMETERS:
    Rep     -   report from ClusterizerRunAHC() performed on XY
    R       -   desired minimum intercluster distance, R>=0

OUTPUT PARAMETERS:
    K       -   number of clusters, 1<=K<=NPoints
    CIdx    -   array[NPoints], I-th element contains cluster index (from
                0 to K-1) for I-th point of the dataset.
    CZ      -   array[K]. This array allows to convert cluster indexes
                returned by this function to indexes used by Rep.Z. J-th
                cluster returned by this function corresponds to CZ[J]-th
                cluster stored in Rep.Z/PZ/PM. It is guaranteed that
                CZ[I]<CZ[I+1].

NOTE: K clusters built by this subroutine are assumed to have no
      hierarchy. Although they were obtained by manipulation with the top
      K nodes of the dendrogram (i.e. hierarchical decomposition of the
      dataset), this function does not return information about the
      hierarchy. Each of the clusters stands on its own.

NOTE: Cluster indexes returned by this function do not correspond to
      indexes returned in Rep.Z/PZ/PM. Either you work with the
      hierarchical representation of the dataset (dendrogram), or you
      work with the "flat" representation returned by this function. Each
      of the representations has its own clusters indexing system (the
      former uses [0, 2*NPoints-2], while the latter uses [0..K-1]),
      although it is possible to perform conversion from one system to
      another by means of the CZ array, returned by this function, which
      allows you to convert indexes stored in CIdx to the numeration
      system used by Rep.Z.

NOTE: this subroutine is optimized for moderate values of K. Say, for
      K=5 it will perform many times faster than for K=100. Its
      worst-case performance is O(N*K), although in average case it
      performs better (up to O(N*log(K))).

  -- ALGLIB --
     Copyright 10.07.2012 by Bochkanov Sergey
*************************************************************************/
void clusterizerseparatedbydist(const ahcreport &rep, const double r, ae_int_t &k, integer_1d_array &cidx, integer_1d_array &cz, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets the agglomerative hierarchical clustering algorithm

INPUT PARAMETERS:
    S       -   clusterizer state, initialized by ClusterizerCreate()
    Algo    -   algorithm type:
                * 0     complete linkage (default algorithm)
                * 1     single linkage
                * 2     unweighted average linkage
                * 3     weighted average linkage
                * 4     Ward's method

NOTE: Ward's method works correctly only with Euclidean distance, that's
      why the algorithm will return a negative termination code (failure)
      for any other distance type.

      It is possible, however, to use this method with a user-supplied
      distance matrix. It is your responsibility to pass one which was
      calculated with the Euclidean distance function.

  -- ALGLIB --
     Copyright 10.07.2012 by Bochkanov Sergey
*************************************************************************/
void clusterizersetahcalgo(clusterizerstate &s, const ae_int_t algo, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This function adds dataset given by distance matrix to the clusterizer
structure. It is important that dataset is not given explicitly - only
distance matrix is given.

This function overrides all previous calls of ClusterizerSetPoints() or
ClusterizerSetDistances().

INPUT PARAMETERS:
    S       -   clusterizer state, initialized by ClusterizerCreate()
    D       -   array[NPoints,NPoints], distance matrix given by its upper
                or lower triangle (main diagonal is ignored because its
                entries are expected to be zero).
    NPoints -   number of points
    IsUpper -   whether upper or lower triangle of D is given.

NOTE 1: different clustering algorithms have different limitations:
        * agglomerative hierarchical clustering algorithms may be used
          with any kind of distance metric, including one which is given
          by distance matrix
        * k-means++ clustering algorithm may be used only with Euclidean
          distance function and explicitly given points - it cannot be
          used with dataset given by distance matrix
        Thus, if you call this function, you will be unable to use k-means
        clustering algorithm to process your problem.

  -- ALGLIB --
     Copyright 10.07.2012 by Bochkanov Sergey
*************************************************************************/
void clusterizersetdistances(clusterizerstate &s, const real_2d_array &d, const ae_int_t npoints, const bool isupper, const xparams _xparams = alglib::xdefault);
void clusterizersetdistances(clusterizerstate &s, const real_2d_array &d, const bool isupper, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This function sets k-means initialization algorithm. Several different
algorithms can be chosen, including k-means++.

INPUT PARAMETERS:
    S       -   clusterizer state, initialized by ClusterizerCreate()
    InitAlgo-   initialization algorithm:
                * 0     automatic selection (different versions of ALGLIB
                        may select different algorithms)
                * 1     random initialization
                * 2     k-means++ initialization (best quality of initial
                        centers, but long non-parallelizable initialization
                        phase with bad cache locality)
                * 3     "fast-greedy" algorithm with efficient, easy to
                        parallelize initialization. Quality of initial
                        centers is somewhat worse than that of k-means++.
                        This algorithm is a default one in the current
                        version of ALGLIB.
                *-1     "debug" algorithm which always selects first K rows
                        of dataset; this algorithm is used for debug
                        purposes only. Do not use it in the industrial code!

  -- ALGLIB --
     Copyright 21.01.2015 by Bochkanov Sergey
*************************************************************************/
void clusterizersetkmeansinit(clusterizerstate &s, const ae_int_t initalgo, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets k-means properties: number of restarts and maximum
number of iterations per one run.

INPUT PARAMETERS:
    S       -   clusterizer state, initialized by ClusterizerCreate()
    Restarts-   restarts count, >=1. k-means++ algorithm performs several
                restarts and chooses best set of centers (one with minimum
                squared distance).
    MaxIts  -   maximum number of k-means iterations performed during one
                run. >=0, zero value means that algorithm performs
                unlimited number of iterations.

  -- ALGLIB --
     Copyright 10.07.2012 by Bochkanov Sergey
*************************************************************************/
void clusterizersetkmeanslimits(clusterizerstate &s, const ae_int_t restarts, const ae_int_t maxits, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function adds dataset to the clusterizer structure.

This function overrides all previous calls of ClusterizerSetPoints() or
ClusterizerSetDistances().

INPUT PARAMETERS:
    S       -   clusterizer state, initialized by ClusterizerCreate()
    XY      -   array[NPoints,NFeatures], dataset
    NPoints -   number of points, >=0
    NFeatures-  number of features, >=1
    DistType-   distance function:
                *  0    Chebyshev distance  (L-inf norm)
                *  1    city block distance (L1 norm)
                *  2    Euclidean distance  (L2 norm), non-squared
                * 10    Pearson correlation:
                        dist(a,b) = 1-corr(a,b)
                * 11    Absolute Pearson correlation:
                        dist(a,b) = 1-|corr(a,b)|
                * 12    Uncentered Pearson correlation (cosine of the
                        angle): dist(a,b) = a'*b/(|a|*|b|)
                * 13    Absolute uncentered Pearson correlation
                        dist(a,b) = |a'*b|/(|a|*|b|)
                * 20    Spearman rank correlation:
                        dist(a,b) = 1-rankcorr(a,b)
                * 21    Absolute Spearman rank correlation
                        dist(a,b) = 1-|rankcorr(a,b)|

NOTE 1: different distance functions have different performance penalty:
        * Euclidean or Pearson correlation distances are the fastest ones
        * Spearman correlation distance function is a bit slower
        * city block and Chebyshev distances are order of magnitude slower

        The reason behind difference in performance is that correlation-
        based distance functions are computed using optimized linear
        algebra kernels, while Chebyshev and city block distance functions
        are computed using simple nested loops with two branches at each
        iteration.

NOTE 2: different clustering algorithms have different limitations:
        * agglomerative hierarchical clustering algorithms may be used
          with any kind of distance metric
        * k-means++ clustering algorithm may be used only with Euclidean
          distance function

        Thus, list of specific clustering algorithms you may use depends
        on distance function you specify when you set your dataset.

  -- ALGLIB --
     Copyright 10.07.2012 by Bochkanov Sergey
*************************************************************************/
void clusterizersetpoints(clusterizerstate &s, const real_2d_array &xy, const ae_int_t npoints, const ae_int_t nfeatures, const ae_int_t disttype, const xparams _xparams = alglib::xdefault);
void clusterizersetpoints(clusterizerstate &s, const real_2d_array &xy, const ae_int_t disttype, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This function sets seed which is used to initialize internal RNG. By
default, deterministic seed is used - same for each run of clusterizer. If
you specify non-deterministic seed value, then some algorithms which
depend on random initialization (in current version: k-means) may return
slightly different results after each run.

INPUT PARAMETERS:
    S       -   clusterizer state, initialized by ClusterizerCreate()
    Seed    -   seed:
                * positive values = use deterministic seed for each run of
                  algorithms which depend on random initialization
                * zero or negative values = use non-deterministic seed

  -- ALGLIB --
     Copyright 08.06.2017 by Bochkanov Sergey
*************************************************************************/
void clusterizersetseed(clusterizerstate &s, const ae_int_t seed, const xparams _xparams = alglib::xdefault);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // The very simple clusterization example
        //
        // We have a set of points in 2D space:
        //     (P0,P1,P2,P3,P4) = ((1,1),(1,2),(4,1),(2,3),(4,1.5))
        //
        //  |
        //  |     P3
        //  |
        //  | P1          
        //  |             P4
        //  | P0          P2
        //  |-------------------------
        //
        // We want to perform Agglomerative Hierarchic Clusterization (AHC),
        // using complete linkage (default algorithm) and Euclidean distance
        // (default metric).
        //
        // In order to do that, we:
        // * create clusterizer with clusterizercreate()
        // * set points XY and metric (2=Euclidean) with clusterizersetpoints()
        // * run AHC algorithm with clusterizerrunahc
        //
        // You may see that clusterization itself is a minor part of the example,
        // most of which is dominated by comments :)
        //
        clusterizerstate s;
        ahcreport rep;
        real_2d_array xy = "[[1,1],[1,2],[4,1],[2,3],[4,1.5]]";

        clusterizercreate(s);
        clusterizersetpoints(s, xy, 2);
        clusterizerrunahc(s, rep);

        //
        // Now we've built our clusterization tree. Rep.z contains information which
        // is required to build dendrogram. I-th row of rep.z represents one merge
        // operation, with first cluster to merge having index rep.z[I,0] and second
        // one having index rep.z[I,1]. Merge result has index NPoints+I.
        //
        // Clusters with indexes less than NPoints are single-point initial clusters,
        // while ones with indexes from NPoints to 2*NPoints-2 are multi-point
        // clusters created during merges.
        //
        // In our example, Z=[[2,4], [0,1], [3,6], [5,7]]
        //
        // It means that:
        // * first, we merge C2=(P2) and C4=(P4),    and create C5=(P2,P4)
        // * then, we merge  C0=(P0) and C1=(P1),    and create C6=(P0,P1)
        // * then, we merge  C3=(P3) and C6=(P0,P1), and create C7=(P0,P1,P3)
        // * finally, we merge C5 and C7 and create C8=(P0,P1,P2,P3,P4)
        //
        // Thus, we have following dendrogram:
        //  
        //      ------8-----
        //      |          |
        //      |      ----7----
        //      |      |       |
        //   ---5---   |    ---6---
        //   |     |   |    |     |
        //   P2   P4   P3   P0   P1
        //
        printf("%s\n", rep.z.tostring().c_str()); // EXPECTED: [[2,4],[0,1],[3,6],[5,7]]

        //
        // We've built dendrogram above by reordering our dataset.
        //
        // Without such reordering it would be impossible to build dendrogram without
        // intersections. Luckily, ahcreport structure contains two additional fields
        // which help to build dendrogram from your data:
        // * rep.p, which contains permutation applied to dataset
        // * rep.pm, which contains another representation of merges 
        //
        // In our example we have:
        // * P=[3,4,0,2,1]
        // * PZ=[[0,0,1,1,0,0],[3,3,4,4,0,0],[2,2,3,4,0,1],[0,1,2,4,1,2]]
        //
        // Permutation array P tells us that P0 should be moved to position 3,
        // P1 moved to position 4, P2 moved to position 0 and so on:
        //
        //   (P0 P1 P2 P3 P4) => (P2 P4 P3 P0 P1)
        //
        // Merges array PZ tells us how to perform merges on the sorted dataset.
        // One row of PZ corresponds to one merge operation, with first pair of
        // elements denoting first of the clusters to merge (start index, end
        // index) and next pair of elements denoting second of the clusters to
        // merge. Clusters being merged are always adjacent, with first one on
        // the left and second one on the right.
        //
        // For example, first row of PZ tells us that clusters [0,0] and [1,1] are
        // merged (single-point clusters, with first one containing P2 and second
        // one containing P4). Third row of PZ tells us that we merge one single-
        // point cluster [2,2] with one two-point cluster [3,4].
        //
        // There are two more elements in each row of PZ. These are the helper
        // elements, which denote HEIGHT (not size) of left and right subdendrograms.
        // For example, according to PZ, first two merges are performed on clusterization
        // trees of height 0, while next two merges are performed on 0-1 and 1-2
        // pairs of trees correspondingly.
        //
        printf("%s\n", rep.p.tostring().c_str()); // EXPECTED: [3,4,0,2,1]
        printf("%s\n", rep.pm.tostring().c_str()); // EXPECTED: [[0,0,1,1,0,0],[3,3,4,4,0,0],[2,2,3,4,0,1],[0,1,2,4,1,2]]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // We have three points in 4D space:
        //     (P0,P1,P2) = ((1, 2, 1, 2), (6, 7, 6, 7), (7, 6, 7, 6))
        //
        // We want to try clustering them with different distance functions.
        // Distance function is chosen when we add dataset to the clusterizer.
        // We can choose several distance types - Euclidean, city block, Chebyshev,
        // several correlation measures or user-supplied distance matrix.
        //
        // Here we'll try three distances: Euclidean, Pearson correlation,
        // user-supplied distance matrix. Different distance functions lead
        // to different choices being made by algorithm during clustering.
        //
        clusterizerstate s;
        ahcreport rep;
        ae_int_t disttype;
        real_2d_array xy = "[[1, 2, 1, 2], [6, 7, 6, 7], [7, 6, 7, 6]]";
        clusterizercreate(s);

        // With Euclidean distance function (disttype=2) two closest points
        // are P1 and P2, thus:
        // * first, we merge P1 and P2 to form C3=[P1,P2]
        // * second, we merge P0 and C3 to form C4=[P0,P1,P2]
        disttype = 2;
        clusterizersetpoints(s, xy, disttype);
        clusterizerrunahc(s, rep);
        printf("%s\n", rep.z.tostring().c_str()); // EXPECTED: [[1,2],[0,3]]

        // With Pearson correlation distance function (disttype=10) situation
        // is different - distance between P0 and P1 is zero, thus:
        // * first, we merge P0 and P1 to form C3=[P0,P1]
        // * second, we merge P2 and C3 to form C4=[P0,P1,P2]
        disttype = 10;
        clusterizersetpoints(s, xy, disttype);
        clusterizerrunahc(s, rep);
        printf("%s\n", rep.z.tostring().c_str()); // EXPECTED: [[0,1],[2,3]]

        // Finally, we try clustering with user-supplied distance matrix:
        //     [ 0 3 1 ]
        // P = [ 3 0 3 ], where P[i,j] = dist(Pi,Pj)
        //     [ 1 3 0 ]
        //
        // * first, we merge P0 and P2 to form C3=[P0,P2]
        // * second, we merge P1 and C3 to form C4=[P0,P1,P2]
        real_2d_array d = "[[0,3,1],[3,0,3],[1,3,0]]";
        clusterizersetdistances(s, d, true);
        clusterizerrunahc(s, rep);
        printf("%s\n", rep.z.tostring().c_str()); // EXPECTED: [[0,2],[1,3]]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // We have a set of points in 2D space:
        //     (P0,P1,P2,P3,P4) = ((1,1),(1,2),(4,1),(2,3),(4,1.5))
        //
        //  |
        //  |     P3
        //  |
        //  | P1          
        //  |             P4
        //  | P0          P2
        //  |-------------------------
        //
        // We perform Agglomerative Hierarchic Clusterization (AHC) and we want
        // to get top K clusters from clusterization tree for different K.
        //
        clusterizerstate s;
        ahcreport rep;
        real_2d_array xy = "[[1,1],[1,2],[4,1],[2,3],[4,1.5]]";
        integer_1d_array cidx;
        integer_1d_array cz;

        clusterizercreate(s);
        clusterizersetpoints(s, xy, 2);
        clusterizerrunahc(s, rep);

        // with K=5, every point is assigned to its own cluster:
        // C0=P0, C1=P1 and so on...
        clusterizergetkclusters(rep, 5, cidx, cz);
        printf("%s\n", cidx.tostring().c_str()); // EXPECTED: [0,1,2,3,4]

        // with K=1 we have one large cluster C0=[P0,P1,P2,P3,P4]
        clusterizergetkclusters(rep, 1, cidx, cz);
        printf("%s\n", cidx.tostring().c_str()); // EXPECTED: [0,0,0,0,0]

        // with K=3 we have three clusters C0=[P3], C1=[P2,P4], C2=[P0,P1]
        clusterizergetkclusters(rep, 3, cidx, cz);
        printf("%s\n", cidx.tostring().c_str()); // EXPECTED: [2,2,1,0,1]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // The very simple clusterization example
        //
        // We have a set of points in 2D space:
        //     (P0,P1,P2,P3,P4) = ((1,1),(1,2),(4,1),(2,3),(4,1.5))
        //
        //  |
        //  |     P3
        //  |
        //  | P1          
        //  |             P4
        //  | P0          P2
        //  |-------------------------
        //
        // We want to perform k-means++ clustering with K=2.
        //
        // In order to do that, we:
        // * create clusterizer with clusterizercreate()
        // * set points XY and metric (must be Euclidean, disttype=2) with clusterizersetpoints()
        // * (optional) set number of restarts from random positions to 5
        // * run k-means algorithm with clusterizerrunkmeans()
        //
        // You may see that clusterization itself is a minor part of the example,
        // most of which is dominated by comments :)
        //
        clusterizerstate s;
        kmeansreport rep;
        real_2d_array xy = "[[1,1],[1,2],[4,1],[2,3],[4,1.5]]";

        clusterizercreate(s);
        clusterizersetpoints(s, xy, 2);
        clusterizersetkmeanslimits(s, 5, 0);
        clusterizerrunkmeans(s, 2, rep);

        //
        // We've performed clusterization, and it succeeded (completion code is +1).
        //
        // Now first center is stored in the first row of rep.c, second one is stored
        // in the second row. rep.cidx can be used to determine which center is
        // closest to some specific point of the dataset.
        //
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1

        // We called clusterizersetpoints() with disttype=2 because k-means++
        // algorithm does NOT support metrics other than Euclidean. But what if we
        // try to use some other metric?
        //
        // We change metric type by calling clusterizersetpoints() one more time,
        // and try to run k-means algo again. It fails.
        //
        clusterizersetpoints(s, xy, 0);
        clusterizerrunkmeans(s, 2, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: -5
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // We have a set of points in 1D space:
        //     (P0,P1,P2,P3,P4) = (1, 3, 10, 16, 20)
        //
        // We want to perform Agglomerative Hierarchic Clusterization (AHC),
        // using either complete or single linkage and Euclidean distance
        // (default metric).
        //
        // First two steps merge P0/P1 and P3/P4 independently of the linkage type.
        // However, third step depends on linkage type being used:
        // * in case of complete linkage P2=10 is merged with [P0,P1]
        // * in case of single linkage P2=10 is merged with [P3,P4]
        //
        clusterizerstate s;
        ahcreport rep;
        real_2d_array xy = "[[1],[3],[10],[16],[20]]";
        integer_1d_array cidx;
        integer_1d_array cz;

        clusterizercreate(s);
        clusterizersetpoints(s, xy, 2);

        // use complete linkage, reduce set down to 2 clusters.
        // print clusterization with clusterizergetkclusters(2).
        // P2 must belong to [P0,P1]
        clusterizersetahcalgo(s, 0);
        clusterizerrunahc(s, rep);
        clusterizergetkclusters(rep, 2, cidx, cz);
        printf("%s\n", cidx.tostring().c_str()); // EXPECTED: [1,1,1,0,0]

        // use single linkage, reduce set down to 2 clusters.
        // print clusterization with clusterizergetkclusters(2).
        // P2 must belong to [P3,P4]
        clusterizersetahcalgo(s, 1);
        clusterizerrunahc(s, rep);
        clusterizergetkclusters(rep, 2, cidx, cz);
        printf("%s\n", cidx.tostring().c_str()); // EXPECTED: [0,0,1,1,1]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

convc1d
convc1dbuf
convc1dcircular
convc1dcircularbuf
convc1dcircularinv
convc1dcircularinvbuf
convc1dinv
convc1dinvbuf
convr1d
convr1dbuf
convr1dcircular
convr1dcircularbuf
convr1dcircularinv
convr1dcircularinvbuf
convr1dinv
convr1dinvbuf
/*************************************************************************
1-dimensional complex convolution.

For given A/B returns conv(A,B) (non-circular). Subroutine can
automatically choose between three implementations: straightforward O(M*N)
formula for very small N (or M), overlap-add algorithm for cases where
max(M,N) is significantly larger than min(M,N), but O(M*N) algorithm is
too slow, and general FFT-based formula for cases where two previous
algorithms are too slow.

Algorithm has max(M,N)*log(max(M,N)) complexity for any M/N.

INPUT PARAMETERS
    A   -   array[M] - complex function to be transformed
    M   -   problem size
    B   -   array[N] - complex function to be transformed
    N   -   problem size

OUTPUT PARAMETERS
    R   -   convolution: A*B. array[N+M-1]

NOTE: It is assumed that A is zero at T<0, B is zero too. If one or both
      functions have non-zero values at negative T's, you can still use
      this subroutine - just shift its result correspondingly.

NOTE: there is a buffered version of this function, ConvC1DBuf(), which
      can reuse space previously allocated in its output parameter R.

  -- ALGLIB --
     Copyright 21.07.2009 by Bochkanov Sergey
*************************************************************************/
void convc1d(const complex_1d_array &a, const ae_int_t m, const complex_1d_array &b, const ae_int_t n, complex_1d_array &r, const xparams _xparams = alglib::xdefault);
/*************************************************************************
1-dimensional complex convolution, buffered version of ConvC1D(), which
does not reallocate R[] if its length is enough to store the result (i.e.
it reuses previously allocated memory as much as possible).

  -- ALGLIB --
     Copyright 30.11.2023 by Bochkanov Sergey
*************************************************************************/
void convc1dbuf(const complex_1d_array &a, const ae_int_t m, const complex_1d_array &b, const ae_int_t n, complex_1d_array &r, const xparams _xparams = alglib::xdefault);
/*************************************************************************
1-dimensional circular complex convolution.

For given S/R returns conv(S,R) (circular). Algorithm has linearithmic
complexity for any M/N.

IMPORTANT: normal convolution is commutative, i.e. it is symmetric -
conv(A,B)=conv(B,A). Cyclic convolution IS NOT. One function - S - is a
signal, periodic function, and another - R - is a response, non-periodic
function with limited length.

INPUT PARAMETERS
    S   -   array[M] - complex periodic signal
    M   -   problem size
    R   -   array[N] - complex non-periodic response
    N   -   problem size

OUTPUT PARAMETERS
    C   -   convolution: S*R. array[M].

NOTE: It is assumed that R is zero at T<0. If it has non-zero values at
      negative T's, you can still use this subroutine - just shift its
      result correspondingly.

NOTE: there is a buffered version of this function, ConvC1DCircularBuf(),
      which can reuse space previously allocated in its output parameter C.

  -- ALGLIB --
     Copyright 21.07.2009 by Bochkanov Sergey
*************************************************************************/
void convc1dcircular(const complex_1d_array &s, const ae_int_t m, const complex_1d_array &r, const ae_int_t n, complex_1d_array &c, const xparams _xparams = alglib::xdefault);
/*************************************************************************
1-dimensional circular complex convolution. Buffered version of
ConvC1DCircular(), which does not reallocate C[] if its length is enough
to store the result (i.e. it reuses previously allocated memory as much as
possible).

  -- ALGLIB --
     Copyright 30.11.2023 by Bochkanov Sergey
*************************************************************************/
void convc1dcircularbuf(const complex_1d_array &s, const ae_int_t m, const complex_1d_array &r, const ae_int_t n, complex_1d_array &c, const xparams _xparams = alglib::xdefault);
/*************************************************************************
1-dimensional circular complex deconvolution (inverse of ConvC1DCircular()).

Algorithm has M*log(M) complexity for any M (composite or prime).

INPUT PARAMETERS
    A   -   array[0..M-1] - convolved periodic signal, A = conv(R, B)
    M   -   convolved signal length
    B   -   array[0..N-1] - non-periodic response
    N   -   response length

OUTPUT PARAMETERS
    R   -   deconvolved signal. array[0..M-1].

NOTE: deconvolution is unstable process and may result in division by zero
      (if your response function is degenerate, i.e. has zero Fourier
      coefficient).

NOTE: It is assumed that B is zero at T<0. If it has non-zero values at
      negative T's, you can still use this subroutine - just shift its
      result correspondingly.

NOTE: there is a buffered version of this function, ConvC1DCircularInvBuf(),
      which can reuse space previously allocated in its output parameter R.

  -- ALGLIB --
     Copyright 21.07.2009 by Bochkanov Sergey
*************************************************************************/
void convc1dcircularinv(const complex_1d_array &a, const ae_int_t m, const complex_1d_array &b, const ae_int_t n, complex_1d_array &r, const xparams _xparams = alglib::xdefault);
/*************************************************************************
1-dimensional circular complex deconvolution (inverse of ConvC1DCircular()).
Buffered version of ConvC1DCircularInv(), which does not reallocate R[] if
its length is enough to store the result (i.e. it reuses previously
allocated memory as much as possible).

  -- ALGLIB --
     Copyright 30.11.2023 by Bochkanov Sergey
*************************************************************************/
void convc1dcircularinvbuf(const complex_1d_array &a, const ae_int_t m, const complex_1d_array &b, const ae_int_t n, complex_1d_array &r, const xparams _xparams = alglib::xdefault);
/*************************************************************************
1-dimensional complex non-circular deconvolution (inverse of ConvC1D()).

Algorithm has M*log(M) complexity for any M (composite or prime).

INPUT PARAMETERS
    A   -   array[0..M-1] - convolved signal, A = conv(R, B)
    M   -   convolved signal length
    B   -   array[0..N-1] - response
    N   -   response length, N<=M

OUTPUT PARAMETERS
    R   -   deconvolved signal. array[0..M-N].

NOTE: deconvolution is unstable process and may result in division by zero
      (if your response function is degenerate, i.e. has zero Fourier
      coefficient).

NOTE: It is assumed that A is zero at T<0, B is zero too. If one or both
      functions have non-zero values at negative T's, you can still use
      this subroutine - just shift its result correspondingly.

NOTE: there is a buffered version of this function, ConvC1DInvBuf(), which
      can reuse space previously allocated in its output parameter R.

  -- ALGLIB --
     Copyright 21.07.2009 by Bochkanov Sergey
*************************************************************************/
void convc1dinv(const complex_1d_array &a, const ae_int_t m, const complex_1d_array &b, const ae_int_t n, complex_1d_array &r, const xparams _xparams = alglib::xdefault);
/*************************************************************************
1-dimensional complex non-circular deconvolution (inverse of ConvC1D()).
A buffered version, which does not reallocate R[] if its length is enough
to store the result (i.e. it reuses previously allocated memory as much as
possible).

  -- ALGLIB --
     Copyright 30.11.2023 by Bochkanov Sergey
*************************************************************************/
void convc1dinvbuf(const complex_1d_array &a, const ae_int_t m, const complex_1d_array &b, const ae_int_t n, complex_1d_array &r, const xparams _xparams = alglib::xdefault);
/*************************************************************************
1-dimensional real convolution.

Analogous to ConvC1D(), see ConvC1D() comments for more details.

INPUT PARAMETERS
    A   -   array[0..M-1] - real function to be transformed
    M   -   problem size
    B   -   array[0..N-1] - real function to be transformed
    N   -   problem size

OUTPUT PARAMETERS
    R   -   convolution: A*B. array[0..N+M-2].

NOTE: It is assumed that A is zero at T<0, B is zero too. If one or both
      functions have non-zero values at negative T's, you can still use
      this subroutine - just shift its result correspondingly.

NOTE: there is a buffered version of this function, ConvR1DBuf(), which
      can reuse space previously allocated in its output parameter R.

  -- ALGLIB --
     Copyright 21.07.2009 by Bochkanov Sergey
*************************************************************************/
void convr1d(const real_1d_array &a, const ae_int_t m, const real_1d_array &b, const ae_int_t n, real_1d_array &r, const xparams _xparams = alglib::xdefault);
/*************************************************************************
1-dimensional real convolution. Buffered version of ConvR1D(), which does
not reallocate R[] if its length is enough to store the result (i.e. it
reuses previously allocated memory as much as possible).

  -- ALGLIB --
     Copyright 30.11.2023 by Bochkanov Sergey
*************************************************************************/
void convr1dbuf(const real_1d_array &a, const ae_int_t m, const real_1d_array &b, const ae_int_t n, real_1d_array &r, const xparams _xparams = alglib::xdefault);
/*************************************************************************
1-dimensional circular real convolution.

Analogous to ConvC1DCircular(), see ConvC1DCircular() comments for more
details.

INPUT PARAMETERS
    S   -   array[0..M-1] - real signal
    M   -   problem size
    R   -   array[0..N-1] - real response
    N   -   problem size

OUTPUT PARAMETERS
    C   -   convolution: S*R. array[0..M-1].

NOTE: It is assumed that R is zero at T<0. If it has non-zero values at
      negative T's, you can still use this subroutine - just shift its
      result correspondingly.

NOTE: there is a buffered version of this function, ConvR1DCircularBuf(),
      which can reuse space previously allocated in its output parameter C.

  -- ALGLIB --
     Copyright 21.07.2009 by Bochkanov Sergey
*************************************************************************/
void convr1dcircular(const real_1d_array &s, const ae_int_t m, const real_1d_array &r, const ae_int_t n, real_1d_array &c, const xparams _xparams = alglib::xdefault);
/*************************************************************************
1-dimensional circular real convolution, buffered version, which does not
reallocate C[] if its length is enough to store the result (i.e. it reuses
previously allocated memory as much as possible).

  -- ALGLIB --
     Copyright 30.11.2023 by Bochkanov Sergey
*************************************************************************/
void convr1dcircularbuf(const real_1d_array &s, const ae_int_t m, const real_1d_array &r, const ae_int_t n, real_1d_array &c, const xparams _xparams = alglib::xdefault);
/*************************************************************************
1-dimensional circular real deconvolution (inverse of ConvR1DCircular()).

Algorithm has M*log(M) complexity for any M (composite or prime).

INPUT PARAMETERS
    A   -   array[0..M-1] - convolved signal, A = conv(R, B)
    M   -   convolved signal length
    B   -   array[0..N-1] - response
    N   -   response length

OUTPUT PARAMETERS
    R   -   deconvolved signal. array[0..M-1].

NOTE: deconvolution is unstable process and may result in division by zero
      (if your response function is degenerate, i.e. has zero Fourier
      coefficient).

NOTE: It is assumed that B is zero at T<0. If it has non-zero values at
      negative T's, you can still use this subroutine - just shift its
      result correspondingly.

  -- ALGLIB --
     Copyright 21.07.2009 by Bochkanov Sergey
*************************************************************************/
void convr1dcircularinv(const real_1d_array &a, const ae_int_t m, const real_1d_array &b, const ae_int_t n, real_1d_array &r, const xparams _xparams = alglib::xdefault);
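The instability warning above follows from how circular deconvolution works: in the frequency domain it is element-wise division by the response's DFT, which fails when any coefficient is zero. A minimal pure-Python sketch (an O(M^2) DFT for clarity; here the response is assumed to be zero-padded to the signal length, whereas ALGLIB accepts N<=M and pads internally):

```python
import cmath

def dft(x, sign=-1):
    # Naive O(n^2) discrete Fourier transform; sign=-1 forward, sign=+1 inverse core.
    n = len(x)
    return [sum(x[t] * cmath.exp(sign * 2j * cmath.pi * t * k / n)
                for t in range(n)) for k in range(n)]

def circular_deconv_real(a, b):
    """Invert circular convolution in the frequency domain: S_k = A_k / B_k.
    Raises ZeroDivisionError when the response b has a zero DFT coefficient."""
    fa, fb = dft(a), dft(b)
    fs = [ak / bk for ak, bk in zip(fa, fb)]
    inv = dft(fs, sign=+1)
    return [z.real / len(a) for z in inv]

# [3, 2.5, 4, 5.5] is the circular convolution of [1, 2, 3, 4] with
# [1, 0.5, 0, 0]; deconvolution recovers the original signal.
print(circular_deconv_real([3.0, 2.5, 4.0, 5.5], [1.0, 0.5, 0.0, 0.0]))
```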
/************************************************************************* 1-dimensional circular real deconvolution, a buffered version of ConvR1DCircularInv() which does not reallocate R[] if its length is enough to store the result (i.e. it reuses previously allocated memory as much as possible). -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/
void convr1dcircularinvbuf(const real_1d_array &a, const ae_int_t m, const real_1d_array &b, const ae_int_t n, real_1d_array &r, const xparams _xparams = alglib::xdefault);
/************************************************************************* 1-dimensional real deconvolution (inverse of ConvR1D()). Algorithm has M*log(M) complexity for any M (composite or prime). INPUT PARAMETERS A - array[0..M-1] - convolved signal, A = conv(R, B) M - convolved signal length B - array[0..N-1] - response N - response length, N<=M OUTPUT PARAMETERS R - deconvolved signal. array[0..M-N]. NOTE: deconvolution is an unstable process and may result in division by zero (if your response function is degenerate, i.e. has a zero Fourier coefficient). NOTE: It is assumed that both A and B are zero at T<0. If one or both functions have non-zero values at negative T's, you can still use this subroutine - just shift its result correspondingly. NOTE: there is a buffered version of this function, ConvR1DInvBuf(), which can reuse space previously allocated in its output parameter R. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/
void convr1dinv(const real_1d_array &a, const ae_int_t m, const real_1d_array &b, const ae_int_t n, real_1d_array &r, const xparams _xparams = alglib::xdefault);
/************************************************************************* 1-dimensional real deconvolution (inverse of ConvR1D()), buffered version, which does not reallocate R[] if its length is enough to store the result (i.e. it reuses previously allocated memory as much as possible). -- ALGLIB -- Copyright 30.11.2023 by Bochkanov Sergey *************************************************************************/
void convr1dinvbuf(const real_1d_array &a, const ae_int_t m, const real_1d_array &b, const ae_int_t n, real_1d_array &r, const xparams _xparams = alglib::xdefault);
corrc1d
corrc1dbuf
corrc1dcircular
corrc1dcircularbuf
corrr1d
corrr1dbuf
corrr1dcircular
corrr1dcircularbuf
/************************************************************************* 1-dimensional complex cross-correlation. For given Pattern/Signal returns corr(Pattern,Signal) (non-circular). Correlation is calculated using reduction to convolution. Algorithm with max(M,N)*log(max(M,N)) complexity is used (see ConvC1D() for more info about performance). IMPORTANT: for historical reasons subroutine accepts its parameters in reversed order: CorrC1D(Signal, Pattern) = Pattern x Signal (using traditional definition of cross-correlation, denoting cross-correlation as "x"). INPUT PARAMETERS Signal - array[0..N-1] - complex function to be transformed, signal containing pattern N - problem size Pattern - array[0..M-1] - complex function to be transformed, pattern to 'search' within a signal M - problem size OUTPUT PARAMETERS R - cross-correlation, array[0..N+M-2]: * positive lags are stored in R[0..N-1], R[i] = sum(conj(pattern[j])*signal[i+j]) * negative lags are stored in R[N..N+M-2], R[N+M-1-i] = sum(conj(pattern[j])*signal[-i+j]) NOTE: It is assumed that pattern domain is [0..M-1]. If Pattern is non-zero on [-K..M-1], you can still use this subroutine, just shift result by K. NOTE: there is a buffered version of this function, CorrC1DBuf(), which can reuse space previously allocated in its output parameter R. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/
void corrc1d(const complex_1d_array &signal, const ae_int_t n, const complex_1d_array &pattern, const ae_int_t m, complex_1d_array &r, const xparams _xparams = alglib::xdefault);
/************************************************************************* 1-dimensional complex cross-correlation, a buffered version of CorrC1D() which does not reallocate R[] if its length is enough to store the result (i.e. it reuses previously allocated memory as much as possible). -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/
void corrc1dbuf(const complex_1d_array &signal, const ae_int_t n, const complex_1d_array &pattern, const ae_int_t m, complex_1d_array &r, const xparams _xparams = alglib::xdefault);
/************************************************************************* 1-dimensional circular complex cross-correlation. For given Pattern/Signal returns corr(Pattern,Signal) (circular). Algorithm has linearithmic complexity for any M/N. IMPORTANT: for historical reasons subroutine accepts its parameters in reversed order: CorrC1DCircular(Signal, Pattern) = Pattern x Signal (using traditional definition of cross-correlation, denoting cross-correlation as "x"). INPUT PARAMETERS Signal - array[0..M-1] - complex function to be transformed, periodic signal containing pattern M - problem size Pattern - array[0..N-1] - complex function to be transformed, non-periodic pattern to 'search' within a signal N - problem size OUTPUT PARAMETERS R - circular cross-correlation, array[0..M-1]. NOTE: there is a buffered version of this function, CorrC1DCircularBuf(), which can reuse space previously allocated in its output parameter C. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/
void corrc1dcircular(const complex_1d_array &signal, const ae_int_t m, const complex_1d_array &pattern, const ae_int_t n, complex_1d_array &c, const xparams _xparams = alglib::xdefault);
/************************************************************************* 1-dimensional circular complex cross-correlation. A buffered function which does not reallocate C[] if its length is enough to store the result (i.e. it reuses previously allocated memory as much as possible). -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/
void corrc1dcircularbuf(const complex_1d_array &signal, const ae_int_t m, const complex_1d_array &pattern, const ae_int_t n, complex_1d_array &c, const xparams _xparams = alglib::xdefault);
/************************************************************************* 1-dimensional real cross-correlation. For given Pattern/Signal returns corr(Pattern,Signal) (non-circular). Correlation is calculated using reduction to convolution. Algorithm with max(M,N)*log(max(M,N)) complexity is used (see ConvC1D() for more info about performance). IMPORTANT: for historical reasons subroutine accepts its parameters in reversed order: CorrR1D(Signal, Pattern) = Pattern x Signal (using traditional definition of cross-correlation, denoting cross-correlation as "x"). INPUT PARAMETERS Signal - array[0..N-1] - real function to be transformed, signal containing pattern N - problem size Pattern - array[0..M-1] - real function to be transformed, pattern to 'search' within a signal M - problem size OUTPUT PARAMETERS R - cross-correlation, array[0..N+M-2]: * positive lags are stored in R[0..N-1], R[i] = sum(pattern[j]*signal[i+j]) * negative lags are stored in R[N..N+M-2], R[N+M-1-i] = sum(pattern[j]*signal[-i+j]) NOTE: It is assumed that pattern domain is [0..M-1]. If Pattern is non-zero on [-K..M-1], you can still use this subroutine, just shift result by K. NOTE: there is a buffered version of this function, CorrR1DBuf(), which can reuse space previously allocated in its output parameter R. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/
void corrr1d(const real_1d_array &signal, const ae_int_t n, const real_1d_array &pattern, const ae_int_t m, real_1d_array &r, const xparams _xparams = alglib::xdefault);
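The lag layout documented above (positive lags first, then negative lags in wrap-around order) can be sketched in pure Python with direct sums (illustration only; ALGLIB computes this via FFT-based convolution):

```python
def corr_r1d(signal, pattern):
    """Non-circular real cross-correlation with ALGLIB's output layout:
    positive lags i=0..N-1 at R[i],      R[i] = sum_j pattern[j]*signal[i+j]
    negative lags i=1..M-1 at R[N+M-1-i], value = sum_j pattern[j]*signal[j-i]
    (signal treated as zero outside [0, N))."""
    n, m = len(signal), len(pattern)
    r = [0.0] * (n + m - 1)
    for i in range(n):            # positive lags
        r[i] = sum(pattern[j] * signal[i + j]
                   for j in range(m) if i + j < n)
    for i in range(1, m):         # negative lags, stored in wrap-around order
        r[n + m - 1 - i] = sum(pattern[j] * signal[j - i]
                               for j in range(m) if 0 <= j - i < n)
    return r

print(corr_r1d([1.0, 2.0, 3.0], [1.0, 1.0]))  # -> [3.0, 5.0, 3.0, 1.0]
```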
/************************************************************************* 1-dimensional real cross-correlation, buffered function, which does not reallocate R[] if its length is enough to store the result (i.e. it reuses previously allocated memory as much as possible). -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/
void corrr1dbuf(const real_1d_array &signal, const ae_int_t n, const real_1d_array &pattern, const ae_int_t m, real_1d_array &r, const xparams _xparams = alglib::xdefault);
/************************************************************************* 1-dimensional circular real cross-correlation. For given Pattern/Signal returns corr(Pattern,Signal) (circular). Algorithm has linearithmic complexity for any M/N. IMPORTANT: for historical reasons subroutine accepts its parameters in reversed order: CorrR1DCircular(Signal, Pattern) = Pattern x Signal (using traditional definition of cross-correlation, denoting cross-correlation as "x"). INPUT PARAMETERS Signal - array[0..M-1] - real function to be transformed, periodic signal containing pattern M - problem size Pattern - array[0..N-1] - real function to be transformed, non-periodic pattern to search within the signal N - problem size OUTPUT PARAMETERS R - circular cross-correlation, array[0..M-1]. NOTE: there is a buffered version of this function, CorrR1DCircularBuf(), which can reuse space previously allocated in its output parameter C. -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/
void corrr1dcircular(const real_1d_array &signal, const ae_int_t m, const real_1d_array &pattern, const ae_int_t n, real_1d_array &c, const xparams _xparams = alglib::xdefault);
/************************************************************************* 1-dimensional circular real cross-correlation, buffered version, which does not reallocate C[] if its length is enough to store the result (i.e. it reuses previously allocated memory as much as possible). -- ALGLIB -- Copyright 21.07.2009 by Bochkanov Sergey *************************************************************************/
void corrr1dcircularbuf(const real_1d_array &signal, const ae_int_t m, const real_1d_array &pattern, const ae_int_t n, real_1d_array &c, const xparams _xparams = alglib::xdefault);
pearsoncorrelationsignificance
spearmanrankcorrelationsignificance
/************************************************************************* Pearson's correlation coefficient significance test This test checks hypotheses about whether X and Y are samples of two continuous distributions having zero correlation or whether their correlation is non-zero. The following tests are performed: * two-tailed test (null hypothesis - X and Y have zero correlation) * left-tailed test (null hypothesis - the correlation coefficient is greater than or equal to 0) * right-tailed test (null hypothesis - the correlation coefficient is less than or equal to 0). Requirements: * the number of elements in each sample is not less than 5 * normality of distributions of X and Y. Input parameters: R - Pearson's correlation coefficient for X and Y N - number of elements in samples, N>=5. Output parameters: BothTails - p-value for two-tailed test. If BothTails is less than the given significance level the null hypothesis is rejected. LeftTail - p-value for left-tailed test. If LeftTail is less than the given significance level, the null hypothesis is rejected. RightTail - p-value for right-tailed test. If RightTail is less than the given significance level the null hypothesis is rejected. -- ALGLIB -- Copyright 09.04.2007 by Bochkanov Sergey *************************************************************************/
void pearsoncorrelationsignificance(const double r, const ae_int_t n, double &bothtails, double &lefttail, double &righttail, const xparams _xparams = alglib::xdefault);
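This kind of test rests on the classical result that, under the null hypothesis (zero correlation, bivariate normality), t = r*sqrt((n-2)/(1-r^2)) follows Student's t distribution with n-2 degrees of freedom; the reported p-values are tail probabilities of that distribution. A sketch of the statistic itself (the internals of ALGLIB's implementation are not shown here):

```python
import math

def pearson_t_statistic(r, n):
    """Test statistic for Pearson's correlation significance:
    under H0, t ~ Student's t with n-2 degrees of freedom."""
    return r * math.sqrt((n - 2) / (1.0 - r * r))

# r = 0.5 over 27 samples gives t ~= 2.887, which exceeds the two-tailed
# 5% critical value of t(25) (~2.06), so H0 would be rejected at that level.
print(pearson_t_statistic(0.5, 27))
```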
/************************************************************************* Spearman's rank correlation coefficient significance test This test checks hypotheses about whether X and Y are samples of two continuous distributions having zero correlation or whether their correlation is non-zero. The following tests are performed: * two-tailed test (null hypothesis - X and Y have zero correlation) * left-tailed test (null hypothesis - the correlation coefficient is greater than or equal to 0) * right-tailed test (null hypothesis - the correlation coefficient is less than or equal to 0). Requirements: * the number of elements in each sample is not less than 5. The test is non-parametric and doesn't require distributions X and Y to be normal. Input parameters: R - Spearman's rank correlation coefficient for X and Y N - number of elements in samples, N>=5. Output parameters: BothTails - p-value for two-tailed test. If BothTails is less than the given significance level the null hypothesis is rejected. LeftTail - p-value for left-tailed test. If LeftTail is less than the given significance level, the null hypothesis is rejected. RightTail - p-value for right-tailed test. If RightTail is less than the given significance level the null hypothesis is rejected. -- ALGLIB -- Copyright 09.04.2007 by Bochkanov Sergey *************************************************************************/
void spearmanrankcorrelationsignificance(const double r, const ae_int_t n, double &bothtails, double &lefttail, double &righttail, const xparams _xparams = alglib::xdefault);
kmeansgenerate
/************************************************************************* k-means++ clustering. Backward-compatibility function; we recommend using the CLUSTERING subpackage as a better replacement. -- ALGLIB -- Copyright 21.03.2009 by Bochkanov Sergey *************************************************************************/
void kmeansgenerate(const real_2d_array &xy, const ae_int_t npoints, const ae_int_t nvars, const ae_int_t k, const ae_int_t restarts, ae_int_t &info, real_2d_array &c, integer_1d_array &xyc, const xparams _xparams = alglib::xdefault);
dawsonintegral
/************************************************************************* Dawson's Integral Approximates the integral dawsn(x) = exp(-x^2) * Integral(exp(t^2), t=0..x) Three different rational approximations are employed, for the intervals 0 to 3.25; 3.25 to 6.25; and 6.25 up. ACCURACY: Relative error: arithmetic domain # trials peak rms IEEE 0,10 10000 6.9e-16 1.0e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/
double dawsonintegral(const double x, const xparams _xparams = alglib::xdefault);
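The definition above is easy to verify numerically: brute-force quadrature of exp(t^2) reproduces the function to many digits for moderate x. A pure-Python sketch using Simpson's rule (illustration only, not the Cephes rational-approximation algorithm):

```python
import math

def dawson(x, steps=1000):
    """dawsn(x) = exp(-x^2) * Integral_0^x exp(t^2) dt, via Simpson's rule
    with `steps` (even) subintervals."""
    if x == 0.0:
        return 0.0
    h = x / steps
    total = 1.0 + math.exp(x * x)          # endpoint weights
    for k in range(1, steps):
        t = k * h
        total += (4 if k % 2 else 2) * math.exp(t * t)
    integral = total * h / 3.0
    return math.exp(-x * x) * integral

print(dawson(1.0))  # ~0.5380795, matching the known value of dawsn(1)
```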
decisionforest
decisionforestbuffer
decisionforestbuilder
dfreport
dfavgce
dfavgerror
dfavgrelerror
dfbinarycompression
dfbuilderbuildrandomforest
dfbuildercreate
dfbuildergetprogress
dfbuilderpeekprogress
dfbuildersetdataset
dfbuildersetimportancenone
dfbuildersetimportanceoobgini
dfbuildersetimportancepermutation
dfbuildersetimportancetrngini
dfbuildersetrdfalgo
dfbuildersetrdfsplitstrength
dfbuildersetrndvars
dfbuildersetrndvarsauto
dfbuildersetrndvarsratio
dfbuildersetseed
dfbuildersetsubsampleratio
dfbuildrandomdecisionforest
dfbuildrandomdecisionforestx1
dfclassify
dfcreatebuffer
dfprocess
dfprocess0
dfprocessi
dfrelclserror
dfrmserror
dfserialize
dftsprocess
dfunserialize
randomforest_cls Simple classification with random forests
randomforest_reg Simple regression with decision forest
/************************************************************************* Decision forest (random forest) model. *************************************************************************/
class decisionforest { public: decisionforest(); decisionforest(const decisionforest &rhs); decisionforest& operator=(const decisionforest &rhs); virtual ~decisionforest(); };
/************************************************************************* Buffer object which is used to perform various requests (usually model inference) in the multithreaded mode (multiple threads working with same DF object). This object should be created with DFCreateBuffer(). *************************************************************************/
class decisionforestbuffer { public: decisionforestbuffer(); decisionforestbuffer(const decisionforestbuffer &rhs); decisionforestbuffer& operator=(const decisionforestbuffer &rhs); virtual ~decisionforestbuffer(); };
/************************************************************************* A random forest (decision forest) builder object. Used to store dataset and specify decision forest training algorithm settings. *************************************************************************/
class decisionforestbuilder { public: decisionforestbuilder(); decisionforestbuilder(const decisionforestbuilder &rhs); decisionforestbuilder& operator=(const decisionforestbuilder &rhs); virtual ~decisionforestbuilder(); };
/************************************************************************* Decision forest training report. === training/oob errors ================================================== The following fields store training set errors: * relclserror - fraction of misclassified cases, [0,1] * avgce - average cross-entropy in bits per symbol * rmserror - root-mean-square error * avgerror - average error * avgrelerror - average relative error Out-of-bag estimates are stored in fields with the same names, but with the "oob" prefix. For classification problems: * RMS, AVG and AVGREL errors are calculated for posterior probabilities For regression problems: * RELCLS and AVGCE errors are zero === variable importance ================================================== The following fields are used to store variable importance information: * topvars - variables ordered from the most important to the least important ones (according to the current choice of importance rating). For example, topvars[0] contains the index of the most important variable, topvars[0:2] are the indexes of the 3 most important ones, and so on. * varimportances - array[nvars], ratings (the larger, the more important the variable is; always in the [0,1] range). By default, filled with zeros (no importance ratings are provided unless you explicitly request them). A zero rating means that the variable is not important; however, you will rarely encounter such a thing - in many cases unimportant variables produce nearly-zero (but nonzero) ratings. A variable importance report must be EXPLICITLY requested by calling: * the dfbuildersetimportanceoobgini() function, if you need an out-of-bag Gini-based importance rating, also known as MDI (fast to calculate, resistant to overfitting issues, but has some bias towards continuous and high-cardinality categorical variables) * the dfbuildersetimportancetrngini() function, if you need a training set Gini-based importance rating (what other packages typically report). 
* the dfbuildersetimportancepermutation() function, if you need a permutation-based importance rating, also known as MDA (slower to calculate, but less biased) * the dfbuildersetimportancenone() function, if you do not need importance ratings - ratings will be zero, topvars[] will be [0,1,2,...] Different importance ratings (Gini or permutation) produce non-comparable values. Although in all cases rating values lie in the [0,1] range, there exist differences: * informally speaking, the Gini importance rating tends to divide a "unit amount of importance" between several important variables, i.e. it produces estimates which roughly sum to 1.0 (or less than 1.0, if your task can not be solved exactly). If all variables are equally important, they will have the same rating, roughly 1/NVars, even if every variable is critically important. * on the other hand, permutation importance tells us what percentage of the model predictive power will be ruined by permuting this specific variable. It does not produce estimates which sum to one. A critically important variable will have a rating close to 1.0, and you may have multiple variables with such a rating. More information on variable importance ratings can be found in comments on the dfbuildersetimportanceoobgini() and dfbuildersetimportancepermutation() functions. *************************************************************************/
class dfreport { public: dfreport(); dfreport(const dfreport &rhs); dfreport& operator=(const dfreport &rhs); virtual ~dfreport(); double relclserror; double avgce; double rmserror; double avgerror; double avgrelerror; double oobrelclserror; double oobavgce; double oobrmserror; double oobavgerror; double oobavgrelerror; integer_1d_array topvars; real_1d_array varimportances; };
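The permutation (MDA) rating discussed above has a simple operational meaning: shuffle one input column while keeping everything else fixed, and measure how much the model's error grows. A toy pure-Python sketch with a hypothetical model that uses only the first variable (illustrative of the concept only, not ALGLIB's algorithm; a fixed permutation stands in for a random shuffle):

```python
def permutation_importance(model, xs, ys, column, permutation):
    """Increase in mean squared error after permuting one input column.
    `model` maps a row (list of floats) to a prediction."""
    def mse(rows):
        return sum((model(r) - y) ** 2 for r, y in zip(rows, ys)) / len(ys)
    base = mse(xs)
    shuffled = [list(r) for r in xs]
    col = [xs[p][column] for p in permutation]   # deterministic "shuffle"
    for r, v in zip(shuffled, col):
        r[column] = v
    return mse(shuffled) - base

# The model predicts y = x0, so permuting column 0 ruins it (importance 5.0)
# while permuting the irrelevant column 1 changes nothing (importance 0.0).
xs = [[1.0, 9.0], [2.0, 8.0], [3.0, 7.0], [4.0, 6.0]]
ys = [1.0, 2.0, 3.0, 4.0]
print(permutation_importance(lambda r: r[0], xs, ys, 0, [3, 2, 1, 0]))  # 5.0
print(permutation_importance(lambda r: r[0], xs, ys, 1, [3, 2, 1, 0]))  # 0.0
```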
/************************************************************************* Average cross-entropy (in bits per element) on the test set INPUT PARAMETERS: DF - decision forest model XY - test set NPoints - test set size RESULT: CrossEntropy/(NPoints*LN(2)). Zero if model solves regression task. -- ALGLIB -- Copyright 16.02.2009 by Bochkanov Sergey *************************************************************************/
double dfavgce(const decisionforest &df, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* Average error on the test set INPUT PARAMETERS: DF - decision forest model XY - test set NPoints - test set size RESULT: Its meaning for regression task is obvious. As for classification task, it means average error when estimating posterior probabilities. -- ALGLIB -- Copyright 16.02.2009 by Bochkanov Sergey *************************************************************************/
double dfavgerror(const decisionforest &df, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* Average relative error on the test set INPUT PARAMETERS: DF - decision forest model XY - test set NPoints - test set size RESULT: Its meaning for regression task is obvious. As for classification task, it means average relative error when estimating posterior probability of belonging to the correct class. -- ALGLIB -- Copyright 16.02.2009 by Bochkanov Sergey *************************************************************************/
double dfavgrelerror(const decisionforest &df, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function performs binary compression of the decision forest. The original decision forest produced by the forest builder is stored using a 64-bit representation for all numbers - offsets, variable indexes, split points. It is possible to significantly reduce the model size by means of: * using compressed dynamic encoding for integers (offsets and variable indexes), which uses just 1 byte to store small ints (less than 128), just 2 bytes for larger values (less than 128^2) and so on * storing floating point numbers using an 8-bit exponent and 16-bit mantissa As a result, the model needs significantly less memory (the compression factor depends on variable and class counts). In particular: * NVars<128 and NClasses<128 result in 4.4x-5.7x model size reduction * NVars<16384 and NClasses<128 result in 3.7x-4.5x model size reduction Such a storage format performs lossless compression of all integers, but compression of floating point values (split values) is lossy, with roughly 0.01% relative error introduced during rounding. Thus, we recommend re-evaluating model accuracy after compression. Another downside of compression is a ~1.5x reduction in the inference speed due to the need to dynamically decompress the compressed model. INPUT PARAMETERS: DF - decision forest built by the forest builder OUTPUT PARAMETERS: DF - replaced by the compressed forest RESULT: compression factor (in-RAM size of the uncompressed model divided by that of the compressed one), a positive number larger than 1.0 -- ALGLIB -- Copyright 22.07.2019 by Bochkanov Sergey *************************************************************************/
double dfbinarycompression(decisionforest &df, const xparams _xparams = alglib::xdefault);
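The "compressed dynamic encoding for integers" described above (1 byte below 128, 2 bytes below 128^2, and so on) follows the familiar variable-length-integer idea. The sketch below shows a generic LEB128-style encoding; ALGLIB's exact byte format is not specified here, so treat this as an illustration of the principle only:

```python
def encode_varint(value):
    """LEB128-style encoding: 7 payload bits per byte, high bit set on all
    bytes except the last. Values < 128 take 1 byte, < 128**2 take 2, etc."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)   # more bytes follow
        else:
            out.append(byte)          # final byte
            return bytes(out)

def decode_varint(data):
    """Decode a single varint (assumes `data` holds exactly one value)."""
    value, shift = 0, 0
    for byte in data:
        value |= (byte & 0x7F) << shift
        shift += 7
    return value

print(len(encode_varint(100)), len(encode_varint(300)))  # -> 1 2
```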
/************************************************************************* This subroutine builds a decision forest according to the current settings using the dataset internally stored in the builder object. The dense algorithm is used. NOTE: this function uses the dense algorithm for forest construction regardless of the dataset format (dense or sparse). NOTE: a forest built with this function is stored in-memory using 64-bit data structures for offsets/indexes/split values. It is possible to convert the forest into a more memory-efficient compressed binary representation. Depending on problem properties, 3.7x-5.7x compression factors are possible. The downsides of compression are (a) a slight reduction in the model accuracy and (b) a ~1.5x reduction in the inference speed (due to increased complexity of the storage format). See comments on dfbinarycompression() for more info. Default settings are used by the algorithm; you can tweak them with the help of the following functions: * dfbuildersetsubsampleratio() - to control the fraction of the dataset used for subsampling * dfbuildersetrndvars() - to control the number of variables randomly chosen for decision rule creation ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes the following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend you to read the 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by the commercial edition of ALGLIB. INPUT PARAMETERS: S - decision forest builder object NTrees - NTrees>=1, number of trees to train OUTPUT PARAMETERS: DF - decision forest. You can compress this forest to a more compact 16-bit representation with dfbinarycompression() Rep - report, see below for information on its fields. 
=== report information produced by forest construction function ========== The decision forest training report includes the following information: * training set errors * out-of-bag estimates of errors * variable importance ratings The following fields are used to store this information: * training set errors are stored in rep.relclserror, rep.avgce, rep.rmserror, rep.avgerror and rep.avgrelerror * out-of-bag estimates of errors are stored in rep.oobrelclserror, rep.oobavgce, rep.oobrmserror, rep.oobavgerror and rep.oobavgrelerror Variable importance reports, if requested by a dfbuildersetimportanceoobgini(), dfbuildersetimportancetrngini() or dfbuildersetimportancepermutation() call, are stored in: * the rep.varimportances field, which stores the importance ratings * rep.topvars, which stores variable indexes ordered from the most important to the least important ones You can find more information about the report fields in: * comments on the dfreport structure * comments on the dfbuildersetimportanceoobgini function * comments on the dfbuildersetimportancetrngini function * comments on the dfbuildersetimportancepermutation function -- ALGLIB -- Copyright 21.05.2018 by Bochkanov Sergey *************************************************************************/
void dfbuilderbuildrandomforest(decisionforestbuilder &s, const ae_int_t ntrees, decisionforest &df, dfreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* This subroutine creates a DecisionForestBuilder object which is used to train decision forests. By default, the new builder stores an empty dataset and some reasonable default settings. At the very least, you should specify the dataset prior to building the decision forest. You can also tweak settings of the forest construction algorithm (recommended, although the default settings should work well). The following actions are mandatory: * calling dfbuildersetdataset() to specify the dataset * calling dfbuilderbuildrandomforest() to build the decision forest using the current dataset and default settings Additionally, you may call: * dfbuildersetrndvars() or dfbuildersetrndvarsratio() to specify the number of variables randomly chosen for each split * dfbuildersetsubsampleratio() to specify the fraction of the dataset randomly subsampled to build each tree * dfbuildersetseed() to control the random seed chosen for tree construction INPUT PARAMETERS: none OUTPUT PARAMETERS: S - decision forest builder -- ALGLIB -- Copyright 21.05.2018 by Bochkanov Sergey *************************************************************************/
void dfbuildercreate(decisionforestbuilder &s, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* This function is an alias for dfbuilderpeekprogress(), left in ALGLIB for backward compatibility reasons. -- ALGLIB -- Copyright 21.05.2018 by Bochkanov Sergey *************************************************************************/
double dfbuildergetprogress(const decisionforestbuilder &s, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* This function is used to peek into decision forest construction process from some other thread and get current progress indicator. It returns value in [0,1]. INPUT PARAMETERS: S - decision forest builder object used to build forest in some other thread RESULT: progress value, in [0,1] -- ALGLIB -- Copyright 21.05.2018 by Bochkanov Sergey *************************************************************************/
double dfbuilderpeekprogress(const decisionforestbuilder &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine adds a dense dataset to the internal storage of the builder object. Specifying your dataset in the dense format means that the dense version of the forest construction algorithm will be invoked. INPUT PARAMETERS: S - decision forest builder object XY - array[NPoints,NVars+1] (minimum size; actual size can be larger, only the leading part is used anyway), dataset: * the first NVars elements of each row store values of the independent variables * the last column stores the class number (in 0...NClasses-1) or the real value of the dependent variable NPoints - number of rows in the dataset, NPoints>=1 NVars - number of independent variables, NVars>=1 NClasses - indicates the type of the problem being solved: * NClasses>=2 means that a classification problem is solved (the last column of the dataset stores the class number) * NClasses=1 means that a regression problem is solved (the last column of the dataset stores the variable value) OUTPUT PARAMETERS: S - decision forest builder -- ALGLIB -- Copyright 21.05.2018 by Bochkanov Sergey *************************************************************************/
void dfbuildersetdataset(decisionforestbuilder &s, const real_2d_array &xy, const ae_int_t npoints, const ae_int_t nvars, const ae_int_t nclasses, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* This function tells decision forest construction algorithm to skip variable importance estimation. INPUT PARAMETERS: S - decision forest builder object OUTPUT PARAMETERS: S - decision forest builder object. Next call to the forest construction function will result in forest being built without variable importance estimation. -- ALGLIB -- Copyright 29.07.2019 by Bochkanov Sergey *************************************************************************/
void dfbuildersetimportancenone(decisionforestbuilder &s, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function tells decision forest construction algorithm to use
out-of-bag version of Gini variable importance estimation (also known as
OOB-MDI).

This version of importance estimation algorithm analyzes mean decrease in
impurity (MDI) on out-of-bag sample during splits. The result is divided
by impurity at the root node in order to produce estimate in [0,1] range.

Such estimates are fast to calculate and resistant to overfitting issues
(thanks to the out-of-bag estimates used). However, OOB Gini rating has
the following downsides:
* there exists some bias towards continuous and high-cardinality
  categorical variables
* Gini rating allows us to order variables by importance, but it is hard
  to define importance of the variable by itself.

NOTE: informally speaking, MDA (permutation importance) rating answers
      the question "what part of the model predictive power is ruined by
      permuting k-th variable?" while MDI tells us "what part of the
      model predictive power was achieved due to usage of k-th variable".

      Thus, MDA rates each variable independently at "0 to 1" scale while
      MDI (and OOB-MDI too) tends to divide "unit amount of importance"
      between several important variables.

      If all variables are equally important, they will have same
      MDI/OOB-MDI rating, equal (for OOB-MDI: roughly equal) to 1/NVars.
      However, roughly same picture will be produced for the "all
      variables provide information, no one is critical" situation and
      for the "all variables are critical, drop any one, everything is
      ruined" situation.

      Contrary to that, MDA will rate critical variable as ~1.0
      important, and important but non-critical variable will have less
      than unit rating.

NOTE: quite often MDA and MDI return same results. It generally happens
      on problems with low test set error (a few percents at most) and
      large enough training set to avoid overfitting.

      The difference between MDA, MDI and OOB-MDI becomes important only
      on "hard" tasks with high test set error and/or small training set.

INPUT PARAMETERS:
    S           -   decision forest builder object

OUTPUT PARAMETERS:
    S           -   decision forest builder object. Next call to the
                    forest construction function will produce:
                    * importance estimates in rep.varimportances field
                    * variable ranks in rep.topvars field

  -- ALGLIB --
     Copyright 29.07.2019 by Bochkanov Sergey
*************************************************************************/
void dfbuildersetimportanceoobgini(decisionforestbuilder &s, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function tells decision forest construction algorithm to use
permutation variable importance estimator (also known as MDA).

This version of importance estimation algorithm analyzes mean increase in
out-of-bag sum of squared residuals after random permutation of J-th
variable. The result is divided by error computed with all variables
being perturbed in order to produce R-squared-like estimate in [0,1]
range.

Such estimate is slower to calculate than Gini-based rating because it
needs multiple inference runs for each of the variables being studied.

ALGLIB uses parallelized and highly optimized algorithm which analyzes
path through the decision tree and allows it to handle most perturbations
in O(1) time; nevertheless, requesting MDA importances may increase
forest construction time from 10% to 200% (or more, if you have thousands
of variables).

However, MDA rating has the following benefits over Gini-based ones:
* no bias towards specific variable types
* ability to directly evaluate "absolute" importance of some variable at
  "0 to 1" scale (contrary to Gini-based rating, which returns
  comparative importances).

NOTE: informally speaking, MDA (permutation importance) rating answers
      the question "what part of the model predictive power is ruined by
      permuting k-th variable?" while MDI tells us "what part of the
      model predictive power was achieved due to usage of k-th variable".

      Thus, MDA rates each variable independently at "0 to 1" scale while
      MDI (and OOB-MDI too) tends to divide "unit amount of importance"
      between several important variables.

      If all variables are equally important, they will have same
      MDI/OOB-MDI rating, equal (for OOB-MDI: roughly equal) to 1/NVars.
      However, roughly same picture will be produced for the "all
      variables provide information, no one is critical" situation and
      for the "all variables are critical, drop any one, everything is
      ruined" situation.

      Contrary to that, MDA will rate critical variable as ~1.0
      important, and important but non-critical variable will have less
      than unit rating.

NOTE: quite often MDA and MDI return same results. It generally happens
      on problems with low test set error (a few percents at most) and
      large enough training set to avoid overfitting.

      The difference between MDA, MDI and OOB-MDI becomes important only
      on "hard" tasks with high test set error and/or small training set.

INPUT PARAMETERS:
    S           -   decision forest builder object

OUTPUT PARAMETERS:
    S           -   decision forest builder object. Next call to the
                    forest construction function will produce:
                    * importance estimates in rep.varimportances field
                    * variable ranks in rep.topvars field

  -- ALGLIB --
     Copyright 29.07.2019 by Bochkanov Sergey
*************************************************************************/
void dfbuildersetimportancepermutation(decisionforestbuilder &s, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function tells decision forest construction algorithm to use Gini
impurity based variable importance estimation (also known as MDI).

This version of importance estimation algorithm analyzes mean decrease in
impurity (MDI) on training sample during splits. The result is divided by
impurity at the root node in order to produce estimate in [0,1] range.

Such estimates are fast to calculate and beautifully normalized (sum to
one) but have the following downsides:
* they ALWAYS sum to 1.0, even if output is completely unpredictable,
  i.e. MDI allows us to order variables by importance, but does not tell
  us about "absolute" importances of variables
* there exists some bias towards continuous and high-cardinality
  categorical variables

NOTE: informally speaking, MDA (permutation importance) rating answers
      the question "what part of the model predictive power is ruined by
      permuting k-th variable?" while MDI tells us "what part of the
      model predictive power was achieved due to usage of k-th variable".

      Thus, MDA rates each variable independently at "0 to 1" scale while
      MDI (and OOB-MDI too) tends to divide "unit amount of importance"
      between several important variables.

      If all variables are equally important, they will have same
      MDI/OOB-MDI rating, equal (for OOB-MDI: roughly equal) to 1/NVars.
      However, roughly same picture will be produced for the "all
      variables provide information, no one is critical" situation and
      for the "all variables are critical, drop any one, everything is
      ruined" situation.

      Contrary to that, MDA will rate critical variable as ~1.0
      important, and important but non-critical variable will have less
      than unit rating.

NOTE: quite often MDA and MDI return same results. It generally happens
      on problems with low test set error (a few percents at most) and
      large enough training set to avoid overfitting.

      The difference between MDA, MDI and OOB-MDI becomes important only
      on "hard" tasks with high test set error and/or small training set.

INPUT PARAMETERS:
    S           -   decision forest builder object

OUTPUT PARAMETERS:
    S           -   decision forest builder object. Next call to the
                    forest construction function will produce:
                    * importance estimates in rep.varimportances field
                    * variable ranks in rep.topvars field

  -- ALGLIB --
     Copyright 29.07.2019 by Bochkanov Sergey
*************************************************************************/
void dfbuildersetimportancetrngini(decisionforestbuilder &s, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets random decision forest construction algorithm. As for
now, only one decision forest construction algorithm is supported - a
dense "baseline" RDF algorithm.

INPUT PARAMETERS:
    S           -   decision forest builder object
    AlgoType    -   algorithm type:
                    * 0 = baseline dense RDF

OUTPUT PARAMETERS:
    S           -   decision forest builder

  -- ALGLIB --
     Copyright 21.05.2018 by Bochkanov Sergey
*************************************************************************/
void dfbuildersetrdfalgo(decisionforestbuilder &s, const ae_int_t algotype, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets split selection algorithm used by decision forest
classifier. You may choose among several algorithms, with different speed
and quality of the results.

INPUT PARAMETERS:
    S           -   decision forest builder object
    SplitStrength-  split type:
                    * 0 = split at the random position, fastest one
                    * 1 = split at the middle of the range
                    * 2 = strong split at the best point of the range
                          (default)

OUTPUT PARAMETERS:
    S           -   decision forest builder

  -- ALGLIB --
     Copyright 21.05.2018 by Bochkanov Sergey
*************************************************************************/
void dfbuildersetrdfsplitstrength(decisionforestbuilder &s, const ae_int_t splitstrength, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This function sets number of variables (in [1,NVars] range) used by
decision forest construction algorithm.

The default option is to use roughly sqrt(NVars) variables.

INPUT PARAMETERS:
    S           -   decision forest builder object
    RndVars     -   number of randomly selected variables; values outside
                    of [1,NVars] range are silently clipped.

OUTPUT PARAMETERS:
    S           -   decision forest builder

  -- ALGLIB --
     Copyright 21.05.2018 by Bochkanov Sergey
*************************************************************************/
void dfbuildersetrndvars(decisionforestbuilder &s, const ae_int_t rndvars, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This function tells decision forest builder to automatically choose
number of variables used by decision forest construction algorithm.
Roughly sqrt(NVars) variables will be used.

INPUT PARAMETERS:
    S           -   decision forest builder object

OUTPUT PARAMETERS:
    S           -   decision forest builder

  -- ALGLIB --
     Copyright 21.05.2018 by Bochkanov Sergey
*************************************************************************/
void dfbuildersetrndvarsauto(decisionforestbuilder &s, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This function sets number of variables used by decision forest
construction algorithm as a fraction of the total variable count, in the
(0,1) range.

The default option is to use roughly sqrt(NVars) variables.

INPUT PARAMETERS:
    S           -   decision forest builder object
    F           -   round(NVars*F) variables are selected

OUTPUT PARAMETERS:
    S           -   decision forest builder

  -- ALGLIB --
     Copyright 21.05.2018 by Bochkanov Sergey
*************************************************************************/
void dfbuildersetrndvarsratio(decisionforestbuilder &s, const double f, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This function sets seed used by internal RNG for random subsampling and
random selection of variable subsets.

By default random seed is used, i.e. every time you build decision
forest, we seed generator with new value obtained from system-wide RNG.
Thus, decision forest builder returns non-deterministic results. You can
change such behavior by specifying fixed positive seed value.

INPUT PARAMETERS:
    S           -   decision forest builder object
    SeedVal     -   seed value:
                    * positive values are used for seeding RNG with fixed
                      seed, i.e. subsequent runs on same data will return
                      same decision forests
                    * non-positive seed means that random seed is used
                      for every run of builder, i.e. subsequent runs on
                      same datasets will return slightly different
                      decision forests

OUTPUT PARAMETERS:
    S           -   decision forest builder

  -- ALGLIB --
     Copyright 21.05.2018 by Bochkanov Sergey
*************************************************************************/
void dfbuildersetseed(decisionforestbuilder &s, const ae_int_t seedval, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This function sets size of dataset subsample generated by the decision
forest construction algorithm. Size is specified as a fraction of total
dataset size.

The default option is to use 50% of the dataset for training, 50% for the
OOB estimates. You can decrease fraction F down to 10%, 1% or even below
in order to reduce overfitting.

INPUT PARAMETERS:
    S           -   decision forest builder object
    F           -   fraction of the dataset to use, in (0,1] range. Values
                    outside of this range will be silently clipped. At
                    least one element is always selected for the training
                    set.

OUTPUT PARAMETERS:
    S           -   decision forest builder

  -- ALGLIB --
     Copyright 21.05.2018 by Bochkanov Sergey
*************************************************************************/
void dfbuildersetsubsampleratio(decisionforestbuilder &s, const double f, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This subroutine builds random decision forest.

--------- DEPRECATED VERSION! USE DECISION FOREST BUILDER OBJECT ---------

  -- ALGLIB --
     Copyright 19.02.2009 by Bochkanov Sergey
*************************************************************************/
void dfbuildrandomdecisionforest(const real_2d_array &xy, const ae_int_t npoints, const ae_int_t nvars, const ae_int_t nclasses, const ae_int_t ntrees, const double r, ae_int_t &info, decisionforest &df, dfreport &rep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This subroutine builds random decision forest.

--------- DEPRECATED VERSION! USE DECISION FOREST BUILDER OBJECT ---------

  -- ALGLIB --
     Copyright 19.02.2009 by Bochkanov Sergey
*************************************************************************/
void dfbuildrandomdecisionforestx1(const real_2d_array &xy, const ae_int_t npoints, const ae_int_t nvars, const ae_int_t nclasses, const ae_int_t ntrees, const ae_int_t nrndvars, const double r, ae_int_t &info, decisionforest &df, dfreport &rep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function returns most probable class number for an input X. It is
same as calling dfprocess(model,x,y), then determining i=argmax(y[i]) and
returning i.

A class number in [0,NOut) range is returned for classification problems,
-1 is returned when this function is called for regression problems.

IMPORTANT: this function is thread-unsafe and modifies internal
           structures of the model! You can not use same model object for
           parallel evaluation from several threads. Use dftsprocess()
           with independent thread-local buffers, if you need thread-safe
           evaluation.

INPUT PARAMETERS:
    Model   -   decision forest model
    X       -   input vector, array[0..NVars-1].

RESULT:
    class number, -1 for regression tasks

  -- ALGLIB --
     Copyright 15.02.2019 by Bochkanov Sergey
*************************************************************************/
ae_int_t dfclassify(decisionforest &model, const real_1d_array &x, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This function creates buffer structure which can be used to perform
parallel inference requests.

DF subpackage provides two sets of computing functions - ones which use
internal buffer of DF model (these functions are single-threaded because
they use same buffer, which can not be shared between threads), and ones
which use external buffer.

This function is used to initialize external buffer.

INPUT PARAMETERS
    Model   -   DF model which is associated with newly created buffer

OUTPUT PARAMETERS
    Buf     -   external buffer.

IMPORTANT: buffer object should be used only with model which was used to
           initialize buffer. Any attempt to use buffer with different
           object is dangerous - you may get integrity check failure
           (exception) because sizes of internal arrays do not fit to
           dimensions of the model structure.

  -- ALGLIB --
     Copyright 15.02.2019 by Bochkanov Sergey
*************************************************************************/
void dfcreatebuffer(const decisionforest &model, decisionforestbuffer &buf, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Inference using decision forest

IMPORTANT: this function is thread-unsafe and may modify internal
           structures of the model! You can not use same model object for
           parallel evaluation from several threads. Use dftsprocess()
           with independent thread-local buffers if you need thread-safe
           evaluation.

INPUT PARAMETERS:
    DF      -   decision forest model
    X       -   input vector, array[NVars]
    Y       -   possibly preallocated buffer, reallocated if too small

OUTPUT PARAMETERS:
    Y       -   result. Regression estimate when solving regression task,
                vector of posterior probabilities for classification task.

See also DFProcessI.

  -- ALGLIB --
     Copyright 16.02.2009 by Bochkanov Sergey
*************************************************************************/
void dfprocess(const decisionforest &df, const real_1d_array &x, real_1d_array &y, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This function returns first component of the inferred vector (i.e. one
with index #0).

It is a convenience wrapper for dfprocess() intended for either:
* 1-dimensional regression problems
* 2-class classification problems

In the former case this function returns inference result as scalar,
which is definitely more convenient than wrapping it as vector. In the
latter case it returns probability of object belonging to class #0.

If you call it for anything different from two cases above, it will work
as defined, i.e. return y[0], although it is of less use in such cases.

IMPORTANT: this function is thread-unsafe and modifies internal
           structures of the model! You can not use same model object for
           parallel evaluation from several threads. Use dftsprocess()
           with independent thread-local buffers, if you need thread-safe
           evaluation.

INPUT PARAMETERS:
    Model   -   DF model
    X       -   input vector, array[0..NVars-1].

RESULT:
    Y[0]

  -- ALGLIB --
     Copyright 15.02.2019 by Bochkanov Sergey
*************************************************************************/
double dfprocess0(decisionforest &model, const real_1d_array &x, const xparams _xparams = alglib::xdefault);


/*************************************************************************
'interactive' variant of DFProcess for languages like Python which
support constructs like "Y = DFProcessI(DF,X)" and interactive mode of
interpreter.

This function allocates new array on each call, so it is significantly
slower than its 'non-interactive' counterpart, but it is more convenient
when you call it from command line.

IMPORTANT: this function is thread-unsafe and may modify internal
           structures of the model! You can not use same model object for
           parallel evaluation from several threads. Use dftsprocess()
           with independent thread-local buffers if you need thread-safe
           evaluation.

  -- ALGLIB --
     Copyright 28.02.2010 by Bochkanov Sergey
*************************************************************************/
void dfprocessi(const decisionforest &df, const real_1d_array &x, real_1d_array &y, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Relative classification error on the test set

INPUT PARAMETERS:
    DF      -   decision forest model
    XY      -   test set
    NPoints -   test set size

RESULT:
    percent of incorrectly classified cases.
    Zero if model solves regression task.

  -- ALGLIB --
     Copyright 16.02.2009 by Bochkanov Sergey
*************************************************************************/
double dfrelclserror(const decisionforest &df, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/*************************************************************************
RMS error on the test set

INPUT PARAMETERS:
    DF      -   decision forest model
    XY      -   test set
    NPoints -   test set size

RESULT:
    root mean square error. Its meaning for regression task is obvious.
    As for classification task, RMS error means error when estimating
    posterior probabilities.

  -- ALGLIB --
     Copyright 16.02.2009 by Bochkanov Sergey
*************************************************************************/
double dfrmserror(const decisionforest &df, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function serializes data structure to string/stream.

Important properties of s_out:
* it contains alphanumeric characters, dots, underscores, minus signs
* these symbols are grouped into words, which are separated by spaces and
  Windows-style (CR+LF) newlines
* although serializer uses spaces and CR+LF as separators, you can
  replace any separator character by arbitrary combination of spaces,
  tabs, Windows or Unix newlines. It allows flexible reformatting of the
  string in case you want to include it into a text or XML file. But you
  should not insert separators into the middle of the "words" nor should
  you change the case of letters.
* s_out can be freely moved between 32-bit and 64-bit systems, little and
  big endian machines, and so on. You can serialize structure on 32-bit
  machine and unserialize it on 64-bit one (or vice versa), or serialize
  it on SPARC and unserialize on x86. You can also serialize it in C++
  version of ALGLIB and unserialize it in C# one, and vice versa.
*************************************************************************/
void dfserialize(const decisionforest &obj, std::string &s_out);
void dfserialize(const decisionforest &obj, std::ostream &s_out);
/*************************************************************************
Inference using decision forest

Thread-safe processing using external buffer for temporaries.

This function is thread-safe (i.e. you can use same DF model from
multiple threads) as long as you use different buffer objects for
different threads.

INPUT PARAMETERS:
    DF      -   decision forest model
    Buf     -   buffer object, must be allocated specifically for this
                model with dfcreatebuffer().
    X       -   input vector, array[NVars]
    Y       -   possibly preallocated buffer, reallocated if too small

OUTPUT PARAMETERS:
    Y       -   result. Regression estimate when solving regression task,
                vector of posterior probabilities for classification task.

See also DFProcessI.

  -- ALGLIB --
     Copyright 16.02.2009 by Bochkanov Sergey
*************************************************************************/
void dftsprocess(const decisionforest &df, decisionforestbuffer &buf, const real_1d_array &x, real_1d_array &y, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function unserializes data structure from string/stream.
*************************************************************************/
void dfunserialize(const std::string &s_in, decisionforest &obj);
void dfunserialize(const std::istream &s_in, decisionforest &obj);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // The very simple classification example: classify points (x,y) in 2D space
        // as ones with x>=0 and ones with x<0 (y is ignored, but our classifier
        // has to figure that out).
        //
        // First, we have to create decision forest builder object, load dataset and
        // specify training settings. Our dataset is specified as matrix, which has
        // following format:
        //
        //     x0 y0 class0
        //     x1 y1 class1
        //     x2 y2 class2
        //     ....
        //
        // Here xi and yi can be any values (and in fact you can have any number of
        // independent variables), and classi MUST be integer number in [0,NClasses)
        // range. In our example we denote points with x>=0 as class #0, and
        // ones with negative xi as class #1.
        //
        // NOTE: if you want to solve regression problem, specify NClasses=1. In
        //       this case last column of xy can be any numeric value.
        //
        // For the sake of simplicity, our example includes only 4-point dataset.
        // However, random forests are able to cope with extremely large datasets
        // having millions of examples.
        //
        decisionforestbuilder builder;
        ae_int_t nvars = 2;
        ae_int_t nclasses = 2;
        ae_int_t npoints = 4;
        real_2d_array xy = "[[1,1,0],[1,-1,0],[-1,1,1],[-1,-1,1]]";

        dfbuildercreate(builder);
        dfbuildersetdataset(builder, xy, npoints, nvars, nclasses);

        // in our example we train decision forest using full sample - it allows us
        // to get zero classification error. However, in practical applications smaller
        // values are used: 50%, 25%, 5% or even less.
        dfbuildersetsubsampleratio(builder, 1.0);

        // we train random forest with just one tree; again, in real life situations
        // you typically need from 50 to 500 trees.
        ae_int_t ntrees = 1;
        decisionforest forest;
        dfreport rep;
        dfbuilderbuildrandomforest(builder, ntrees, forest, rep);

        // with such settings (100% of the training set is used) you can expect
        // zero classification error. Beautiful results, but remember - in real life
        // you do not need zero TRAINING SET error, you need good generalization.

        printf("%.4f\n", double(rep.relclserror)); // EXPECTED: 0.0000

        // now, let's perform some simple processing with dfprocess()
        real_1d_array x = "[+1,0]";
        real_1d_array y = "[]";
        dfprocess(forest, x, y);
        printf("%s\n", y.tostring(3).c_str()); // EXPECTED: [+1,0]

        // another option is to use dfprocess0() which returns just first component
        // of the output vector y. ideal for regression problems and binary classifiers.
        double y0;
        y0 = dfprocess0(forest, x);
        printf("%.3f\n", double(y0)); // EXPECTED: 1.000

        // finally, you can use dfclassify() which returns most probable class index (i.e. argmax y[i]).
        ae_int_t i;
        i = dfclassify(forest, x);
        printf("%d\n", int(i)); // EXPECTED: 0
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // The very simple regression example: model f(x,y)=x+y
        //
        // First, we have to create DF builder object, load dataset and specify
        // training settings. Our dataset is specified as matrix, which has following
        // format:
        //
        //     x0 y0 f0
        //     x1 y1 f1
        //     x2 y2 f2
        //     ....
        //
        // Here xi and yi can be any values, and fi is a dependent function value.
        //
        // NOTE: you can also solve classification problems with DF models, see
        //       another example for this unit.
        //
        decisionforestbuilder builder;
        ae_int_t nvars = 2;
        ae_int_t nclasses = 1;
        ae_int_t npoints = 4;
        real_2d_array xy = "[[1,1,+2],[1,-1,0],[-1,1,0],[-1,-1,-2]]";

        dfbuildercreate(builder);
        dfbuildersetdataset(builder, xy, npoints, nvars, nclasses);

        // in our example we train decision forest using full sample - it allows us
        // to get zero RMS error. However, in practical applications smaller
        // values are used: 50%, 25%, 5% or even less.
        dfbuildersetsubsampleratio(builder, 1.0);

        // we train random forest with just one tree; again, in real life situations
        // you typically need from 50 to 500 trees.
        ae_int_t ntrees = 1;
        decisionforest model;
        dfreport rep;
        dfbuilderbuildrandomforest(builder, ntrees, model, rep);

        // with such settings (full sample is used) you can expect zero RMS error on the
        // training set. Beautiful results, but remember - in real life you do not
        // need zero TRAINING SET error, you need good generalization.

        printf("%.4f\n", double(rep.rmserror)); // EXPECTED: 0.0000

        // now, let's perform some simple processing with dfprocess()
        real_1d_array x = "[+1,+1]";
        real_1d_array y = "[]";
        dfprocess(model, x, y);
        printf("%s\n", y.tostring(3).c_str()); // EXPECTED: [+2]

        // another option is to use dfprocess0() which returns just first component
        // of the output vector y. ideal for regression problems and binary classifiers.
        double y0;
        y0 = dfprocess0(model, x);
        printf("%.3f\n", double(y0)); // EXPECTED: 2.000

        // there also exist another convenience function, dfclassify(),
        // but it does not work for regression problems - it always returns -1.
        ae_int_t i;
        i = dfclassify(model, x);
        printf("%d\n", int(i)); // EXPECTED: -1
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

densesolverlsreport
densesolverreport
cmatrixlusolve
cmatrixlusolvefast
cmatrixlusolvem
cmatrixlusolvemfast
cmatrixmixedsolve
cmatrixmixedsolvem
cmatrixsolve
cmatrixsolvefast
cmatrixsolvem
cmatrixsolvemfast
hpdmatrixcholeskysolve
hpdmatrixcholeskysolvefast
hpdmatrixcholeskysolvem
hpdmatrixcholeskysolvemfast
hpdmatrixsolve
hpdmatrixsolvefast
hpdmatrixsolvem
hpdmatrixsolvemfast
rmatrixlusolve
rmatrixlusolvefast
rmatrixlusolvem
rmatrixlusolvemfast
rmatrixmixedsolve
rmatrixmixedsolvem
rmatrixsolve
rmatrixsolvefast
rmatrixsolvels
rmatrixsolvem
rmatrixsolvemfast
spdmatrixcholeskysolve
spdmatrixcholeskysolvefast
spdmatrixcholeskysolvem
spdmatrixcholeskysolvemfast
spdmatrixsolve
spdmatrixsolvefast
spdmatrixsolvem
spdmatrixsolvemfast
solve_complex Solving dense complex linear equations
solve_complex_m Solving complex matrix equations
solve_hpd Solving Hermitian positive definite linear equations
solve_ls Solving dense linear equations in the least squares sense
solve_real Solving dense linear equations
solve_real_m Solving dense linear matrix equations
solve_spd Solving symmetric positive definite linear equations
/************************************************************************* *************************************************************************/
class densesolverlsreport
{
public:
    densesolverlsreport();
    densesolverlsreport(const densesolverlsreport &rhs);
    densesolverlsreport& operator=(const densesolverlsreport &rhs);
    virtual ~densesolverlsreport();
    ae_int_t terminationtype;
    double r2;
    real_2d_array cx;
    ae_int_t n;
    ae_int_t k;
};
/*************************************************************************
*************************************************************************/
class densesolverreport
{
public:
    densesolverreport();
    densesolverreport(const densesolverreport &rhs);
    densesolverreport& operator=(const densesolverreport &rhs);
    virtual ~densesolverreport();
    ae_int_t terminationtype;
    double r1;
    double rinf;
};
/*************************************************************************
Complex dense linear solver for A*x=b, with a complex N*N matrix A given
by its LU decomposition, and N*1 vectors x and b. This is the
"slow-but-robust" version of the complex linear solver, with additional
features which add significant performance overhead. The faster version
is the CMatrixLUSolveFast() function.

Algorithm features:
* automatic detection of degenerate cases
* O(N^2) complexity
* condition number estimation

No iterative refinement is provided because the exact form of the
original matrix is not known to the subroutine. Use CMatrixSolve or
CMatrixMixedSolve if you need iterative refinement.

IMPORTANT: ! this function is NOT the most efficient linear solver
           ! provided by ALGLIB. It estimates the condition number of the
           ! linear system, which results in a 10-15x performance penalty
           ! compared with the "fast" version, which just calls the
           ! triangular solver.
           !
           ! This performance penalty is insignificant compared with the
           ! cost of a large LU decomposition. However, if you call this
           ! function many times for the same left side, this overhead
           ! BECOMES significant. It also becomes significant for
           ! small-scale problems.
           !
           ! In such cases we strongly recommend that you use the faster
           ! solver, the CMatrixLUSolveFast() function.

INPUT PARAMETERS:
    LUA     -   array[0..N-1,0..N-1], LU decomposition, CMatrixLU result
    P       -   array[0..N-1], pivots array, CMatrixLU result
    N       -   size of A
    B       -   array[0..N-1], right part

OUTPUT PARAMETERS:
    Rep     -   additional report, the following fields are set:
                * rep.terminationtype   >0 for success,
                                        -3 for a badly conditioned or
                                        exactly singular matrix
                * rep.r1    condition number in 1-norm
                * rep.rinf  condition number in inf-norm
    X       -   array[N], it contains:
                * rep.terminationtype>0  => solution
                * rep.terminationtype=-3 => filled by zeros

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void cmatrixlusolve(const complex_2d_array &lua, const integer_1d_array &p, const ae_int_t n, const complex_1d_array &b, complex_1d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);
void cmatrixlusolve(const complex_2d_array &lua, const integer_1d_array &p, const complex_1d_array &b, complex_1d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Complex dense linear solver for A*x=b, with an N*N complex matrix A given
by its LU decomposition, and N*1 vectors x and b. This is the fast,
lightweight version of the solver, which is significantly faster than
CMatrixLUSolve() but does not provide additional information (like
condition numbers).

Algorithm features:
* O(N^2) complexity
* no additional time-consuming features, just the triangular solver

INPUT PARAMETERS:
    LUA     -   array[0..N-1,0..N-1], LU decomposition, CMatrixLU result
    P       -   array[0..N-1], pivots array, CMatrixLU result
    N       -   size of A
    B       -   array[0..N-1], right part

OUTPUT PARAMETERS:
    B       -   array[N]:
                * result=true  => overwritten by solution
                * result=false => filled by zeros

NOTE: unlike CMatrixLUSolve(), this function does NOT check for
      near-degeneracy of the input matrix. It checks only for EXACT
      degeneracy, because this check is easy to do. As a result, very
      badly conditioned matrices may go unnoticed.

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
bool cmatrixlusolvefast(const complex_2d_array &lua, const integer_1d_array &p, const ae_int_t n, complex_1d_array &b, const xparams _xparams = alglib::xdefault);
bool cmatrixlusolvefast(const complex_2d_array &lua, const integer_1d_array &p, complex_1d_array &b, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Dense solver for A*X=B, with an N*N complex matrix A given by its LU
decomposition, and N*M matrices X and B (multiple right-hand sides).
"Slow-but-feature-rich" version of the solver.

Algorithm features:
* automatic detection of degenerate cases
* O(M*N^2) complexity
* condition number estimation

No iterative refinement is provided because the exact form of the
original matrix is not known to the subroutine. Use CMatrixSolve or
CMatrixMixedSolve if you need iterative refinement.

IMPORTANT: ! this function is NOT the most efficient linear solver
           ! provided by ALGLIB. It estimates the condition number of the
           ! linear system, which results in a significant performance
           ! penalty compared with the "fast" version, which just calls
           ! the triangular solver.
           !
           ! This performance penalty is especially apparent when you use
           ! ALGLIB parallel capabilities (condition number estimation is
           ! inherently sequential). It also becomes significant for
           ! small-scale problems.
           !
           ! In such cases we strongly recommend that you use the faster
           ! solver, the CMatrixLUSolveMFast() function.

INPUT PARAMETERS:
    LUA     -   array[0..N-1,0..N-1], LU decomposition, CMatrixLU result
    P       -   array[0..N-1], pivots array, CMatrixLU result
    N       -   size of A
    B       -   array[0..N-1,0..M-1], right part
    M       -   right part size

OUTPUT PARAMETERS:
    Rep     -   additional report, the following fields are set:
                * rep.terminationtype   >0 for success,
                                        -3 for a badly conditioned or
                                        exactly singular matrix
                * rep.r1    condition number in 1-norm
                * rep.rinf  condition number in inf-norm
    X       -   array[N,M], it contains:
                * rep.terminationtype>0  => solution
                * rep.terminationtype=-3 => filled by zeros

! FREE EDITION OF ALGLIB:
!
! The Free Edition of ALGLIB supports the following important features of
! this function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend that you read the 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual to find out how to activate SIMD support in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! The Commercial Edition of ALGLIB includes the following important
! improvements of this function:
! * high-performance native backend with the same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend that you read the 'Working with commercial version' section
! of the ALGLIB Reference Manual to find out how to use the performance-
! related features provided by the commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void cmatrixlusolvem(const complex_2d_array &lua, const integer_1d_array &p, const ae_int_t n, const complex_2d_array &b, const ae_int_t m, complex_2d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);
void cmatrixlusolvem(const complex_2d_array &lua, const integer_1d_array &p, const complex_2d_array &b, complex_2d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Dense solver for A*X=B, with an N*N complex matrix A given by its LU
decomposition, and N*M matrices X and B (multiple right-hand sides).
"Fast-but-lightweight" version of the solver.

Algorithm features:
* O(M*N^2) complexity
* no additional time-consuming features

INPUT PARAMETERS:
    LUA     -   array[0..N-1,0..N-1], LU decomposition, CMatrixLU result
    P       -   array[0..N-1], pivots array, CMatrixLU result
    N       -   size of A
    B       -   array[0..N-1,0..M-1], right part
    M       -   right part size

OUTPUT PARAMETERS:
    B       -   array[N,M]:
                * result=true  => overwritten by solution
                * result=false => filled by zeros

RETURNS:
    True, if the system was solved
    False, for an extremely badly conditioned or exactly singular system

! FREE EDITION OF ALGLIB:
!
! The Free Edition of ALGLIB supports the following important features of
! this function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend that you read the 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual to find out how to activate SIMD support in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! The Commercial Edition of ALGLIB includes the following important
! improvements of this function:
! * high-performance native backend with the same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend that you read the 'Working with commercial version' section
! of the ALGLIB Reference Manual to find out how to use the performance-
! related features provided by the commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
bool cmatrixlusolvemfast(const complex_2d_array &lua, const integer_1d_array &p, const ae_int_t n, complex_2d_array &b, const ae_int_t m, const xparams _xparams = alglib::xdefault);
bool cmatrixlusolvemfast(const complex_2d_array &lua, const integer_1d_array &p, complex_2d_array &b, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Dense solver. Same as RMatrixMixedSolve(), but for complex matrices.

Algorithm features:
* automatic detection of degenerate cases
* condition number estimation
* iterative refinement
* O(N^2) complexity

INPUT PARAMETERS:
    A       -   array[0..N-1,0..N-1], system matrix
    LUA     -   array[0..N-1,0..N-1], LU decomposition, CMatrixLU result
    P       -   array[0..N-1], pivots array, CMatrixLU result
    N       -   size of A
    B       -   array[0..N-1], right part

OUTPUT PARAMETERS:
    Rep     -   additional report, the following fields are set:
                * rep.terminationtype   >0 for success,
                                        -3 for a badly conditioned or
                                        exactly singular matrix
                * rep.r1    condition number in 1-norm
                * rep.rinf  condition number in inf-norm
    X       -   array[N], it contains:
                * rep.terminationtype>0  => solution
                * rep.terminationtype=-3 => filled by zeros

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void cmatrixmixedsolve(const complex_2d_array &a, const complex_2d_array &lua, const integer_1d_array &p, const ae_int_t n, const complex_1d_array &b, complex_1d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);
void cmatrixmixedsolve(const complex_2d_array &a, const complex_2d_array &lua, const integer_1d_array &p, const complex_1d_array &b, complex_1d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Dense solver. Same as RMatrixMixedSolveM(), but for complex matrices.

Algorithm features:
* automatic detection of degenerate cases
* condition number estimation
* iterative refinement
* O(M*N^2) complexity

INPUT PARAMETERS:
    A       -   array[0..N-1,0..N-1], system matrix
    LUA     -   array[0..N-1,0..N-1], LU decomposition, CMatrixLU result
    P       -   array[0..N-1], pivots array, CMatrixLU result
    N       -   size of A
    B       -   array[0..N-1,0..M-1], right part
    M       -   right part size

OUTPUT PARAMETERS:
    Rep     -   additional report, the following fields are set:
                * rep.terminationtype   >0 for success,
                                        -3 for a badly conditioned or
                                        exactly singular matrix
                * rep.r1    condition number in 1-norm
                * rep.rinf  condition number in inf-norm
    X       -   array[N,M], it contains:
                * rep.terminationtype>0  => solution
                * rep.terminationtype=-3 => filled by zeros

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void cmatrixmixedsolvem(const complex_2d_array &a, const complex_2d_array &lua, const integer_1d_array &p, const ae_int_t n, const complex_2d_array &b, const ae_int_t m, complex_2d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);
void cmatrixmixedsolvem(const complex_2d_array &a, const complex_2d_array &lua, const integer_1d_array &p, const complex_2d_array &b, complex_2d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Complex dense solver for A*x=b, with an N*N complex matrix A and N*1
complex vectors x and b. "Slow-but-feature-rich" version of the solver.

Algorithm features:
* automatic detection of degenerate cases
* condition number estimation
* iterative refinement
* O(N^3) complexity

IMPORTANT: ! this function is NOT the most efficient linear solver
           ! provided by ALGLIB. It estimates the condition number of the
           ! linear system and performs iterative refinement, which
           ! results in a significant performance penalty compared with
           ! the "fast" version, which just performs the LU decomposition
           ! and calls the triangular solver.
           !
           ! This performance penalty is especially visible in the
           ! multithreaded mode, because both condition number estimation
           ! and iterative refinement are inherently sequential
           ! calculations.
           !
           ! Thus, if you need high performance and are pretty sure that
           ! your system is well conditioned, we strongly recommend that
           ! you use the faster solver, the CMatrixSolveFast() function.

INPUT PARAMETERS:
    A       -   array[0..N-1,0..N-1], system matrix
    N       -   size of A
    B       -   array[0..N-1], right part

OUTPUT PARAMETERS:
    Rep     -   additional report, the following fields are set:
                * rep.terminationtype   >0 for success,
                                        -3 for a badly conditioned or
                                        exactly singular matrix
                * rep.r1    condition number in 1-norm
                * rep.rinf  condition number in inf-norm
    X       -   array[N], it contains:
                * rep.terminationtype>0  => solution
                * rep.terminationtype=-3 => filled by zeros

! FREE EDITION OF ALGLIB:
!
! The Free Edition of ALGLIB supports the following important features of
! this function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend that you read the 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual to find out how to activate SIMD support in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! The Commercial Edition of ALGLIB includes the following important
! improvements of this function:
! * high-performance native backend with the same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend that you read the 'Working with commercial version' section
! of the ALGLIB Reference Manual to find out how to use the performance-
! related features provided by the commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void cmatrixsolve(const complex_2d_array &a, const ae_int_t n, const complex_1d_array &b, complex_1d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);
void cmatrixsolve(const complex_2d_array &a, const complex_1d_array &b, complex_1d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Complex dense solver for A*x=b, with an N*N complex matrix A and N*1
complex vectors x and b. "Fast-but-lightweight" version of the solver.

Algorithm features:
* O(N^3) complexity
* no additional time-consuming features, just the triangular solver

INPUT PARAMETERS:
    A       -   array[0..N-1,0..N-1], system matrix
    N       -   size of A
    B       -   array[0..N-1], right part

OUTPUT PARAMETERS:
    B       -   array[N]:
                * result=true  => overwritten by solution
                * result=false => filled by zeros

RETURNS:
    True, if the system was solved
    False, for an extremely badly conditioned or exactly singular system

! FREE EDITION OF ALGLIB:
!
! The Free Edition of ALGLIB supports the following important features of
! this function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend that you read the 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual to find out how to activate SIMD support in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! The Commercial Edition of ALGLIB includes the following important
! improvements of this function:
! * high-performance native backend with the same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend that you read the 'Working with commercial version' section
! of the ALGLIB Reference Manual to find out how to use the performance-
! related features provided by the commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
bool cmatrixsolvefast(const complex_2d_array &a, const ae_int_t n, complex_1d_array &b, const xparams _xparams = alglib::xdefault);
bool cmatrixsolvefast(const complex_2d_array &a, complex_1d_array &b, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Complex dense solver for A*X=B, with an N*N complex matrix A and N*M
complex matrices X and B. "Slow-but-feature-rich" version which provides
additional functions, at the cost of slower performance. A faster version
may be invoked with the CMatrixSolveMFast() function.

Algorithm features:
* automatic detection of degenerate cases
* condition number estimation
* iterative refinement
* O(N^3+M*N^2) complexity

IMPORTANT: ! this function is NOT the most efficient linear solver
           ! provided by ALGLIB. It estimates the condition number of the
           ! linear system and performs iterative refinement, which
           ! results in a significant performance penalty compared with
           ! the "fast" version, which just performs the LU decomposition
           ! and calls the triangular solver.
           !
           ! This performance penalty is especially visible in the
           ! multithreaded mode, because both condition number estimation
           ! and iterative refinement are inherently sequential
           ! calculations.
           !
           ! Thus, if you need high performance and are pretty sure that
           ! your system is well conditioned, we strongly recommend that
           ! you use the faster solver, the CMatrixSolveMFast() function.

INPUT PARAMETERS:
    A       -   array[0..N-1,0..N-1], system matrix
    N       -   size of A
    B       -   array[0..N-1,0..M-1], right part
    M       -   right part size
    RFS     -   iterative refinement switch:
                * True  - refinement is used. Less performance, more
                          precision.
                * False - refinement is not used. More performance, less
                          precision.

OUTPUT PARAMETERS:
    Rep     -   additional report, the following fields are set:
                * rep.terminationtype   >0 for success,
                                        -3 for a badly conditioned or
                                        exactly singular matrix
                * rep.r1    condition number in 1-norm
                * rep.rinf  condition number in inf-norm
    X       -   array[N,M], it contains:
                * rep.terminationtype>0  => solution
                * rep.terminationtype=-3 => filled by zeros

! FREE EDITION OF ALGLIB:
!
! The Free Edition of ALGLIB supports the following important features of
! this function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend that you read the 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual to find out how to activate SIMD support in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! The Commercial Edition of ALGLIB includes the following important
! improvements of this function:
! * high-performance native backend with the same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend that you read the 'Working with commercial version' section
! of the ALGLIB Reference Manual to find out how to use the performance-
! related features provided by the commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void cmatrixsolvem(const complex_2d_array &a, const ae_int_t n, const complex_2d_array &b, const ae_int_t m, const bool rfs, complex_2d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);
void cmatrixsolvem(const complex_2d_array &a, const complex_2d_array &b, const bool rfs, complex_2d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Complex dense solver for A*X=B, with an N*N complex matrix A and N*M
complex matrices X and B. "Fast-but-lightweight" version which provides
just the triangular solver, without additional functions like iterative
refinement or condition number estimation.

Algorithm features:
* O(N^3+M*N^2) complexity
* no additional time-consuming functions

INPUT PARAMETERS:
    A       -   array[0..N-1,0..N-1], system matrix
    N       -   size of A
    B       -   array[0..N-1,0..M-1], right part
    M       -   right part size

OUTPUT PARAMETERS:
    B       -   array[N,M]:
                * result=true  => overwritten by solution
                * result=false => filled by zeros

RETURNS:
    True, if the system was solved
    False, for an extremely badly conditioned or exactly singular system

! FREE EDITION OF ALGLIB:
!
! The Free Edition of ALGLIB supports the following important features of
! this function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend that you read the 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual to find out how to activate SIMD support in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! The Commercial Edition of ALGLIB includes the following important
! improvements of this function:
! * high-performance native backend with the same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend that you read the 'Working with commercial version' section
! of the ALGLIB Reference Manual to find out how to use the performance-
! related features provided by the commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 16.03.2015 by Bochkanov Sergey
*************************************************************************/
bool cmatrixsolvemfast(const complex_2d_array &a, const ae_int_t n, complex_2d_array &b, const ae_int_t m, const xparams _xparams = alglib::xdefault);
bool cmatrixsolvemfast(const complex_2d_array &a, complex_2d_array &b, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Dense solver for A*x=b, with an N*N Hermitian positive definite matrix A
given by its Cholesky decomposition, and N*1 complex vectors x and b.
This is the "slow-but-feature-rich" version of the solver, which
estimates the condition number of the system.

Algorithm features:
* automatic detection of degenerate cases
* O(N^2) complexity
* condition number estimation
* matrix is represented by its upper or lower triangle

No iterative refinement is provided because such a partial representation
of the matrix does not allow efficient calculation of extra-precise
matrix-vector products for large matrices. Use RMatrixSolve or
RMatrixMixedSolve if you need iterative refinement.

IMPORTANT: ! this function is NOT the most efficient linear solver
           ! provided by ALGLIB. It estimates the condition number of the
           ! linear system, which results in a 10-15x performance penalty
           ! compared with the "fast" version, which just calls the
           ! triangular solver.
           !
           ! This performance penalty is insignificant compared with the
           ! cost of a large Cholesky decomposition. However, if you call
           ! this function many times for the same left side, this
           ! overhead BECOMES significant. It also becomes significant
           ! for small-scale problems (N<50).
           !
           ! In such cases we strongly recommend that you use the faster
           ! solver, the HPDMatrixCholeskySolveFast() function.

INPUT PARAMETERS:
    CHA     -   array[0..N-1,0..N-1], Cholesky decomposition,
                HPDMatrixCholesky result
    N       -   size of A
    IsUpper -   what half of CHA is provided
    B       -   array[0..N-1], right part

OUTPUT PARAMETERS:
    Rep     -   additional report, the following fields are set:
                * rep.terminationtype   >0 for success,
                                        -3 for a badly conditioned or
                                        indefinite matrix
                * rep.r1    condition number in 1-norm
                * rep.rinf  condition number in inf-norm
    X       -   array[N], it contains:
                * rep.terminationtype>0  => solution
                * rep.terminationtype=-3 => filled by zeros

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void hpdmatrixcholeskysolve(const complex_2d_array &cha, const ae_int_t n, const bool isupper, const complex_1d_array &b, complex_1d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);
void hpdmatrixcholeskysolve(const complex_2d_array &cha, const bool isupper, const complex_1d_array &b, complex_1d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Dense solver for A*x=b, with an N*N Hermitian positive definite matrix A
given by its Cholesky decomposition, and N*1 complex vectors x and b.
This is the "fast-but-lightweight" version of the solver.

Algorithm features:
* O(N^2) complexity
* matrix is represented by its upper or lower triangle
* no additional time-consuming features

INPUT PARAMETERS:
    CHA     -   array[0..N-1,0..N-1], Cholesky decomposition,
                HPDMatrixCholesky result
    N       -   size of A
    IsUpper -   what half of CHA is provided
    B       -   array[0..N-1], right part

OUTPUT PARAMETERS:
    B       -   array[N]:
                * result=true  => overwritten by solution
                * result=false => filled by zeros

RETURNS:
    True, if the system was solved
    False, for an extremely badly conditioned or indefinite system

  -- ALGLIB --
     Copyright 18.03.2015 by Bochkanov Sergey
*************************************************************************/
bool hpdmatrixcholeskysolvefast(const complex_2d_array &cha, const ae_int_t n, const bool isupper, complex_1d_array &b, const xparams _xparams = alglib::xdefault);
bool hpdmatrixcholeskysolvefast(const complex_2d_array &cha, const bool isupper, complex_1d_array &b, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Dense solver for A*X=B, with an N*N Hermitian positive definite matrix A
given by its Cholesky decomposition, and N*M complex matrices X and B.
This is the "slow-but-feature-rich" version of the solver which, in
addition to the solution, estimates the condition number of the system.

Algorithm features:
* automatic detection of degenerate cases
* O(M*N^2) complexity
* condition number estimation
* matrix is represented by its upper or lower triangle

No iterative refinement is provided because such a partial representation
of the matrix does not allow efficient calculation of extra-precise
matrix-vector products for large matrices. Use RMatrixSolve or
RMatrixMixedSolve if you need iterative refinement.

IMPORTANT: ! this function is NOT the most efficient linear solver
           ! provided by ALGLIB. It estimates the condition number of the
           ! linear system, which results in a significant performance
           ! penalty compared with the "fast" version, which just calls
           ! the triangular solver. The relative overhead depends on M
           ! (the larger M is, the smaller the overhead).
           !
           ! This performance penalty is insignificant compared with the
           ! cost of a large Cholesky decomposition. However, if you call
           ! this function many times for the same left side, this
           ! overhead BECOMES significant. It also becomes significant
           ! for small-scale problems (N<50).
           !
           ! In such cases we strongly recommend that you use the faster
           ! solver, the HPDMatrixCholeskySolveMFast() function.

INPUT PARAMETERS:
    CHA     -   array[N,N], Cholesky decomposition, HPDMatrixCholesky
                result
    N       -   size of CHA
    IsUpper -   what half of CHA is provided
    B       -   array[N,M], right part
    M       -   right part size

OUTPUT PARAMETERS:
    Rep     -   additional report, the following fields are set:
                * rep.terminationtype   >0 for success,
                                        -3 for a badly conditioned or
                                        indefinite matrix
                * rep.r1    condition number in 1-norm
                * rep.rinf  condition number in inf-norm
    X       -   array[N,M], it contains:
                * rep.terminationtype>0  => solution
                * rep.terminationtype=-3 => filled by zeros

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void hpdmatrixcholeskysolvem(const complex_2d_array &cha, const ae_int_t n, const bool isupper, const complex_2d_array &b, const ae_int_t m, complex_2d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);
void hpdmatrixcholeskysolvem(const complex_2d_array &cha, const bool isupper, const complex_2d_array &b, complex_2d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Dense solver for A*X=B, with an N*N Hermitian positive definite matrix A
given by its Cholesky decomposition, and N*M complex matrices X and B.
This is the "fast-but-lightweight" version of the solver.

Algorithm features:
* O(M*N^2) complexity
* matrix is represented by its upper or lower triangle
* no additional time-consuming features

INPUT PARAMETERS:
    CHA     -   array[N,N], Cholesky decomposition, HPDMatrixCholesky
                result
    N       -   size of CHA
    IsUpper -   what half of CHA is provided
    B       -   array[N,M], right part
    M       -   right part size

OUTPUT PARAMETERS:
    B       -   array[N,M]:
                * result=true  => overwritten by solution
                * result=false => filled by zeros

RETURNS:
    True, if the system was solved
    False, for an extremely badly conditioned or indefinite system

  -- ALGLIB --
     Copyright 18.03.2015 by Bochkanov Sergey
*************************************************************************/
bool hpdmatrixcholeskysolvemfast(const complex_2d_array &cha, const ae_int_t n, const bool isupper, complex_2d_array &b, const ae_int_t m, const xparams _xparams = alglib::xdefault);
bool hpdmatrixcholeskysolvemfast(const complex_2d_array &cha, const bool isupper, complex_2d_array &b, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Dense solver for A*x=b, with an N*N Hermitian positive definite matrix A,
and N*1 complex vectors x and b. "Slow-but-feature-rich" version of the
solver.

Algorithm features:
* automatic detection of degenerate cases
* condition number estimation
* O(N^3) complexity
* matrix is represented by its upper or lower triangle

No iterative refinement is provided because such a partial representation
of the matrix does not allow efficient calculation of extra-precise
matrix-vector products for large matrices. Use RMatrixSolve or
RMatrixMixedSolve if you need iterative refinement.

IMPORTANT: ! this function is NOT the most efficient linear solver
           ! provided by ALGLIB. It estimates the condition number of the
           ! linear system, which results in a significant performance
           ! penalty compared with the "fast" version, which just
           ! performs the Cholesky decomposition and calls the triangular
           ! solver.
           !
           ! This performance penalty is especially visible in the
           ! multithreaded mode, because both condition number estimation
           ! and iterative refinement are inherently sequential
           ! calculations.
           !
           ! Thus, if you need high performance and are pretty sure that
           ! your system is well conditioned, we strongly recommend that
           ! you use the faster solver, the HPDMatrixSolveFast() function.

INPUT PARAMETERS:
    A       -   array[0..N-1,0..N-1], system matrix
    N       -   size of A
    IsUpper -   what half of A is provided
    B       -   array[0..N-1], right part

OUTPUT PARAMETERS:
    Rep     -   same as in RMatrixSolve
    X       -   same as in RMatrixSolve

! FREE EDITION OF ALGLIB:
!
! The Free Edition of ALGLIB supports the following important features of
! this function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend that you read the 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual to find out how to activate SIMD support in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! The Commercial Edition of ALGLIB includes the following important
! improvements of this function:
! * high-performance native backend with the same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend that you read the 'Working with commercial version' section
! of the ALGLIB Reference Manual to find out how to use the performance-
! related features provided by the commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void hpdmatrixsolve(const complex_2d_array &a, const ae_int_t n, const bool isupper, const complex_1d_array &b, complex_1d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);
void hpdmatrixsolve(const complex_2d_array &a, const bool isupper, const complex_1d_array &b, complex_1d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Dense solver for A*x=b with an N*N Hermitian positive definite matrix A and
N*1 complex vectors x and b. This is the "fast-but-lightweight" version of
the solver, without additional functions.

Algorithm features:
* O(N^3) complexity
* matrix is represented by its upper or lower triangle
* no additional time-consuming functions

INPUT PARAMETERS
    A       -   array[0..N-1,0..N-1], system matrix
    N       -   size of A
    IsUpper -   which half of A is provided
    B       -   array[0..N-1], right part

OUTPUT PARAMETERS
    B       -   array[0..N-1]:
                * result=true  => overwritten by the solution
                * result=false => filled by zeros

RETURNS:
    True, if the system was solved
    False, for an extremely badly conditioned or indefinite system

! FREE EDITION OF ALGLIB:
!
! The Free Edition of ALGLIB supports the following important features of
! this function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend reading the 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual to find out how to activate SIMD support in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! The Commercial Edition of ALGLIB includes the following important
! improvements of this function:
! * high-performance native backend with the same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend reading the 'Working with commercial version' section of the
! ALGLIB Reference Manual to find out how to use the performance-related
! features provided by the commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 17.03.2015 by Bochkanov Sergey
*************************************************************************/
bool hpdmatrixsolvefast(const complex_2d_array &a, const ae_int_t n, const bool isupper, complex_1d_array &b, const xparams _xparams = alglib::xdefault);
bool hpdmatrixsolvefast(const complex_2d_array &a, const bool isupper, complex_1d_array &b, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Dense solver for A*X=B with an N*N Hermitian positive definite matrix A and
N*M complex matrices X and B. This is the "slow-but-feature-rich" version
of the solver.

Algorithm features:
* automatic detection of degenerate cases
* condition number estimation
* O(N^3+M*N^2) complexity
* matrix is represented by its upper or lower triangle

No iterative refinement is provided because such a partial representation
of the matrix does not allow efficient calculation of extra-precise
matrix-vector products for large matrices. Use RMatrixSolve or
RMatrixMixedSolve if you need iterative refinement.

IMPORTANT: ! this function is NOT the most efficient linear solver provided
           ! by ALGLIB. It estimates the condition number of the linear
           ! system, which results in a significant performance penalty
           ! compared with the "fast" version, which just calls the
           ! triangular solver.
           !
           ! This performance penalty is especially apparent when you use
           ! ALGLIB parallel capabilities (condition number estimation is
           ! inherently sequential). It also becomes significant for
           ! small-scale problems (N<100).
           !
           ! In such cases we strongly recommend the faster solver, the
           ! HPDMatrixSolveMFast() function.

INPUT PARAMETERS
    A       -   array[0..N-1,0..N-1], system matrix
    N       -   size of A
    IsUpper -   which half of A is provided
    B       -   array[0..N-1,0..M-1], right part
    M       -   right part size

OUTPUT PARAMETERS
    Rep     -   same as in RMatrixSolve
    X       -   same as in RMatrixSolve

! FREE EDITION OF ALGLIB:
!
! The Free Edition of ALGLIB supports the following important features of
! this function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend reading the 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual to find out how to activate SIMD support in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! The Commercial Edition of ALGLIB includes the following important
! improvements of this function:
! * high-performance native backend with the same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend reading the 'Working with commercial version' section of the
! ALGLIB Reference Manual to find out how to use the performance-related
! features provided by the commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void hpdmatrixsolvem(const complex_2d_array &a, const ae_int_t n, const bool isupper, const complex_2d_array &b, const ae_int_t m, complex_2d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);
void hpdmatrixsolvem(const complex_2d_array &a, const bool isupper, const complex_2d_array &b, complex_2d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Dense solver for A*X=B with an N*N Hermitian positive definite matrix A and
N*M complex matrices X and B. This is the "fast-but-lightweight" version of
the solver.

Algorithm features:
* O(N^3+M*N^2) complexity
* matrix is represented by its upper or lower triangle
* no additional time-consuming features like condition number estimation

INPUT PARAMETERS
    A       -   array[0..N-1,0..N-1], system matrix
    N       -   size of A
    IsUpper -   which half of A is provided
    B       -   array[0..N-1,0..M-1], right part
    M       -   right part size

OUTPUT PARAMETERS
    B       -   array[0..N-1,0..M-1]:
                * result=true  => overwritten by the solution
                * result=false => filled by zeros

RETURNS:
    True, if the system was solved
    False, for an extremely badly conditioned or indefinite system

! FREE EDITION OF ALGLIB:
!
! The Free Edition of ALGLIB supports the following important features of
! this function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend reading the 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual to find out how to activate SIMD support in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! The Commercial Edition of ALGLIB includes the following important
! improvements of this function:
! * high-performance native backend with the same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend reading the 'Working with commercial version' section of the
! ALGLIB Reference Manual to find out how to use the performance-related
! features provided by the commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 17.03.2015 by Bochkanov Sergey
*************************************************************************/
bool hpdmatrixsolvemfast(const complex_2d_array &a, const ae_int_t n, const bool isupper, complex_2d_array &b, const ae_int_t m, const xparams _xparams = alglib::xdefault);
bool hpdmatrixsolvemfast(const complex_2d_array &a, const bool isupper, complex_2d_array &b, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Dense solver. This subroutine solves a system A*x=b, where A is an NxN
non-degenerate real matrix given by its LU decomposition, and x and b are
real vectors. This is the "slow-but-robust" version of the linear LU-based
solver. A faster version is the RMatrixLUSolveFast() function.

Algorithm features:
* automatic detection of degenerate cases
* O(N^2) complexity
* condition number estimation

No iterative refinement is provided because the exact form of the original
matrix is not known to the subroutine. Use RMatrixSolve or
RMatrixMixedSolve if you need iterative refinement.

IMPORTANT: ! this function is NOT the most efficient linear solver provided
           ! by ALGLIB. It estimates the condition number of the linear
           ! system, which results in a 10-15x performance penalty compared
           ! with the "fast" version, which just calls the triangular
           ! solver.
           !
           ! This performance penalty is insignificant compared with the
           ! cost of a large LU decomposition. However, if you call this
           ! function many times for the same left side, this overhead
           ! BECOMES significant. It also becomes significant for
           ! small-scale problems.
           !
           ! In such cases we strongly recommend the faster solver, the
           ! RMatrixLUSolveFast() function.

INPUT PARAMETERS
    LUA     -   array[N,N], LU decomposition, RMatrixLU result
    P       -   array[N], pivots array, RMatrixLU result
    N       -   size of A
    B       -   array[N], right part

OUTPUT PARAMETERS
    Rep     -   additional report, the following fields are set:
                * rep.terminationtype   >0 for success
                                        -3 for a badly conditioned matrix
                * rep.r1    condition number in the 1-norm
                * rep.rinf  condition number in the inf-norm
    X       -   array[N], it contains:
                * rep.terminationtype>0  => solution
                * rep.terminationtype=-3 => filled by zeros

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void rmatrixlusolve(const real_2d_array &lua, const integer_1d_array &p, const ae_int_t n, const real_1d_array &b, real_1d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);
void rmatrixlusolve(const real_2d_array &lua, const integer_1d_array &p, const real_1d_array &b, real_1d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Dense solver. This subroutine solves a system A*x=b, where A is an NxN
non-degenerate real matrix given by its LU decomposition, and x and b are
real vectors. This is the "fast-without-any-checks" version of the linear
LU-based solver. A slower but more robust version is the RMatrixLUSolve()
function.

Algorithm features:
* O(N^2) complexity
* fast algorithm without ANY additional checks, just the triangular solver

INPUT PARAMETERS
    LUA     -   array[0..N-1,0..N-1], LU decomposition, RMatrixLU result
    P       -   array[0..N-1], pivots array, RMatrixLU result
    N       -   size of A
    B       -   array[0..N-1], right part

OUTPUT PARAMETERS
    B       -   array[N]:
                * result=true  => overwritten by the solution
                * result=false => filled by zeros

RETURNS:
    True, if the system was solved
    False, for an extremely badly conditioned or exactly singular system

  -- ALGLIB --
     Copyright 18.03.2015 by Bochkanov Sergey
*************************************************************************/
bool rmatrixlusolvefast(const real_2d_array &lua, const integer_1d_array &p, const ae_int_t n, real_1d_array &b, const xparams _xparams = alglib::xdefault);
bool rmatrixlusolvefast(const real_2d_array &lua, const integer_1d_array &p, real_1d_array &b, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Dense solver. Similar to RMatrixLUSolve(), but solves a task with multiple
right parts (where b and x are NxM matrices). This is the "robust-but-slow"
version of the LU-based solver, which performs additional checks for
non-degeneracy of the inputs (condition number estimation). If you need the
best performance, use the "fast-without-any-checks" version,
RMatrixLUSolveMFast().

Algorithm features:
* automatic detection of degenerate cases
* O(M*N^2) complexity
* condition number estimation

No iterative refinement is provided because the exact form of the original
matrix is not known to the subroutine. Use RMatrixSolve or
RMatrixMixedSolve if you need iterative refinement.

IMPORTANT: ! this function is NOT the most efficient linear solver provided
           ! by ALGLIB. It estimates the condition number of the linear
           ! system, which results in a significant performance penalty
           ! compared with the "fast" version, which just calls the
           ! triangular solver.
           !
           ! This performance penalty is especially apparent when you use
           ! ALGLIB parallel capabilities (condition number estimation is
           ! inherently sequential). It also becomes significant for
           ! small-scale problems.
           !
           ! In such cases we strongly recommend the faster solver, the
           ! RMatrixLUSolveMFast() function.

INPUT PARAMETERS
    LUA     -   array[N,N], LU decomposition, RMatrixLU result
    P       -   array[N], pivots array, RMatrixLU result
    N       -   size of A
    B       -   array[0..N-1,0..M-1], right part
    M       -   right part size

OUTPUT PARAMETERS
    Rep     -   additional report, the following fields are set:
                * rep.terminationtype   >0 for success
                                        -3 for a badly conditioned matrix
                * rep.r1    condition number in the 1-norm
                * rep.rinf  condition number in the inf-norm
    X       -   array[N,M], it contains:
                * rep.terminationtype>0  => solution
                * rep.terminationtype=-3 => filled by zeros

! FREE EDITION OF ALGLIB:
!
! The Free Edition of ALGLIB supports the following important features of
! this function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend reading the 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual to find out how to activate SIMD support in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! The Commercial Edition of ALGLIB includes the following important
! improvements of this function:
! * high-performance native backend with the same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend reading the 'Working with commercial version' section of the
! ALGLIB Reference Manual to find out how to use the performance-related
! features provided by the commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void rmatrixlusolvem(const real_2d_array &lua, const integer_1d_array &p, const ae_int_t n, const real_2d_array &b, const ae_int_t m, real_2d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);
void rmatrixlusolvem(const real_2d_array &lua, const integer_1d_array &p, const real_2d_array &b, real_2d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Dense solver. Similar to RMatrixLUSolve(), but solves a task with multiple
right parts, where b and x are NxM matrices. This is the
"fast-without-any-checks" version of the LU-based solver. It does not
estimate the condition number of the system, so it is extremely fast. If
you need better detection of near-degenerate cases, use the
RMatrixLUSolveM() function.

Algorithm features:
* O(M*N^2) complexity
* fast algorithm without ANY additional checks, just the triangular solver

INPUT PARAMETERS:
    LUA     -   array[0..N-1,0..N-1], LU decomposition, RMatrixLU result
    P       -   array[0..N-1], pivots array, RMatrixLU result
    N       -   size of A
    B       -   array[0..N-1,0..M-1], right part
    M       -   right part size

OUTPUT PARAMETERS:
    B       -   array[N,M]:
                * result=true  => overwritten by the solution
                * result=false => filled by zeros

RETURNS:
    True, if the system was solved
    False, for an extremely badly conditioned or exactly singular system

! FREE EDITION OF ALGLIB:
!
! The Free Edition of ALGLIB supports the following important features of
! this function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend reading the 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual to find out how to activate SIMD support in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! The Commercial Edition of ALGLIB includes the following important
! improvements of this function:
! * high-performance native backend with the same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend reading the 'Working with commercial version' section of the
! ALGLIB Reference Manual to find out how to use the performance-related
! features provided by the commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 18.03.2015 by Bochkanov Sergey
*************************************************************************/
bool rmatrixlusolvemfast(const real_2d_array &lua, const integer_1d_array &p, const ae_int_t n, real_2d_array &b, const ae_int_t m, const xparams _xparams = alglib::xdefault);
bool rmatrixlusolvemfast(const real_2d_array &lua, const integer_1d_array &p, real_2d_array &b, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Dense solver. This subroutine solves a system A*x=b, where BOTH THE
ORIGINAL A AND ITS LU DECOMPOSITION ARE KNOWN. You can use it if for some
reason you have both A and its LU decomposition.

Algorithm features:
* automatic detection of degenerate cases
* condition number estimation
* iterative refinement
* O(N^2) complexity

INPUT PARAMETERS
    A       -   array[0..N-1,0..N-1], system matrix
    LUA     -   array[0..N-1,0..N-1], LU decomposition, RMatrixLU result
    P       -   array[0..N-1], pivots array, RMatrixLU result
    N       -   size of A
    B       -   array[0..N-1], right part

OUTPUT PARAMETERS
    Rep     -   additional report, the following fields are set:
                * rep.terminationtype   >0 for success
                                        -3 for a badly conditioned matrix
                * rep.r1    condition number in the 1-norm
                * rep.rinf  condition number in the inf-norm
    X       -   array[N], it contains:
                * rep.terminationtype>0  => solution
                * rep.terminationtype=-3 => filled by zeros

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void rmatrixmixedsolve(const real_2d_array &a, const real_2d_array &lua, const integer_1d_array &p, const ae_int_t n, const real_1d_array &b, real_1d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);
void rmatrixmixedsolve(const real_2d_array &a, const real_2d_array &lua, const integer_1d_array &p, const real_1d_array &b, real_1d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Dense solver. Similar to RMatrixMixedSolve(), but solves a task with
multiple right parts (where b and x are NxM matrices).

Algorithm features:
* automatic detection of degenerate cases
* condition number estimation
* iterative refinement
* O(M*N^2) complexity

INPUT PARAMETERS
    A       -   array[0..N-1,0..N-1], system matrix
    LUA     -   array[0..N-1,0..N-1], LU decomposition, RMatrixLU result
    P       -   array[0..N-1], pivots array, RMatrixLU result
    N       -   size of A
    B       -   array[0..N-1,0..M-1], right part
    M       -   right part size

OUTPUT PARAMETERS
    Rep     -   additional report, the following fields are set:
                * rep.terminationtype   >0 for success
                                        -3 for a badly conditioned matrix
                * rep.r1    condition number in the 1-norm
                * rep.rinf  condition number in the inf-norm
    X       -   array[N,M], it contains:
                * rep.terminationtype>0  => solution
                * rep.terminationtype=-3 => filled by zeros

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void rmatrixmixedsolvem(const real_2d_array &a, const real_2d_array &lua, const integer_1d_array &p, const ae_int_t n, const real_2d_array &b, const ae_int_t m, real_2d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);
void rmatrixmixedsolvem(const real_2d_array &a, const real_2d_array &lua, const integer_1d_array &p, const real_2d_array &b, real_2d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Dense solver for A*x=b with an N*N real matrix A and N*1 real vectors x and
b. This is the "slow-but-feature-rich" version of the linear solver. A
faster version is the RMatrixSolveFast() function.

Algorithm features:
* automatic detection of degenerate cases
* condition number estimation
* iterative refinement
* O(N^3) complexity

IMPORTANT: ! this function is NOT the most efficient linear solver provided
           ! by ALGLIB. It estimates the condition number of the linear
           ! system and performs iterative refinement, which results in a
           ! significant performance penalty compared with the "fast"
           ! version, which just performs an LU decomposition and calls the
           ! triangular solver.
           !
           ! This performance penalty is especially visible in the
           ! multithreaded mode, because both condition number estimation
           ! and iterative refinement are inherently sequential
           ! calculations. It is also very significant on small matrices.
           !
           ! Thus, if you need high performance and are pretty sure that
           ! your system is well conditioned, we strongly recommend the
           ! faster solver, the RMatrixSolveFast() function.

INPUT PARAMETERS
    A       -   array[0..N-1,0..N-1], system matrix
    N       -   size of A
    B       -   array[0..N-1], right part

OUTPUT PARAMETERS
    Rep     -   additional report, the following fields are set:
                * rep.terminationtype   >0 for success
                                        -3 for a badly conditioned matrix
                * rep.r1    condition number in the 1-norm
                * rep.rinf  condition number in the inf-norm
    X       -   array[N], it contains:
                * rep.terminationtype>0  => solution
                * rep.terminationtype=-3 => filled by zeros

! FREE EDITION OF ALGLIB:
!
! The Free Edition of ALGLIB supports the following important features of
! this function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend reading the 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual to find out how to activate SIMD support in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! The Commercial Edition of ALGLIB includes the following important
! improvements of this function:
! * high-performance native backend with the same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend reading the 'Working with commercial version' section of the
! ALGLIB Reference Manual to find out how to use the performance-related
! features provided by the commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void rmatrixsolve(const real_2d_array &a, const ae_int_t n, const real_1d_array &b, real_1d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);
void rmatrixsolve(const real_2d_array &a, const real_1d_array &b, real_1d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Dense solver. This subroutine solves a system A*x=b, where A is an NxN
non-degenerate real matrix, and x and b are vectors. This is a "fast"
version of the linear solver which does NOT provide any additional
functions like condition number estimation or iterative refinement.

Algorithm features:
* efficient algorithm, O(N^3) complexity
* no performance overhead from additional functionality

If you need condition number estimation or iterative refinement, use the
more feature-rich version, RMatrixSolve().

INPUT PARAMETERS
    A       -   array[0..N-1,0..N-1], system matrix
    N       -   size of A
    B       -   array[0..N-1], right part

OUTPUT PARAMETERS
    B       -   array[N]:
                * result=true  => overwritten by the solution
                * result=false => filled by zeros

RETURNS:
    True, if the system was solved
    False, for an extremely badly conditioned or exactly singular system

! FREE EDITION OF ALGLIB:
!
! The Free Edition of ALGLIB supports the following important features of
! this function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend reading the 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual to find out how to activate SIMD support in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! The Commercial Edition of ALGLIB includes the following important
! improvements of this function:
! * high-performance native backend with the same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend reading the 'Working with commercial version' section of the
! ALGLIB Reference Manual to find out how to use the performance-related
! features provided by the commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 16.03.2015 by Bochkanov Sergey
*************************************************************************/
bool rmatrixsolvefast(const real_2d_array &a, const ae_int_t n, real_1d_array &b, const xparams _xparams = alglib::xdefault);
bool rmatrixsolvefast(const real_2d_array &a, real_1d_array &b, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Dense solver. This subroutine finds a solution of the linear system A*X=B
with a non-square, possibly degenerate A. The system is solved in the least
squares sense, and the general least squares solution X = X0 + CX*y which
minimizes |A*X-B| is returned. If A is non-degenerate, the solution in the
usual sense is returned.

Algorithm features:
* automatic detection (and correct handling!) of degenerate cases
* iterative refinement
* O(N^3) complexity

INPUT PARAMETERS
    A       -   array[0..NRows-1,0..NCols-1], system matrix
    NRows   -   vertical size of A
    NCols   -   horizontal size of A
    B       -   array[0..NCols-1], right part
    Threshold-  a number in [0,1]. Singular values beyond Threshold*Largest
                are considered zero. Set it to 0.0, if you don't understand
                what it means, and the solver will choose a good value on
                its own.

OUTPUT PARAMETERS
    Rep     -   solver report, see below for more info
    X       -   array[0..N-1,0..M-1], it contains:
                * the solution of A*X=B (even for singular A)
                * zeros, if the SVD subroutine failed

SOLVER REPORT

The subroutine sets the following fields of the Rep structure:
* TerminationType is set to:
  * -4 for SVD failure
  * >0 for success
* R2        reciprocal of the condition number: 1/cond(A), 2-norm.
* N         = NCols
* K         dim(Null(A))
* CX        array[0..N-1,0..K-1], kernel of A. Columns of CX store such
            vectors that A*CX[i]=0.

! FREE EDITION OF ALGLIB:
!
! The Free Edition of ALGLIB supports the following important features of
! this function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend reading the 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual to find out how to activate SIMD support in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! The Commercial Edition of ALGLIB includes the following important
! improvements of this function:
! * high-performance native backend with the same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend reading the 'Working with commercial version' section of the
! ALGLIB Reference Manual to find out how to use the performance-related
! features provided by the commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 24.08.2009 by Bochkanov Sergey
*************************************************************************/
void rmatrixsolvels(const real_2d_array &a, const ae_int_t nrows, const ae_int_t ncols, const real_1d_array &b, const double threshold, real_1d_array &x, densesolverlsreport &rep, const xparams _xparams = alglib::xdefault);
void rmatrixsolvels(const real_2d_array &a, const real_1d_array &b, const double threshold, real_1d_array &x, densesolverlsreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Dense solver. Similar to RMatrixSolve(), but solves a task with multiple
right parts (where b and x are NxM matrices). This is the
"slow-but-robust" version of the linear solver with additional
functionality like condition number estimation. There also exists a faster
version, RMatrixSolveMFast().

Algorithm features:
* automatic detection of degenerate cases
* condition number estimation
* optional iterative refinement
* O(N^3+M*N^2) complexity

IMPORTANT: ! this function is NOT the most efficient linear solver provided
           ! by ALGLIB. It estimates the condition number of the linear
           ! system and performs iterative refinement, which results in a
           ! significant performance penalty compared with the "fast"
           ! version, which just performs an LU decomposition and calls the
           ! triangular solver.
           !
           ! This performance penalty is especially visible in the
           ! multithreaded mode, because both condition number estimation
           ! and iterative refinement are inherently sequential
           ! calculations. It is also very significant on small matrices.
           !
           ! Thus, if you need high performance and are pretty sure that
           ! your system is well conditioned, we strongly recommend the
           ! faster solver, the RMatrixSolveMFast() function.

INPUT PARAMETERS
    A       -   array[0..N-1,0..N-1], system matrix
    N       -   size of A
    B       -   array[0..N-1,0..M-1], right part
    M       -   right part size
    RFS     -   iterative refinement switch:
                * True  - refinement is used.
                          Less performance, more precision.
                * False - refinement is not used.
                          More performance, less precision.

OUTPUT PARAMETERS
    Rep     -   additional report, the following fields are set:
                * rep.terminationtype   >0 for success
                                        -3 for a badly conditioned or
                                           exactly singular matrix
                * rep.r1    condition number in the 1-norm
                * rep.rinf  condition number in the inf-norm
    X       -   array[N], it contains:
                * rep.terminationtype>0  => solution
                * rep.terminationtype=-3 => filled by zeros

! FREE EDITION OF ALGLIB:
!
! The Free Edition of ALGLIB supports the following important features of
! this function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend reading the 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual to find out how to activate SIMD support in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! The Commercial Edition of ALGLIB includes the following important
! improvements of this function:
! * high-performance native backend with the same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend reading the 'Working with commercial version' section of the
! ALGLIB Reference Manual to find out how to use the performance-related
! features provided by the commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void rmatrixsolvem(const real_2d_array &a, const ae_int_t n, const real_2d_array &b, const ae_int_t m, const bool rfs, real_2d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);
void rmatrixsolvem(const real_2d_array &a, const real_2d_array &b, const bool rfs, real_2d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Dense solver.

Similar to RMatrixSolve() but solves a task with multiple right parts
(where b and x are NxM matrices).

This is the "fast" version of the linear solver which does NOT offer
additional functions like condition number estimation or iterative
refinement.

Algorithm features:
* O(N^3+M*N^2) complexity
* no additional functionality, highest performance

INPUT PARAMETERS
    A       -   array[0..N-1,0..N-1], system matrix
    N       -   size of A
    B       -   array[0..N-1,0..M-1], right part
    M       -   right part size

OUTPUT PARAMETERS
    B       -   array[N,M]:
                * result=true  => overwritten by solution
                * result=false => filled by zeros

RETURNS:
    True for a well-conditioned matrix
    False for an extremely badly conditioned or exactly singular problem

! FREE EDITION OF ALGLIB:
!
! Free Edition of ALGLIB supports the following important features for
! this function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend that you read the 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual to find out how to activate SIMD support in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes the following important
! improvements of this function:
! * high-performance native backend with the same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend that you read the 'Working with commercial version' section
! of the ALGLIB Reference Manual to find out how to use performance-related
! features provided by the commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
bool rmatrixsolvemfast(const real_2d_array &a, const ae_int_t n, real_2d_array &b, const ae_int_t m, const xparams _xparams = alglib::xdefault);
bool rmatrixsolvemfast(const real_2d_array &a, real_2d_array &b, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Dense solver for A*x=b with an N*N symmetric positive definite matrix A
given by its Cholesky decomposition, and N*1 real vectors x and b. This is
the "slow-but-feature-rich" version of the solver which, in addition to
the solution, performs condition number estimation.

Algorithm features:
* automatic detection of degenerate cases
* O(N^2) complexity
* condition number estimation
* matrix is represented by its upper or lower triangle

No iterative refinement is provided because such a partial representation
of the matrix does not allow efficient calculation of extra-precise
matrix-vector products for large matrices. Use RMatrixSolve or
RMatrixMixedSolve if you need iterative refinement.

IMPORTANT: ! this function is NOT the most efficient linear solver
           ! provided by ALGLIB. It estimates the condition number of the
           ! linear system, which results in a 10-15x performance penalty
           ! when compared with the "fast" version which just calls the
           ! triangular solver.
           !
           ! This performance penalty is insignificant when compared with
           ! the cost of a large LU decomposition. However, if you call
           ! this function many times for the same left side, this
           ! overhead BECOMES significant. It also becomes significant for
           ! small-scale problems (N<50).
           !
           ! In such cases we strongly recommend that you use the faster
           ! solver, the SPDMatrixCholeskySolveFast() function.

INPUT PARAMETERS
    CHA     -   array[N,N], Cholesky decomposition, SPDMatrixCholesky result
    N       -   size of A
    IsUpper -   what half of CHA is provided
    B       -   array[N], right part

OUTPUT PARAMETERS
    Rep     -   additional report, following fields are set:
                * rep.terminationtype   >0 for success
                                        -3 for badly conditioned or
                                           indefinite matrix
                * rep.r1                condition number in 1-norm
                * rep.rinf              condition number in inf-norm
    X       -   array[N]:
                * rep.terminationtype>0  => solution
                * rep.terminationtype=-3 => filled by zeros

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void spdmatrixcholeskysolve(const real_2d_array &cha, const ae_int_t n, const bool isupper, const real_1d_array &b, real_1d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);
void spdmatrixcholeskysolve(const real_2d_array &cha, const bool isupper, const real_1d_array &b, real_1d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Dense solver for A*x=b with an N*N symmetric positive definite matrix A
given by its Cholesky decomposition, and N*1 real vectors x and b. This is
the "fast-but-lightweight" version of the solver.

Algorithm features:
* O(N^2) complexity
* matrix is represented by its upper or lower triangle
* no additional features

INPUT PARAMETERS
    CHA     -   array[N,N], Cholesky decomposition, SPDMatrixCholesky result
    N       -   size of A
    IsUpper -   what half of CHA is provided
    B       -   array[N], right part

OUTPUT PARAMETERS
    B       -   array[N]:
                * result=true  => overwritten by solution
                * result=false => filled by zeros

RETURNS:
    True, if the system was solved
    False, for an extremely badly conditioned or exactly singular system

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
bool spdmatrixcholeskysolvefast(const real_2d_array &cha, const ae_int_t n, const bool isupper, real_1d_array &b, const xparams _xparams = alglib::xdefault);
bool spdmatrixcholeskysolvefast(const real_2d_array &cha, const bool isupper, real_1d_array &b, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Dense solver for A*X=B with an N*N symmetric positive definite matrix A
given by its Cholesky decomposition, and N*M matrices X and B. It is the
"slow-but-feature-rich" version of the solver which estimates the
condition number of the system.

Algorithm features:
* automatic detection of degenerate cases
* O(M*N^2) complexity
* condition number estimation
* matrix is represented by its upper or lower triangle

No iterative refinement is provided because such a partial representation
of the matrix does not allow efficient calculation of extra-precise
matrix-vector products for large matrices. Use RMatrixSolve or
RMatrixMixedSolve if you need iterative refinement.

IMPORTANT: ! this function is NOT the most efficient linear solver
           ! provided by ALGLIB. It estimates the condition number of the
           ! linear system, which results in a significant performance
           ! penalty when compared with the "fast" version which just
           ! calls the triangular solver. The relative overhead depends on
           ! M (the larger M is, the less significant the overhead).
           !
           ! This performance penalty is insignificant when compared with
           ! the cost of a large LU decomposition. However, if you call
           ! this function many times for the same left side, this
           ! overhead BECOMES significant. It also becomes significant for
           ! small-scale problems (N<50).
           !
           ! In such cases we strongly recommend that you use the faster
           ! solver, the SPDMatrixCholeskySolveMFast() function.

INPUT PARAMETERS
    CHA     -   array[0..N-1,0..N-1], Cholesky decomposition,
                SPDMatrixCholesky result
    N       -   size of CHA
    IsUpper -   what half of CHA is provided
    B       -   array[0..N-1,0..M-1], right part
    M       -   right part size

OUTPUT PARAMETERS
    Rep     -   additional report, following fields are set:
                * rep.terminationtype   >0 for success
                                        -3 for badly conditioned or
                                           indefinite matrix
                * rep.r1                condition number in 1-norm
                * rep.rinf              condition number in inf-norm
    X       -   array[N,M]:
                * rep.terminationtype>0  => solution
                * rep.terminationtype=-3 => filled by zeros

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void spdmatrixcholeskysolvem(const real_2d_array &cha, const ae_int_t n, const bool isupper, const real_2d_array &b, const ae_int_t m, real_2d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);
void spdmatrixcholeskysolvem(const real_2d_array &cha, const bool isupper, const real_2d_array &b, real_2d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Dense solver for A*X=B with an N*N symmetric positive definite matrix A
given by its Cholesky decomposition, and N*M matrices X and B. It is the
"fast-but-lightweight" version of the solver which just solves the linear
system, without any additional functions.

Algorithm features:
* O(M*N^2) complexity
* matrix is represented by its upper or lower triangle
* no additional functionality

INPUT PARAMETERS
    CHA     -   array[N,N], Cholesky decomposition, SPDMatrixCholesky result
    N       -   size of CHA
    IsUpper -   what half of CHA is provided
    B       -   array[N,M], right part
    M       -   right part size

OUTPUT PARAMETERS
    B       -   array[N,M]:
                * result=true  => overwritten by solution
                * result=false => filled by zeros

RETURNS:
    True, if the system was solved
    False, for an extremely badly conditioned or exactly singular system

  -- ALGLIB --
     Copyright 18.03.2015 by Bochkanov Sergey
*************************************************************************/
bool spdmatrixcholeskysolvemfast(const real_2d_array &cha, const ae_int_t n, const bool isupper, real_2d_array &b, const ae_int_t m, const xparams _xparams = alglib::xdefault);
bool spdmatrixcholeskysolvemfast(const real_2d_array &cha, const bool isupper, real_2d_array &b, const xparams _xparams = alglib::xdefault);
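Unlike the other solvers in this section, the two multi-RHS Cholesky solvers above ship without an example link, so here is a minimal sketch in the style of the other example programs. The matrix values are illustrative (not taken from the ALGLIB distribution), and it assumes the short `spdmatrixcholesky(a, isupper)` overload analogous to the `hpdmatrixcholesky()` call used in the Hermitian example below.

```cpp
#include "stdafx.h"
#include <stdio.h>
#include "solvers.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        real_2d_array x;
        densesolverreport rep;
        bool isupper = true;

        //
        // Factorize the SPD matrix A=[[3,2],[2,3]] once with spdmatrixcholesky(),
        // then solve A*X=B for both right-hand side columns at once with the
        // feature-rich spdmatrixcholeskysolvem()
        //
        real_2d_array cha = "[[3,2],[2,3]]";
        real_2d_array b = "[[4,5],[1,5]]";
        spdmatrixcholesky(cha, isupper);
        spdmatrixcholeskysolvem(cha, isupper, b, x, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [[2.0000,1.0000],[-1.0000,1.0000]]

        //
        // The "fast" version reuses the same Cholesky factor and
        // overwrites its right part in-place
        //
        real_2d_array d = "[[4,5],[1,5]]";
        spdmatrixcholeskysolvemfast(cha, isupper, d);
        printf("%s\n", d.tostring(4).c_str()); // EXPECTED: [[2.0000,1.0000],[-1.0000,1.0000]]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}
```

Factorizing once and calling the solver repeatedly is the intended use of this pair: the O(N^3) Cholesky cost is paid a single time, and each subsequent solve is only O(M*N^2).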
/*************************************************************************
Dense linear solver for A*x=b with an N*N real symmetric positive definite
matrix A and N*1 vectors x and b. This is the "slow-but-feature-rich"
version of the solver.

Algorithm features:
* automatic detection of degenerate cases
* condition number estimation
* O(N^3) complexity
* matrix is represented by its upper or lower triangle

No iterative refinement is provided because such a partial representation
of the matrix does not allow efficient calculation of extra-precise
matrix-vector products for large matrices. Use RMatrixSolve or
RMatrixMixedSolve if you need iterative refinement.

IMPORTANT: ! this function is NOT the most efficient linear solver
           ! provided by ALGLIB. It estimates the condition number of the
           ! linear system, which results in a significant performance
           ! penalty when compared with the "fast" version which just
           ! performs Cholesky decomposition and calls the triangular
           ! solver.
           !
           ! This performance penalty is especially visible in the
           ! multithreaded mode, because condition number estimation is an
           ! inherently sequential calculation.
           !
           ! Thus, if you need high performance and if you are pretty sure
           ! that your system is well conditioned, we strongly recommend
           ! that you use the faster solver, the SPDMatrixSolveFast()
           ! function.

INPUT PARAMETERS
    A       -   array[0..N-1,0..N-1], system matrix
    N       -   size of A
    IsUpper -   what half of A is provided
    B       -   array[0..N-1], right part

OUTPUT PARAMETERS
    Rep     -   additional report, following fields are set:
                * rep.terminationtype   >0 for success
                                        -3 for badly conditioned or
                                           indefinite matrix
                * rep.r1                condition number in 1-norm
                * rep.rinf              condition number in inf-norm
    X       -   array[N], it contains:
                * rep.terminationtype>0  => solution
                * rep.terminationtype=-3 => filled by zeros

! FREE EDITION OF ALGLIB:
!
! Free Edition of ALGLIB supports the following important features for
! this function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend that you read the 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual to find out how to activate SIMD support in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes the following important
! improvements of this function:
! * high-performance native backend with the same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend that you read the 'Working with commercial version' section
! of the ALGLIB Reference Manual to find out how to use performance-related
! features provided by the commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void spdmatrixsolve(const real_2d_array &a, const ae_int_t n, const bool isupper, const real_1d_array &b, real_1d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);
void spdmatrixsolve(const real_2d_array &a, const bool isupper, const real_1d_array &b, real_1d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Dense linear solver for A*x=b with an N*N real symmetric positive definite
matrix A and N*1 vectors x and b. This is the "fast-but-lightweight"
version of the solver.

Algorithm features:
* O(N^3) complexity
* matrix is represented by its upper or lower triangle
* no additional time-consuming features like condition number estimation

INPUT PARAMETERS
    A       -   array[0..N-1,0..N-1], system matrix
    N       -   size of A
    IsUpper -   what half of A is provided
    B       -   array[0..N-1], right part

OUTPUT PARAMETERS
    B       -   array[N], it contains:
                * result=true  => overwritten by solution
                * result=false => filled by zeros

RETURNS:
    True, if the system was solved
    False, for an extremely badly conditioned or exactly singular system

! FREE EDITION OF ALGLIB:
!
! Free Edition of ALGLIB supports the following important features for
! this function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend that you read the 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual to find out how to activate SIMD support in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes the following important
! improvements of this function:
! * high-performance native backend with the same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend that you read the 'Working with commercial version' section
! of the ALGLIB Reference Manual to find out how to use performance-related
! features provided by the commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 17.03.2015 by Bochkanov Sergey
*************************************************************************/
bool spdmatrixsolvefast(const real_2d_array &a, const ae_int_t n, const bool isupper, real_1d_array &b, const xparams _xparams = alglib::xdefault);
bool spdmatrixsolvefast(const real_2d_array &a, const bool isupper, real_1d_array &b, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Dense solver for A*X=B with an N*N symmetric positive definite matrix A
and N*M matrices X and B. It is the "slow-but-feature-rich" version of the
solver.

Algorithm features:
* automatic detection of degenerate cases
* condition number estimation
* O(N^3+M*N^2) complexity
* matrix is represented by its upper or lower triangle

No iterative refinement is provided because such a partial representation
of the matrix does not allow efficient calculation of extra-precise
matrix-vector products for large matrices. Use RMatrixSolve or
RMatrixMixedSolve if you need iterative refinement.

IMPORTANT: ! this function is NOT the most efficient linear solver
           ! provided by ALGLIB. It estimates the condition number of the
           ! linear system, which results in a significant performance
           ! penalty when compared with the "fast" version which just
           ! performs Cholesky decomposition and calls the triangular
           ! solver.
           !
           ! This performance penalty is especially visible in the
           ! multithreaded mode, because condition number estimation is an
           ! inherently sequential calculation.
           !
           ! Thus, if you need high performance and if you are pretty sure
           ! that your system is well conditioned, we strongly recommend
           ! that you use the faster solver, the SPDMatrixSolveMFast()
           ! function.

INPUT PARAMETERS
    A       -   array[0..N-1,0..N-1], system matrix
    N       -   size of A
    IsUpper -   what half of A is provided
    B       -   array[0..N-1,0..M-1], right part
    M       -   right part size

OUTPUT PARAMETERS
    Rep     -   additional report, following fields are set:
                * rep.terminationtype   >0 for success
                                        -3 for badly conditioned or
                                           indefinite matrix
                * rep.r1                condition number in 1-norm
                * rep.rinf              condition number in inf-norm
    X       -   array[N,M], it contains:
                * rep.terminationtype>0  => solution
                * rep.terminationtype=-3 => filled by zeros

! FREE EDITION OF ALGLIB:
!
! Free Edition of ALGLIB supports the following important features for
! this function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend that you read the 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual to find out how to activate SIMD support in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes the following important
! improvements of this function:
! * high-performance native backend with the same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend that you read the 'Working with commercial version' section
! of the ALGLIB Reference Manual to find out how to use performance-related
! features provided by the commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 27.01.2010 by Bochkanov Sergey
*************************************************************************/
void spdmatrixsolvem(const real_2d_array &a, const ae_int_t n, const bool isupper, const real_2d_array &b, const ae_int_t m, real_2d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);
void spdmatrixsolvem(const real_2d_array &a, const bool isupper, const real_2d_array &b, real_2d_array &x, densesolverreport &rep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Dense solver for A*X=B with an N*N symmetric positive definite matrix A
and N*M matrices X and B. It is the "fast-but-lightweight" version of the
solver.

Algorithm features:
* O(N^3+M*N^2) complexity
* matrix is represented by its upper or lower triangle
* no additional time-consuming features

INPUT PARAMETERS
    A       -   array[0..N-1,0..N-1], system matrix
    N       -   size of A
    IsUpper -   what half of A is provided
    B       -   array[0..N-1,0..M-1], right part
    M       -   right part size

OUTPUT PARAMETERS
    B       -   array[N,M], it contains:
                * result=true  => overwritten by solution
                * result=false => filled by zeros

RETURNS:
    True, if the system was solved
    False, for an extremely badly conditioned or exactly singular system

! FREE EDITION OF ALGLIB:
!
! Free Edition of ALGLIB supports the following important features for
! this function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend that you read the 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual to find out how to activate SIMD support in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes the following important
! improvements of this function:
! * high-performance native backend with the same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend that you read the 'Working with commercial version' section
! of the ALGLIB Reference Manual to find out how to use performance-related
! features provided by the commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 17.03.2015 by Bochkanov Sergey
*************************************************************************/
bool spdmatrixsolvemfast(const real_2d_array &a, const ae_int_t n, const bool isupper, real_2d_array &b, const ae_int_t m, const xparams _xparams = alglib::xdefault);
bool spdmatrixsolvemfast(const real_2d_array &a, const bool isupper, real_2d_array &b, const xparams _xparams = alglib::xdefault);
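The multi-RHS SPD solvers above also ship without an example link, so here is a minimal sketch in the style of the other example programs. The matrix values and expected results are illustrative, not taken from the ALGLIB distribution.

```cpp
#include "stdafx.h"
#include <stdio.h>
#include "solvers.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        real_2d_array x;
        densesolverreport rep;
        bool isupper = true;

        //
        // Solve A*X=B for the SPD matrix A=[[3,2],[2,3]] and two right-hand
        // side columns with the feature-rich spdmatrixsolvem(), which also
        // reports condition numbers in rep.r1/rep.rinf
        //
        real_2d_array a = "[[3,2],[2,3]]";
        real_2d_array b = "[[4,5],[1,5]]";
        spdmatrixsolvem(a, isupper, b, x, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [[2.0000,1.0000],[-1.0000,1.0000]]

        //
        // The "fast" version skips condition number estimation and
        // overwrites its right part in-place
        //
        real_2d_array c = "[[3,2],[2,3]]";
        real_2d_array d = "[[4,5],[1,5]]";
        spdmatrixsolvemfast(c, isupper, d);
        printf("%s\n", d.tostring(4).c_str()); // EXPECTED: [[2.0000,1.0000],[-1.0000,1.0000]]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}
```

As with the real general solvers, the "fast" variant returns false (and zero-fills its right part) instead of setting rep.terminationtype=-3 when the matrix is extremely badly conditioned or singular.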
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "solvers.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates solution of a complex linear system
        //
        complex_1d_array x;
        integer_1d_array pivots;
        densesolverreport rep;

        //
        // First, solve A*x=b with a feature-rich cmatrixsolve() which supports iterative improvement
        // and condition number estimation
        //
        complex_2d_array a = "[[-4,2i],[-1i,3]]";
        complex_1d_array b = "[8i,5]";
        cmatrixsolve(a, b, x, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [-1.0000i, 2.0000]

        //
        // Then, solve C*x=d with cmatrixsolvefast() which has lower overhead
        //
        complex_2d_array c = "[[3i,1],[2i,4]]";
        complex_1d_array d = "[2,-2]";
        cmatrixsolvefast(c, d);
        printf("%s\n", d.tostring(4).c_str()); // EXPECTED: [-1.0000i, -1.0000]

        //
        // Sometimes you have LU decomposition of the system matrix readily
        // available. In such cases it is possible to save a lot of time by
        // passing precomputed LU factors to cmatrixlusolve(). The only
        // downside of such approach is that iterative refinement is unavailable
        // because original (unmodified) form of the system matrix is unknown
        // to ALGLIB.
        //
        // However, if you have BOTH original matrix and its LU decomposition,
        // it is possible to use cmatrixmixedsolve() which accepts both matrix
        // itself and its factors, and uses original matrix to refine solution
        // obtained with LU factors.
        //
        complex_2d_array e = "[[-3,4i],[2i,4]]";
        complex_2d_array lue = "[[-3,4i],[2i,4]]";
        complex_1d_array f = "[2i,0]";
        cmatrixlu(lue, pivots);
        cmatrixlusolve(lue, pivots, f, x, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [-2.0000i, -1.0000]

        cmatrixmixedsolve(e, lue, pivots, f, x, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [-2.0000i, -1.0000]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "solvers.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates solution of a dense complex matrix system
        //
        complex_2d_array x;
        integer_1d_array pivots;
        densesolverreport rep;

        //
        // First, solve A*X=B with a feature-rich cmatrixsolvem() which supports
        // iterative improvement and condition number estimation. Here A is
        // an N*N matrix, X is an N*M matrix, B is an N*M matrix.
        //
        complex_2d_array a = "[[4i,-2],[-1,3i]]";
        complex_2d_array b = "[[8i,10i,4i],[5,1,-1]]";
        cmatrixsolvem(a, b, true, x, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [[1.0000, 2.0000,1.0000],[-2.0000i,-1.0000i,0.0000]]

        //
        // Then, solve C*X=D with cmatrixsolvemfast() which has lower overhead
        // due to condition number estimation and iterative refinement parts
        // being dropped.
        //
        complex_2d_array c = "[[3,1],[2,4]]";
        complex_2d_array d = "[[2,1],[-2,4]]";
        cmatrixsolvemfast(c, d);
        printf("%s\n", d.tostring(4).c_str()); // EXPECTED: [[1.0000,0.0000],[-1.0000,1.0000]]

        //
        // Sometimes you have LU decomposition of the system matrix readily
        // available. In such cases it is possible to save a lot of time by
        // passing precomputed LU factors to cmatrixlusolve(). The only
        // downside of such approach is that iterative refinement is unavailable
        // because original (unmodified) form of the system matrix is unknown
        // to ALGLIB.
        //
        // However, if you have BOTH original matrix and its LU decomposition,
        // it is possible to use cmatrixmixedsolve() which accepts both matrix
        // itself and its factors, and uses original matrix to refine solution
        // obtained with LU factors.
        //
        complex_2d_array e = "[[3,4],[2,4]]";
        complex_2d_array lue = "[[3,4],[2,4]]";
        complex_2d_array f = "[[2,5],[0,6]]";
        cmatrixlu(lue, pivots);
        cmatrixlusolvem(lue, pivots, f, x, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [[2.0000,-1.0000],[-1.0000,2.0000]]

        cmatrixmixedsolvem(e, lue, pivots, f, x, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [[2.0000,-1.0000],[-1.0000,2.0000]]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "solvers.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates solution of a Hermitian positive definite complex system
        //
        complex_1d_array x;
        densesolverreport rep;
        bool isupper = true;

        //
        // First, solve A*x=b with a feature-rich hpdmatrixsolve() which supports iterative improvement
        // and condition number estimation
        //
        complex_2d_array a = "[[4,1i],[-1i,4]]";
        complex_1d_array b = "[6,-9i]";
        hpdmatrixsolve(a, isupper, b, x, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [1.0000, -2.0000i]

        //
        // Then, solve C*x=d with hpdmatrixsolvefast() which has lower overhead
        //
        complex_2d_array c = "[[3,-1i],[1i,3]]";
        complex_1d_array d = "[-2i,-2]";
        hpdmatrixsolvefast(c, isupper, d);
        printf("%s\n", d.tostring(4).c_str()); // EXPECTED: [-1.0000i, -1.0000]

        //
        // Sometimes you have Cholesky decomposition of the system matrix readily
        // available. In such cases it is possible to save a lot of time by
        // passing precomputed Cholesky factor to hpdmatrixcholeskysolve(). The only
        // downside of such approach is that iterative refinement is unavailable
        // because original (unmodified) form of the system matrix is unknown
        // to ALGLIB.
        //
        complex_2d_array e = "[[3,2],[2,3]]";
        complex_1d_array f = "[4,1]";
        hpdmatrixcholesky(e, isupper);
        hpdmatrixcholeskysolve(e, isupper, f, x, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [2.0000, -1.0000]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "solvers.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        real_1d_array x;
        densesolverlsreport rep;
        real_2d_array a = "[[4,2],[-1,3],[6,5]]";
        real_1d_array b = "[8,5,16]";
        rmatrixsolvels(a, b, 0.0, x, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [1.0000, 2.0000]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "solvers.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates solution of a dense real linear system
        //
        real_1d_array x;
        integer_1d_array pivots;
        densesolverreport rep;

        //
        // First, solve A*x=b with a feature-rich rmatrixsolve() which supports iterative improvement
        // and condition number estimation
        //
        real_2d_array a = "[[4,2],[-1,3]]";
        real_1d_array b = "[8,5]";
        rmatrixsolve(a, b, x, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [1.0000, 2.0000]

        //
        // Then, solve C*x=d with rmatrixsolvefast() which has lower overhead
        //
        real_2d_array c = "[[3,1],[2,4]]";
        real_1d_array d = "[2,-2]";
        rmatrixsolvefast(c, d);
        printf("%s\n", d.tostring(4).c_str()); // EXPECTED: [1.0000, -1.0000]

        //
        // Sometimes you have LU decomposition of the system matrix readily
        // available. In such cases it is possible to save a lot of time by
        // passing precomputed LU factors to rmatrixlusolve(). The only
        // downside of such approach is that iterative refinement is unavailable
        // because original (unmodified) form of the system matrix is unknown
        // to ALGLIB.
        //
        // However, if you have BOTH original matrix and its LU decomposition,
        // it is possible to use rmatrixmixedsolve() which accepts both matrix
        // itself and its factors, and uses original matrix to refine solution
        // obtained with LU factors.
        //
        real_2d_array e = "[[3,4],[2,4]]";
        real_2d_array lue = "[[3,4],[2,4]]";
        real_1d_array f = "[2,0]";
        rmatrixlu(lue, pivots);
        rmatrixlusolve(lue, pivots, f, x, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [2.0000, -1.0000]

        rmatrixmixedsolve(e, lue, pivots, f, x, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [2.0000, -1.0000]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "solvers.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates solution of a dense real matrix system
        //
        real_2d_array x;
        integer_1d_array pivots;
        densesolverreport rep;

        //
        // First, solve A*X=B with a feature-rich rmatrixsolvem() which supports
        // iterative improvement and condition number estimation. Here A is
        // an N*N matrix, X is an N*M matrix, B is an N*M matrix.
        //
        real_2d_array a = "[[4,2],[-1,3]]";
        real_2d_array b = "[[8,10,4],[5,1,-1]]";
        rmatrixsolvem(a, b, true, x, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [[1.0000, 2.0000, 1.0000],[2.0000, 1.0000, 0.0000]]

        //
        // Then, solve C*X=D with rmatrixsolvemfast(), which has lower overhead
        // because the condition number estimation and iterative refinement
        // steps are dropped.
        //
        real_2d_array c = "[[3,1],[2,4]]";
        real_2d_array d = "[[2,1],[-2,4]]";
        rmatrixsolvemfast(c, d);
        printf("%s\n", d.tostring(4).c_str()); // EXPECTED: [[1.0000,0.0000],[-1.0000,1.0000]]

        //
        // Sometimes you have the LU decomposition of the system matrix readily
        // available. In such cases it is possible to save a lot of time by
        // passing the precomputed LU factors to rmatrixlusolvem(). The only
        // downside of such an approach is that iterative refinement is
        // unavailable because the original (unmodified) form of the system
        // matrix is unknown to ALGLIB.
        //
        // However, if you have BOTH the original matrix and its LU
        // decomposition, it is possible to use rmatrixmixedsolvem(), which
        // accepts both the matrix itself and its factors, and uses the
        // original matrix to refine the solution obtained with the LU factors.
        //
        real_2d_array e = "[[3,4],[2,4]]";
        real_2d_array lue = "[[3,4],[2,4]]";
        real_2d_array f = "[[2,5],[0,6]]";
        rmatrixlu(lue, pivots);
        rmatrixlusolvem(lue, pivots, f, x, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [[2.0000,-1.0000],[-1.0000,2.0000]]

        rmatrixmixedsolvem(e, lue, pivots, f, x, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [[2.0000,-1.0000],[-1.0000,2.0000]]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "solvers.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates the solution of a symmetric positive definite real system
        //
        real_1d_array x;
        densesolverreport rep;
        bool isupper = true;

        //
        // First, solve A*x=b with a feature-rich spdmatrixsolve() which supports iterative improvement
        // and condition number estimation
        //
        real_2d_array a = "[[4,1],[1,4]]";
        real_1d_array b = "[6,9]";
        spdmatrixsolve(a, isupper, b, x, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [1.0000, 2.0000]

        //
        // Then, solve C*x=d with spdmatrixsolvefast() which has lower overhead
        //
        real_2d_array c = "[[3,1],[1,3]]";
        real_1d_array d = "[2,-2]";
        spdmatrixsolvefast(c, isupper, d);
        printf("%s\n", d.tostring(4).c_str()); // EXPECTED: [1.0000, -1.0000]

        //
        // Sometimes you have the Cholesky decomposition of the system matrix
        // readily available. In such cases it is possible to save a lot of
        // time by passing the precomputed Cholesky factor to
        // spdmatrixcholeskysolve(). The only downside of such an approach is
        // that iterative refinement is unavailable because the original
        // (unmodified) form of the system matrix is unknown to ALGLIB.
        //
        real_2d_array e = "[[3,2],[2,3]]";
        real_1d_array f = "[4,1]";
        spdmatrixcholesky(e, isupper);
        spdmatrixcholeskysolve(e, isupper, f, x, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [2.0000, -1.0000]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

sparselusolve
sparsesolve
sparsesolvelsreg
sparsespdcholeskysolve
sparsespdsolve
sparsespdsolvesks
solvesks_d_1 Solving low profile positive definite sparse systems with Skyline (SKS) solver
sparse_solve Solving general sparse linear systems
sparse_solve_cholesky Solving positive definite sparse linear systems with the supernodal Cholesky solver
/*************************************************************************
Sparse linear solver for A*x=b with general (nonsymmetric) N*N sparse real
matrix A given by its LU factorization, N*1 vectors x and b.

IMPORTANT: this solver requires the input matrix to be in the CRS sparse
           storage format. An exception will be generated if you pass a
           matrix in some other format (HASH or SKS).

INPUT PARAMETERS
    A       -   LU factorization of the sparse matrix, must be NxN exactly
                in CRS storage format
    P, Q    -   pivot indexes from LU factorization
    N       -   size of A, N>0
    B       -   array[0..N-1], right part

OUTPUT PARAMETERS
    X       -   array[N], it contains:
                * rep.terminationtype>0  => solution
                * rep.terminationtype=-3 => filled by zeros
    Rep     -   solver report, following fields are set:
                * rep.terminationtype - solver status; >0 for success,
                  set to -3 on failure (degenerate system).

  -- ALGLIB --
     Copyright 26.12.2017 by Bochkanov Sergey
*************************************************************************/
void sparselusolve(const sparsematrix &a, const integer_1d_array &p, const integer_1d_array &q, const real_1d_array &b, real_1d_array &x, sparsesolverreport &rep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Sparse linear solver for A*x=b with general (nonsymmetric) N*N sparse real
matrix A, N*1 vectors x and b.

This function internally uses several solvers:
* supernodal solver with static pivoting applied to a 2N*2N regularized
  augmented system, followed by iterative refinement. This solver is the
  recommended option because it provides the best speed and has the lowest
  memory requirements.
* sparse LU with dynamic pivoting for stability. Provides better accuracy
  at the cost of significantly lower performance. Recommended only for
  extremely unstable problems.

INPUT PARAMETERS
    A       -   sparse matrix, must be NxN exactly, any storage format
    B       -   array[N], right part
    SolverType- solver type to use:
                * 0     use the best solver. It is the augmented system in
                        the current version, but may change in future
                        releases
                * 10    use 'default profile' of the supernodal solver with
                        static pivoting. The 'default' profile is intended
                        for systems with plenty of memory; it is optimized
                        for the best convergence at the cost of increased
                        RAM usage. Recommended option.
                * 11    use 'limited memory' profile of the supernodal
                        solver with static pivoting. The limited-memory
                        profile is intended for problems with millions of
                        variables. On most systems it has the same
                        convergence as the default profile, having somewhat
                        worse results only for ill-conditioned systems.
                * 20    use sparse LU with dynamic pivoting for stability.
                        Not intended for large-scale problems.

OUTPUT PARAMETERS
    X       -   array[N], it contains:
                * rep.terminationtype>0  => solution
                * rep.terminationtype=-3 => filled by zeros
    Rep     -   solver report, following fields are set:
                * rep.terminationtype - solver status; >0 for success,
                  set to -3 on failure (degenerate system).

  -- ALGLIB --
     Copyright 18.11.2023 by Bochkanov Sergey
*************************************************************************/
void sparsesolve(const sparsematrix &a, const real_1d_array &b, const ae_int_t solvertype, real_1d_array &x, sparsesolverreport &rep, const xparams _xparams = alglib::xdefault);
void sparsesolve(const sparsematrix &a, const real_1d_array &b, real_1d_array &x, sparsesolverreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Sparse linear least squares solver for A*x=b with general (nonsymmetric)
M*N sparse real matrix A, N*1 vector x and M*1 vector b.

This function solves a regularized linear least squares problem of the form

    min ( |A*x-b|^2 + reg*|x|^2 ),  with reg>=sqrt(MachineAccuracy)

The function internally uses a supernodal solver to solve an augmented-
regularized sparse system. The solver, which was initially used to solve
sparse square systems, can also be used to solve rectangular systems,
provided that the system is regularized with a regularizing coefficient of
at least sqrt(MachineAccuracy), which is ~10^-8 (double precision). It can
be used to solve both full rank and rank deficient systems.

INPUT PARAMETERS
    A       -   sparse MxN matrix, any storage format
    B       -   array[M], right part
    Reg     -   regularization coefficient, Reg>=sqrt(MachineAccuracy),
                lower values will be silently increased.
    SolverType- solver type to use:
                * 0     use the best solver. It is the augmented system in
                        the current version, but may change in future
                        releases
                * 10    use 'default profile' of the supernodal solver with
                        static pivoting. The 'default' profile is intended
                        for systems with plenty of memory; it is optimized
                        for the best convergence at the cost of increased
                        RAM usage. Recommended option.
                * 11    use 'limited memory' profile of the supernodal
                        solver with static pivoting. The limited-memory
                        profile is intended for problems with millions of
                        variables. On most systems it has the same
                        convergence as the default profile, having somewhat
                        worse results only for ill-conditioned systems.

OUTPUT PARAMETERS
    X       -   array[N], least squares solution
    Rep     -   solver report, following fields are set:
                * rep.terminationtype - solver status; >0 for success. The
                  present version of the solver does not return negative
                  completion codes because it does not fail. However,
                  future ALGLIB versions may include solvers which return
                  negative completion codes.

  -- ALGLIB --
     Copyright 18.11.2023 by Bochkanov Sergey
*************************************************************************/
void sparsesolvelsreg(const sparsematrix &a, const real_1d_array &b, const double reg, const ae_int_t solvertype, real_1d_array &x, sparsesolverreport &rep, const xparams _xparams = alglib::xdefault); void sparsesolvelsreg(const sparsematrix &a, const real_1d_array &b, const double reg, real_1d_array &x, sparsesolverreport &rep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Sparse linear solver for A*x=b with N*N real symmetric positive definite
matrix A given by its Cholesky decomposition, and N*1 vectors x and b.

IMPORTANT: this solver requires the input matrix to be in the SKS
           (Skyline) or CRS (compressed row storage) format. An exception
           will be generated if you pass a matrix in some other format.

INPUT PARAMETERS
    A       -   sparse NxN matrix stored in CRS or SKS format, must be
                NxN exactly
    IsUpper -   which half of A is provided (another half is ignored)
    B       -   array[N], right part

OUTPUT PARAMETERS
    X       -   array[N], it contains:
                * rep.terminationtype>0  => solution
                * rep.terminationtype=-3 => filled by zeros
    Rep     -   solver report, following fields are set:
                * rep.terminationtype - solver status; >0 for success,
                  set to -3 on failure (degenerate or non-SPD system).

  -- ALGLIB --
     Copyright 26.12.2017 by Bochkanov Sergey
*************************************************************************/
void sparsespdcholeskysolve(const sparsematrix &a, const bool isupper, const real_1d_array &b, real_1d_array &x, sparsesolverreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Sparse linear solver for A*x=b with N*N sparse real symmetric positive
definite matrix A, N*1 vectors x and b.

This solver converts the input matrix to the CRS format, performs Cholesky
factorization using a supernodal Cholesky decomposition with a fill-in
reducing permutation ordering and uses a sparse triangular solver to get
the solution of the original system.

INPUT PARAMETERS
    A       -   sparse matrix, must be NxN exactly. Can be stored in any
                sparse storage format, CRS is preferred.
    IsUpper -   which half of A is provided (another half is ignored).
                It is better to store the lower triangle because it allows
                us to avoid one transposition during internal conversion.
    B       -   array[N], right part

OUTPUT PARAMETERS
    X       -   array[N], it contains:
                * rep.terminationtype>0  => solution
                * rep.terminationtype=-3 => filled by zeros
    Rep     -   solver report, following fields are set:
                * rep.terminationtype - solver status; >0 for success,
                  set to -3 on failure (degenerate or non-SPD system).

  -- ALGLIB --
     Copyright 26.12.2017 by Bochkanov Sergey
*************************************************************************/
void sparsespdsolve(const sparsematrix &a, const bool isupper, const real_1d_array &b, real_1d_array &x, sparsesolverreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Sparse linear solver for A*x=b with N*N sparse real symmetric positive
definite matrix A, N*1 vectors x and b.

This solver converts the input matrix to the SKS format, performs Cholesky
factorization using the SKS Cholesky subroutine (works well for limited
bandwidth matrices) and uses sparse triangular solvers to get the solution
of the original system.

IMPORTANT: this function is intended for low profile (variable band)
           linear systems with dense or nearly-dense bands. Only in such
           cases does it provide some performance improvement over the
           more general sparsespdsolve(). If your system has a high
           bandwidth or a sparse band, the general sparsespdsolve() is
           likely to be more efficient.

INPUT PARAMETERS
    A       -   sparse matrix, must be NxN exactly
    IsUpper -   which half of A is provided (another half is ignored)
    B       -   array[0..N-1], right part

OUTPUT PARAMETERS
    X       -   array[N], it contains:
                * rep.terminationtype>0  => solution
                * rep.terminationtype=-3 => filled by zeros
    Rep     -   solver report, following fields are set:
                * rep.terminationtype - solver status; >0 for success,
                  set to -3 on failure (degenerate or non-SPD system).

  -- ALGLIB --
     Copyright 26.12.2017 by Bochkanov Sergey
*************************************************************************/
void sparsespdsolvesks(const sparsematrix &a, const bool isupper, const real_1d_array &b, real_1d_array &x, sparsesolverreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "solvers.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates creation/initialization of a sparse matrix
        // in the SKS (Skyline) storage format and the solution of a linear
        // system using the SKS-based direct solver.
        //
        // NOTE: the SKS solver is intended for 'easy' tasks, i.e. low-profile positive
        //       definite systems (e.g. matrices with average bandwidth as low as 3),
        //       where it can avoid some overhead associated with the more powerful
        //       supernodal Cholesky solver with AMD ordering.
        //
        //       It is recommended to use more powerful solvers for more difficult problems:
        //       * sparsespdsolve() for larger sparse positive definite systems
        //       * sparsesolve() for general (nonsymmetric) linear systems
        //
        // First, we have to create the matrix and initialize it. The matrix is
        // created in the SKS format, using the fixed-bandwidth initialization
        // function. Several points should be noted:
        //
        // 1. SKS sparse storage format also allows variable bandwidth matrices;
        //    we just do not want to overcomplicate this example.
        //
        // 2. SKS format requires you to specify matrix geometry prior to
        //    initialization of its elements with sparseset(). If you specified
        //    bandwidth=1, you cannot change your mind afterwards and call
        //    sparseset() for non-existent elements.
        // 
        // 3. Because the SKS solver needs just one triangle of an SPD matrix,
        //    we can omit initialization of the lower triangle of our matrix.
        //
        ae_int_t n = 4;
        ae_int_t bandwidth = 1;
        sparsematrix s;
        sparsecreatesksband(n, n, bandwidth, s);
        sparseset(s, 0, 0, 2.0);
        sparseset(s, 0, 1, 1.0);
        sparseset(s, 1, 1, 3.0);
        sparseset(s, 1, 2, 1.0);
        sparseset(s, 2, 2, 3.0);
        sparseset(s, 2, 3, 1.0);
        sparseset(s, 3, 3, 2.0);

        //
        // Now we have a symmetric positive definite 4x4 system with bandwidth=1:
        //
        //     [ 2 1     ]   [ x0 ]   [  4 ]
        //     [ 1 3 1   ]   [ x1 ]   [ 10 ]
        //     [   1 3 1 ] * [ x2 ] = [ 15 ]
        //     [     1 2 ]   [ x3 ]   [ 11 ]
        //
        // After successful creation we can call SKS solver.
        //
        real_1d_array b = "[4,10,15,11]";
        sparsesolverreport rep;
        real_1d_array x;
        bool isuppertriangle = true;
        sparsespdsolvesks(s, isuppertriangle, b, x, rep);
        printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [1.0000, 2.0000, 3.0000, 4.0000]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "solvers.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates creation/initialization of a sparse matrix and the
        // solution of a linear system using a direct solver. This solver can handle
        // any problem size - from several tens of variables to millions of variables.
        //
        // First, we create a sparse matrix in the flexible hash-table-based storage
        // format and initialize it; the solver accepts any storage format and
        // converts it to CRS internally.
        //
        ae_int_t n = 4;
        sparsematrix s;
        sparsecreate(n, n, 0, s);
        sparseset(s, 0, 0, 2.0);
        sparseset(s, 0, 1, 1.0);
        sparseset(s, 1, 0, 1.0);
        sparseset(s, 1, 1, 3.0);
        sparseset(s, 1, 2, -1.0);
        sparseset(s, 2, 2, 3.0);
        sparseset(s, 2, 3, 1.0);
        sparseset(s, 3, 2, 1.0);
        sparseset(s, 3, 3, 2.0);

        //
        // Now we have a general (nonsymmetric) 4x4 system
        //
        //     [ 2 1      ]   [ x0 ]   [ 3 ]
        //     [ 1 3 -1   ]   [ x1 ]   [ 2 ]
        //     [      3 1 ] * [ x2 ] = [ 8 ]
        //     [      1 2 ]   [ x3 ]   [ 6 ]
        //
        // Now, it is time to call the solver. The sparsesolve() function supports several
        // solvers, our recommendation is to choose the default one. In the current version
        // it is a supernodal solver with static pivoting, followed by the iterative refinement.
        //
        real_1d_array b = "[3,2,8,6]";
        sparsesolverreport rep;
        real_1d_array x;
        ae_int_t solvertype = 0;
        sparsesolve(s, b, solvertype, x, rep);
        printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [1.0000, 1.0000, 2.0000, 2.0000]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "solvers.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates creation/initialization of a sparse matrix and the
        // solution of a linear system using a Cholesky-based direct solver. This
        // solver can handle any problem size - from several tens of variables to
        // millions of variables.
        //
        // First, we create a sparse matrix in the flexible hash-table-based storage
        // format and initialize it; the solver converts it to CRS internally. Because
        // the matrix is symmetric, it is enough to specify only one triangle. The
        // example below initializes the lower one.
        //
        ae_int_t n = 4;
        sparsematrix s;
        sparsecreate(n, n, 0, s);
        sparseset(s, 0, 0, 2.0);
        sparseset(s, 1, 0, 1.0);
        sparseset(s, 1, 1, 3.0);
        sparseset(s, 2, 1, 1.0);
        sparseset(s, 2, 2, 3.0);
        sparseset(s, 3, 2, 1.0);
        sparseset(s, 3, 3, 2.0);

        //
        // Now we have a symmetric positive definite 4x4 system
        //
        //     [ 2 1     ]   [ x0 ]   [  4 ]
        //     [ 1 3 1   ]   [ x1 ]   [ 10 ]
        //     [   1 3 1 ] * [ x2 ] = [ 15 ]
        //     [     1 2 ]   [ x3 ]   [ 11 ]
        //
        // Now, it is time to call the solver.
        //
        real_1d_array b = "[4,10,15,11]";
        sparsesolverreport rep;
        real_1d_array x;
        bool isuppertriangle = false;
        sparsespdsolve(s, isuppertriangle, b, x, rep);
        printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [1.0000, 2.0000, 3.0000, 4.0000]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

ellipticintegrale
ellipticintegralk
ellipticintegralkhighprecision
incompleteellipticintegrale
incompleteellipticintegralk
/*************************************************************************
Complete elliptic integral of the second kind

Approximates the integral

    E(m) = integral[0..pi/2] sqrt( 1 - m*sin^2(t) ) dt

using the approximation P(x) - x*log(x)*Q(x).

ACCURACY:
                     Relative error:
arithmetic   domain     # trials      peak         rms
   IEEE       0, 1       10000       2.1e-16     7.3e-17

Cephes Math Library, Release 2.8: June, 2000
Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier
*************************************************************************/
double ellipticintegrale(const double m, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Complete elliptic integral of the first kind

Approximates the integral

    K(m) = integral[0..pi/2] dt / sqrt( 1 - m*sin^2(t) )

using the approximation P(x) - log(x)*Q(x).

ACCURACY:
                     Relative error:
arithmetic   domain     # trials      peak         rms
   IEEE       0,1        30000       2.5e-16     6.8e-17

Cephes Math Library, Release 2.8: June, 2000
Copyright 1984, 1987, 2000 by Stephen L. Moshier
*************************************************************************/
double ellipticintegralk(const double m, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Complete elliptic integral of the first kind

Approximates the integral

    K(m) = integral[0..pi/2] dt / sqrt( 1 - m*sin^2(t) )

where m = 1 - m1, using the approximation P(x) - log(x)*Q(x). The argument
m1 is used rather than m so that the logarithmic singularity at m = 1 will
be shifted to the origin; this preserves maximum accuracy. K(0) = pi/2.

ACCURACY:
                     Relative error:
arithmetic   domain     # trials      peak         rms
   IEEE       0,1        30000       2.5e-16     6.8e-17

Cephes Math Library, Release 2.8: June, 2000
Copyright 1984, 1987, 2000 by Stephen L. Moshier
*************************************************************************/
double ellipticintegralkhighprecision(const double m1, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Incomplete elliptic integral of the second kind

Approximates the integral

    E(phi|m) = integral[0..phi] sqrt( 1 - m*sin^2(t) ) dt

of amplitude phi and modulus m, using the arithmetic-geometric mean
algorithm.

ACCURACY:
Tested at random arguments with phi in [-10, 10] and m in [0, 1].
                     Relative error:
arithmetic   domain     # trials      peak         rms
   IEEE     -10,10      150000       3.3e-15     1.4e-16

Cephes Math Library Release 2.8: June, 2000
Copyright 1984, 1987, 1993, 2000 by Stephen L. Moshier
*************************************************************************/
double incompleteellipticintegrale(const double phi, const double m, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Incomplete elliptic integral of the first kind F(phi|m)

Approximates the integral

    F(phi|m) = integral[0..phi] dt / sqrt( 1 - m*sin^2(t) )

of amplitude phi and modulus m, using the arithmetic-geometric mean
algorithm.

ACCURACY:
Tested at random points with m in [0, 1] and phi as indicated.
                     Relative error:
arithmetic   domain     # trials      peak         rms
   IEEE     -10,10      200000       7.4e-16     1.0e-16

Cephes Math Library Release 2.8: June, 2000
Copyright 1984, 1987, 2000 by Stephen L. Moshier
*************************************************************************/
double incompleteellipticintegralk(const double phi, const double m, const xparams _xparams = alglib::xdefault);
eigsubspacereport
eigsubspacestate
eigsubspacecreate
eigsubspacecreatebuf
eigsubspaceooccontinue
eigsubspaceoocgetrequestdata
eigsubspaceoocgetrequestinfo
eigsubspaceoocsendresult
eigsubspaceoocstart
eigsubspaceoocstop
eigsubspacesetcond
eigsubspacesetwarmstart
eigsubspacesolvedenses
eigsubspacesolvesparses
hmatrixevd
hmatrixevdi
hmatrixevdr
rmatrixevd
smatrixevd
smatrixevdi
smatrixevdr
smatrixtdevd
smatrixtdevdi
smatrixtdevdr
/*************************************************************************
This object stores the report of the subspace iteration eigensolver. You
should use ALGLIB functions to work with this object.
*************************************************************************/
class eigsubspacereport
{
public:
    eigsubspacereport();
    eigsubspacereport(const eigsubspacereport &rhs);
    eigsubspacereport& operator=(const eigsubspacereport &rhs);
    virtual ~eigsubspacereport();
    ae_int_t iterationscount;
};
/*************************************************************************
This object stores the state of the subspace iteration algorithm. You
should use ALGLIB functions to work with this object.
*************************************************************************/
class eigsubspacestate
{
public:
    eigsubspacestate();
    eigsubspacestate(const eigsubspacestate &rhs);
    eigsubspacestate& operator=(const eigsubspacestate &rhs);
    virtual ~eigsubspacestate();
};
/*************************************************************************
This function initializes the subspace iteration solver. This solver is
used to solve symmetric real eigenproblems where just a few (top K)
eigenvalues and corresponding eigenvectors are required.

This solver can be significantly faster than complete EVD decomposition
in the following cases:
* when only a small fraction of top eigenpairs of a dense matrix is
  required. When K approaches N, this solver is slower than complete
  dense EVD
* when the problem matrix is sparse (and/or is not known explicitly, i.e.
  only the matrix-matrix product can be performed)

USAGE (explicit dense/sparse matrix):
1. User initializes algorithm state with eigsubspacecreate() call
2. [optional] User tunes solver parameters by calling eigsubspacesetcond()
   or other functions
3. User calls eigsubspacesolvedense() or eigsubspacesolvesparse() methods,
   which take algorithm state and a 2D array or alglib.sparsematrix object.

USAGE (out-of-core mode):
1. User initializes algorithm state with eigsubspacecreate() call
2. [optional] User tunes solver parameters by calling eigsubspacesetcond()
   or other functions
3. User activates out-of-core mode of the solver and repeatedly calls
   communication functions in a loop like below:
   > alglib.eigsubspaceoocstart(state)
   > while alglib.eigsubspaceooccontinue(state) do
   >     alglib.eigsubspaceoocgetrequestinfo(state, out RequestType, out M)
   >     alglib.eigsubspaceoocgetrequestdata(state, out X)
   >     [calculate Y=A*X, with X=R^NxM]
   >     alglib.eigsubspaceoocsendresult(state, in Y)
   > alglib.eigsubspaceoocstop(state, out W, out Z, out Report)

INPUT PARAMETERS:
    N       -   problem dimensionality, N>0
    K       -   number of top eigenvectors to calculate, 0<K<=N.

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

NOTE: if you solve many similar EVD problems you may find it useful to
      reuse the previous subspace as a warm-start point for a new EVD
      problem. It can be done with the eigsubspacesetwarmstart() function.

  -- ALGLIB --
     Copyright 16.01.2017 by Bochkanov Sergey
*************************************************************************/
void eigsubspacecreate(const ae_int_t n, const ae_int_t k, eigsubspacestate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Buffered version of the constructor which aims to reuse previously
allocated memory as much as possible.

  -- ALGLIB --
     Copyright 16.01.2017 by Bochkanov Sergey
*************************************************************************/
void eigsubspacecreatebuf(const ae_int_t n, const ae_int_t k, eigsubspacestate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function performs subspace iteration in the out-of-core mode. It
should be used in conjunction with other out-of-core-related functions of
this subpackage in a loop like below:

> alglib.eigsubspaceoocstart(state)
> while alglib.eigsubspaceooccontinue(state) do
>     alglib.eigsubspaceoocgetrequestinfo(state, out RequestType, out M)
>     alglib.eigsubspaceoocgetrequestdata(state, out X)
>     [calculate Y=A*X, with X=R^NxM]
>     alglib.eigsubspaceoocsendresult(state, in Y)
> alglib.eigsubspaceoocstop(state, out W, out Z, out Report)

  -- ALGLIB --
     Copyright 16.01.2017 by Bochkanov Sergey
*************************************************************************/
bool eigsubspaceooccontinue(eigsubspacestate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function is used to retrieve information about an out-of-core request
sent by the solver to user code: the matrix X (array[N,RequestSize]) which
has to be multiplied by the out-of-core matrix A in a product A*X.

This function returns just the request data; in order to get the size of
the data prior to processing the request, use
eigsubspaceoocgetrequestinfo().

It should be used in conjunction with other out-of-core-related functions
of this subpackage in a loop like below:

> alglib.eigsubspaceoocstart(state)
> while alglib.eigsubspaceooccontinue(state) do
>     alglib.eigsubspaceoocgetrequestinfo(state, out RequestType, out M)
>     alglib.eigsubspaceoocgetrequestdata(state, out X)
>     [calculate Y=A*X, with X=R^NxM]
>     alglib.eigsubspaceoocsendresult(state, in Y)
> alglib.eigsubspaceoocstop(state, out W, out Z, out Report)

INPUT PARAMETERS:
    State   -   solver running in out-of-core mode
    X       -   possibly preallocated storage; reallocated if needed, left
                unchanged if large enough to store the request data.

OUTPUT PARAMETERS:
    X       -   array[N,RequestSize] or larger, the leading rectangle is
                filled with the dense matrix X.

  -- ALGLIB --
     Copyright 16.01.2017 by Bochkanov Sergey
*************************************************************************/
void eigsubspaceoocgetrequestdata(eigsubspacestate &state, real_2d_array &x, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function is used to retrieve information about an out-of-core request
sent by the solver to user code: the request type (the current version of
the solver sends only requests for matrix-matrix products) and the request
size (the size of the matrices being multiplied).

This function returns just the request metrics; in order to get the
contents of the matrices being multiplied, use
eigsubspaceoocgetrequestdata().

It should be used in conjunction with other out-of-core-related functions
of this subpackage in a loop like below:

> alglib.eigsubspaceoocstart(state)
> while alglib.eigsubspaceooccontinue(state) do
>     alglib.eigsubspaceoocgetrequestinfo(state, out RequestType, out M)
>     alglib.eigsubspaceoocgetrequestdata(state, out X)
>     [calculate Y=A*X, with X=R^NxM]
>     alglib.eigsubspaceoocsendresult(state, in Y)
> alglib.eigsubspaceoocstop(state, out W, out Z, out Report)

INPUT PARAMETERS:
    State       -   solver running in out-of-core mode

OUTPUT PARAMETERS:
    RequestType -   type of the request to process:
                    * 0 - for a matrix-matrix product A*X, with A being an
                      NxN matrix whose eigenvalues/vectors are needed, and
                      X being an NxREQUESTSIZE one which is returned by
                      eigsubspaceoocgetrequestdata().
    RequestSize -   size of the X matrix (number of columns); usually it
                    is several times larger than the number of vectors K
                    requested by the user.

  -- ALGLIB --
     Copyright 16.01.2017 by Bochkanov Sergey
*************************************************************************/
void eigsubspaceoocgetrequestinfo(eigsubspacestate &state, ae_int_t &requesttype, ae_int_t &requestsize, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function is used to send the user's reply to an out-of-core request sent by the solver. Usually it is the product A*X for the matrix X returned by the solver. It should be used in conjunction with other out-of-core-related functions of this subpackage in a loop like below: > alglib.eigsubspaceoocstart(state) > while alglib.eigsubspaceooccontinue(state) do > alglib.eigsubspaceoocgetrequestinfo(state, out RequestType, out M) > alglib.eigsubspaceoocgetrequestdata(state, out X) > [calculate Y=A*X, with X=R^NxM] > alglib.eigsubspaceoocsendresult(state, in Y) > alglib.eigsubspaceoocstop(state, out W, out Z, out Report) INPUT PARAMETERS: State - solver running in out-of-core mode AX - array[N,RequestSize] or larger, leading rectangle is filled with the product A*X. -- ALGLIB -- Copyright 16.01.2017 by Bochkanov Sergey *************************************************************************/
void eigsubspaceoocsendresult(eigsubspacestate &state, const real_2d_array &ax, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function initiates the out-of-core mode of the subspace eigensolver. It should be used in conjunction with other out-of-core-related functions of this subpackage in a loop like below: > alglib.eigsubspaceoocstart(state) > while alglib.eigsubspaceooccontinue(state) do > alglib.eigsubspaceoocgetrequestinfo(state, out RequestType, out M) > alglib.eigsubspaceoocgetrequestdata(state, out X) > [calculate Y=A*X, with X=R^NxM] > alglib.eigsubspaceoocsendresult(state, in Y) > alglib.eigsubspaceoocstop(state, out W, out Z, out Report) INPUT PARAMETERS: State - solver object MType - matrix type and solver mode: * 0 = real symmetric matrix A, products of the form A*X are computed. At every step the basis of the invariant subspace is reorthogonalized with an LQ decomposition, which makes the algorithm more robust. This is the first mode introduced in ALGLIB, and the most precise and robust one. However, it is suboptimal for easy problems which can be solved in 3-5 iterations without the LQ step. * 1 = real symmetric matrix A, products of the form A*X are computed. The invariant subspace is NOT reorthogonalized and no error checks are performed. The solver stops after the specified number of iterations, which should be small, 5 at most. This mode is intended for easy problems with extremely fast convergence. Future versions of ALGLIB may introduce support for other matrix types; for now, only symmetric eigenproblems are supported. -- ALGLIB -- Copyright 07.06.2023 by Bochkanov Sergey *************************************************************************/
void eigsubspaceoocstart(eigsubspacestate &state, const ae_int_t mtype, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function finalizes the out-of-core mode of the subspace eigensolver. It should be used in conjunction with other out-of-core-related functions of this subpackage in a loop like below: > alglib.eigsubspaceoocstart(state) > while alglib.eigsubspaceooccontinue(state) do > alglib.eigsubspaceoocgetrequestinfo(state, out RequestType, out M) > alglib.eigsubspaceoocgetrequestdata(state, out X) > [calculate Y=A*X, with X=R^NxM] > alglib.eigsubspaceoocsendresult(state, in Y) > alglib.eigsubspaceoocstop(state, out W, out Z, out Report) INPUT PARAMETERS: State - solver state OUTPUT PARAMETERS: W - array[K], depending on solver settings: * top K eigenvalues in descending order - if eigenvectors are returned in Z * zeros - if an invariant subspace is returned in Z Z - array[N,K], depending on solver settings either: * matrix of eigenvectors found * orthogonal basis of the K-dimensional invariant subspace Rep - report with additional parameters -- ALGLIB -- Copyright 16.01.2017 by Bochkanov Sergey *************************************************************************/
void eigsubspaceoocstop(eigsubspacestate &state, real_1d_array &w, real_2d_array &z, eigsubspacereport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets stopping criteria for the solver: * error in eigenvector/value allowed by the solver * maximum number of iterations to perform INPUT PARAMETERS: State - solver structure Eps - eps>=0, a non-zero value tells the solver that it can stop after all eigenvalues converged with error roughly proportional to eps*LAMBDA_MAX, where LAMBDA_MAX is the maximum eigenvalue. A zero value means that no check for precision is performed. MaxIts - maxits>=0, a non-zero value tells the solver that it can stop after maxits steps (no matter how precise the current estimate is) NOTE: passing eps=0 and maxits=0 results in automatic selection of a moderate eps as the stopping criterion (1.0E-6 in the current implementation, but it may change without notice). NOTE: very small values of eps are possible (say, 1.0E-12), although the larger the problem you solve (N and/or K), the harder it is to find precise eigenvectors because rounding errors tend to accumulate. NOTE: passing a non-zero eps results in some performance penalty, roughly equal to 2N*(2K)^2 FLOPs per iteration. These additional computations are required in order to estimate the current error in the eigenvalues via the Rayleigh-Ritz process. Most of this additional time is spent on the construction of a ~2Kx2K symmetric subproblem whose eigenvalues are checked with an exact eigensolver. This additional time is negligible if you search for eigenvalues of a large dense matrix, but may become noticeable on highly sparse EVD problems, where the cost of the matrix-matrix product is low. If you set eps to exactly zero, the Rayleigh-Ritz phase is completely turned off. -- ALGLIB -- Copyright 16.01.2017 by Bochkanov Sergey *************************************************************************/
void eigsubspacesetcond(eigsubspacestate &state, const double eps, const ae_int_t maxits, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets warm-start mode of the solver: next call to the solver will reuse previous subspace as warm-start point. It can significantly speed-up convergence when you solve many similar eigenproblems. INPUT PARAMETERS: State - solver structure UseWarmStart- either True or False -- ALGLIB -- Copyright 12.11.2017 by Bochkanov Sergey *************************************************************************/
void eigsubspacesetwarmstart(eigsubspacestate &state, const bool usewarmstart, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function runs the subspace eigensolver for a dense NxN symmetric matrix A, given by its upper or lower triangle. This function cannot process nonsymmetric matrices. INPUT PARAMETERS: State - solver state A - array[N,N], symmetric NxN matrix given by one of its triangles IsUpper - whether the upper or lower triangle of A is given (the other one is not referenced at all). OUTPUT PARAMETERS: W - array[K], top K eigenvalues ordered by descending absolute value Z - array[N,K], matrix of eigenvectors found Rep - report with additional parameters NOTE: internally this function allocates a copy of the NxN dense A. You should take it into account when working with very large matrices occupying almost all RAM. ! FREE EDITION OF ALGLIB: ! ! Free Edition of ALGLIB supports following important features for this ! function: ! * C++ version: x64 SIMD support using C++ intrinsics ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics ! ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB ! Reference Manual in order to find out how to activate SIMD support ! in ALGLIB. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. -- ALGLIB -- Copyright 16.01.2017 by Bochkanov Sergey *************************************************************************/
void eigsubspacesolvedenses(eigsubspacestate &state, const real_2d_array &a, const bool isupper, real_1d_array &w, real_2d_array &z, eigsubspacereport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function runs the subspace eigensolver for a sparse NxN symmetric matrix A, given by its upper or lower triangle. This function cannot process nonsymmetric matrices. INPUT PARAMETERS: State - solver state A - NxN symmetric matrix given by one of its triangles IsUpper - whether the upper or lower triangle of A is given (the other one is not referenced at all). OUTPUT PARAMETERS: W - array[K], top K eigenvalues ordered by descending absolute value Z - array[N,K], matrix of eigenvectors found Rep - report with additional parameters -- ALGLIB -- Copyright 16.01.2017 by Bochkanov Sergey *************************************************************************/
void eigsubspacesolvesparses(eigsubspacestate &state, const sparsematrix &a, const bool isupper, real_1d_array &w, real_2d_array &z, eigsubspacereport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Finding the eigenvalues and eigenvectors of a Hermitian matrix The algorithm finds eigen pairs of a Hermitian matrix by reducing it to real tridiagonal form and using the QL/QR algorithm. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. Input parameters: A - Hermitian matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - storage format. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. Output parameters: D - eigenvalues in ascending order. Array whose index ranges within [0..N-1]. Z - if ZNeeded is equal to: * 0, Z hasn't changed; * 1, Z contains the eigenvectors. Array whose indexes range within [0..N-1, 0..N-1]. The eigenvectors are stored in the matrix columns. Result: True, if the algorithm has converged. False, if the algorithm hasn't converged (rare case). Note: eigenvectors of Hermitian matrix are defined up to multiplication by a complex number L, such that |L|=1. -- ALGLIB -- Copyright 2005, 23 March 2007 by Bochkanov Sergey *************************************************************************/
bool hmatrixevd(const complex_2d_array &a, const ae_int_t n, const ae_int_t zneeded, const bool isupper, real_1d_array &d, complex_2d_array &z, const xparams _xparams = alglib::xdefault);
/************************************************************************* Subroutine for finding the eigenvalues and eigenvectors of a Hermitian matrix with given indexes by using bisection and inverse iteration methods Input parameters: A - Hermitian matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. IsUpperA - storage format of matrix A. I1, I2 - index interval for searching (from I1 to I2). 0 <= I1 <= I2 <= N-1. Output parameters: W - array of the eigenvalues found. Array whose index ranges within [0..I2-I1]. Z - if ZNeeded is equal to: * 0, Z hasn't changed; * 1, Z contains eigenvectors. Array whose indexes range within [0..N-1, 0..I2-I1]. In that case, the eigenvectors are stored in the matrix columns. Result: True, if successful. W contains the eigenvalues, Z contains the eigenvectors (if needed). False, if the bisection method subroutine wasn't able to find the eigenvalues in the given interval or if the inverse iteration subroutine wasn't able to find all the corresponding eigenvectors. In that case, the eigenvalues and eigenvectors are not returned. Note: eigenvectors of a Hermitian matrix are defined up to multiplication by a complex number L such that |L|=1. -- ALGLIB -- Copyright 07.01.2006, 24.03.2007 by Bochkanov Sergey. *************************************************************************/
bool hmatrixevdi(const complex_2d_array &a, const ae_int_t n, const ae_int_t zneeded, const bool isupper, const ae_int_t i1, const ae_int_t i2, real_1d_array &w, complex_2d_array &z, const xparams _xparams = alglib::xdefault);
/************************************************************************* Subroutine for finding the eigenvalues (and eigenvectors) of a Hermitian matrix in a given half-interval (A, B] by using bisection and inverse iteration Input parameters: A - Hermitian matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. IsUpperA - storage format of matrix A. B1, B2 - half-interval (B1, B2] to search eigenvalues in. Output parameters: M - number of eigenvalues found in a given half-interval, M>=0 W - array of the eigenvalues found. Array whose index ranges within [0..M-1]. Z - if ZNeeded is equal to: * 0, Z hasn't changed; * 1, Z contains eigenvectors. Array whose indexes range within [0..N-1, 0..M-1]. The eigenvectors are stored in the matrix columns. Result: True, if successful. M contains the number of eigenvalues in the given half-interval (could be equal to 0), W contains the eigenvalues, Z contains the eigenvectors (if needed). False, if the bisection method subroutine wasn't able to find the eigenvalues in the given interval or if the inverse iteration subroutine wasn't able to find all the corresponding eigenvectors. In that case, the eigenvalues and eigenvectors are not returned, M is equal to 0. Note: eigenvectors of a Hermitian matrix are defined up to multiplication by a complex number L such that |L|=1. -- ALGLIB -- Copyright 07.01.2006, 24.03.2007 by Bochkanov Sergey. *************************************************************************/
bool hmatrixevdr(const complex_2d_array &a, const ae_int_t n, const ae_int_t zneeded, const bool isupper, const double b1, const double b2, ae_int_t &m, real_1d_array &w, complex_2d_array &z, const xparams _xparams = alglib::xdefault);
/************************************************************************* Finding eigenvalues and eigenvectors of a general (unsymmetric) matrix ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. The algorithm finds eigenvalues and eigenvectors of a general matrix by using the QR algorithm with multiple shifts. The algorithm can find eigenvalues and both left and right eigenvectors. The right eigenvector is a vector x such that A*x = w*x, and the left eigenvector is a vector y such that y'*A = w*y' (here y' implies a complex conjugate transposition of vector y). Input parameters: A - matrix. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. VNeeded - flag controlling whether eigenvectors are needed or not. If VNeeded is equal to: * 0, eigenvectors are not returned; * 1, right eigenvectors are returned; * 2, left eigenvectors are returned; * 3, both left and right eigenvectors are returned. Output parameters: WR - real parts of eigenvalues. Array whose index ranges within [0..N-1]. WI - imaginary parts of eigenvalues. Array whose index ranges within [0..N-1]. VL, VR - arrays of left and right eigenvectors (if they are needed). If WI[i]=0, the respective eigenvalue is a real number, and it corresponds to the column number I of matrices VL/VR. 
If WI[i]>0, we have a pair of complex conjugate eigenvalues with positive and negative imaginary parts: the first eigenvalue is WR[i] + sqrt(-1)*WI[i]; the second eigenvalue is WR[i+1] + sqrt(-1)*WI[i+1]; WI[i]>0, WI[i+1] = -WI[i] < 0. In that case, the eigenvector corresponding to the first eigenvalue is located in columns i and i+1 of matrices VL/VR (column i contains the real part, and column i+1 contains the imaginary part), and the vector corresponding to the second eigenvalue is the complex conjugate of the first vector. Arrays whose indexes range within [0..N-1, 0..N-1]. Result: True, if the algorithm has converged. False, if the algorithm has not converged. Note 1: Some users may ask the following question: what if WI[N-1]>0? Then WI[N] would have to contain the complex conjugate eigenvalue, but the array only has size N. The answer is as follows: such a situation cannot occur because the algorithm finds pairs of eigenvalues; therefore, if WI[i]>0, then i is strictly less than N-1. Note 2: The algorithm performance depends on the value of the internal parameter NS of the InternalSchurDecomposition subroutine which defines the number of shifts in the QR algorithm (similarly to the block width in block-matrix algorithms of linear algebra). If you require maximum performance on your machine, it is recommended to adjust this parameter manually. See also the InternalTREVC subroutine. The algorithm is based on the LAPACK 3.0 library. *************************************************************************/
bool rmatrixevd(const real_2d_array &a, const ae_int_t n, const ae_int_t vneeded, real_1d_array &wr, real_1d_array &wi, real_2d_array &vl, real_2d_array &vr, const xparams _xparams = alglib::xdefault);
/************************************************************************* Finding the eigenvalues and eigenvectors of a symmetric matrix The algorithm finds eigen pairs of a symmetric matrix by reducing it to tridiagonal form and using the QL/QR algorithm. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. Input parameters: A - symmetric matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. IsUpper - storage format. Output parameters: D - eigenvalues in ascending order. Array whose index ranges within [0..N-1]. Z - if ZNeeded is equal to: * 0, Z hasn't changed; * 1, Z contains the eigenvectors. Array whose indexes range within [0..N-1, 0..N-1]. The eigenvectors are stored in the matrix columns. Result: True, if the algorithm has converged. False, if the algorithm hasn't converged (rare case). -- ALGLIB -- Copyright 2005-2008 by Bochkanov Sergey *************************************************************************/
bool smatrixevd(const real_2d_array &a, const ae_int_t n, const ae_int_t zneeded, const bool isupper, real_1d_array &d, real_2d_array &z, const xparams _xparams = alglib::xdefault);
/************************************************************************* Subroutine for finding the eigenvalues and eigenvectors of a symmetric matrix with given indexes by using bisection and inverse iteration methods. Input parameters: A - symmetric matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. IsUpperA - storage format of matrix A. I1, I2 - index interval for searching (from I1 to I2). 0 <= I1 <= I2 <= N-1. Output parameters: W - array of the eigenvalues found. Array whose index ranges within [0..I2-I1]. Z - if ZNeeded is equal to: * 0, Z hasn't changed; * 1, Z contains eigenvectors. Array whose indexes range within [0..N-1, 0..I2-I1]. In that case, the eigenvectors are stored in the matrix columns. Result: True, if successful. W contains the eigenvalues, Z contains the eigenvectors (if needed). False, if the bisection method subroutine wasn't able to find the eigenvalues in the given interval or if the inverse iteration subroutine wasn't able to find all the corresponding eigenvectors. In that case, the eigenvalues and eigenvectors are not returned. -- ALGLIB -- Copyright 07.01.2006 by Bochkanov Sergey *************************************************************************/
bool smatrixevdi(const real_2d_array &a, const ae_int_t n, const ae_int_t zneeded, const bool isupper, const ae_int_t i1, const ae_int_t i2, real_1d_array &w, real_2d_array &z, const xparams _xparams = alglib::xdefault);
/************************************************************************* Subroutine for finding the eigenvalues (and eigenvectors) of a symmetric matrix in a given half open interval (A, B] by using bisection and inverse iteration ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. Input parameters: A - symmetric matrix which is given by its upper or lower triangular part. Array [0..N-1, 0..N-1]. N - size of matrix A. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. IsUpperA - storage format of matrix A. B1, B2 - half open interval (B1, B2] to search eigenvalues in. Output parameters: M - number of eigenvalues found in a given half-interval (M>=0). W - array of the eigenvalues found. Array whose index ranges within [0..M-1]. Z - if ZNeeded is equal to: * 0, Z hasn't changed; * 1, Z contains eigenvectors. Array whose indexes range within [0..N-1, 0..M-1]. The eigenvectors are stored in the matrix columns. Result: True, if successful. M contains the number of eigenvalues in the given half-interval (could be equal to 0), W contains the eigenvalues, Z contains the eigenvectors (if needed). False, if the bisection method subroutine wasn't able to find the eigenvalues in the given interval or if the inverse iteration subroutine wasn't able to find all the corresponding eigenvectors. In that case, the eigenvalues and eigenvectors are not returned, M is equal to 0. 
-- ALGLIB -- Copyright 07.01.2006 by Bochkanov Sergey *************************************************************************/
bool smatrixevdr(const real_2d_array &a, const ae_int_t n, const ae_int_t zneeded, const bool isupper, const double b1, const double b2, ae_int_t &m, real_1d_array &w, real_2d_array &z, const xparams _xparams = alglib::xdefault);
/************************************************************************* Finding the eigenvalues and eigenvectors of a tridiagonal symmetric matrix The algorithm finds the eigen pairs of a tridiagonal symmetric matrix by using a QL/QR algorithm with implicit shifts. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. Input parameters: D - the main diagonal of a tridiagonal matrix. Array whose index ranges within [0..N-1]. E - the secondary diagonal of a tridiagonal matrix. Array whose index ranges within [0..N-2]. N - size of matrix A. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not needed; * 1, the eigenvectors of a tridiagonal matrix are multiplied by the square matrix Z. It is used if the tridiagonal matrix is obtained by the similarity transformation of a symmetric matrix; * 2, the eigenvectors of a tridiagonal matrix replace the square matrix Z; * 3, matrix Z contains the first row of the eigenvectors matrix. Z - if ZNeeded=1, Z contains the square matrix by which the eigenvectors are multiplied. Array whose indexes range within [0..N-1, 0..N-1]. Output parameters: D - eigenvalues in ascending order. Array whose index ranges within [0..N-1]. Z - if ZNeeded is equal to: * 0, Z hasn't changed; * 1, Z contains the product of a given matrix (from the left) and the eigenvectors matrix (from the right); * 2, Z contains the eigenvectors. * 3, Z contains the first row of the eigenvectors matrix. 
If ZNeeded<3, Z is the array whose indexes range within [0..N-1, 0..N-1]. In that case, the eigenvectors are stored in the matrix columns. If ZNeeded=3, Z is the array whose indexes range within [0..0, 0..N-1]. Result: True, if the algorithm has converged. False, if the algorithm hasn't converged. -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University September 30, 1994 *************************************************************************/
bool smatrixtdevd(real_1d_array &d, const real_1d_array &e, const ae_int_t n, const ae_int_t zneeded, real_2d_array &z, const xparams _xparams = alglib::xdefault);
/************************************************************************* Subroutine for finding tridiagonal matrix eigenvalues/vectors with given indexes (in ascending order) by using the bisection and inverse iteration. Input parameters: D - the main diagonal of a tridiagonal matrix. Array whose index ranges within [0..N-1]. E - the secondary diagonal of a tridiagonal matrix. Array whose index ranges within [0..N-2]. N - size of matrix. N>=0. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not needed; * 1, the eigenvectors of a tridiagonal matrix are multiplied by the square matrix Z. It is used if the tridiagonal matrix is obtained by the similarity transformation of a symmetric matrix. * 2, the eigenvectors of a tridiagonal matrix replace matrix Z. I1, I2 - index interval for searching (from I1 to I2). 0 <= I1 <= I2 <= N-1. Z - if ZNeeded is equal to: * 0, Z isn't used and remains unchanged; * 1, Z contains the square matrix (array whose indexes range within [0..N-1, 0..N-1]) which reduces the given symmetric matrix to tridiagonal form; * 2, Z isn't used (but changed on the exit). Output parameters: D - array of the eigenvalues found. Array whose index ranges within [0..I2-I1]. Z - if ZNeeded is equal to: * 0, doesn't contain any information; * 1, contains the product of a given NxN matrix Z (from the left) and Nx(I2-I1) matrix of the eigenvectors found (from the right). Array whose indexes range within [0..N-1, 0..I2-I1]. * 2, contains the matrix of the eigenvectors found. Array whose indexes range within [0..N-1, 0..I2-I1]. Result: True, if successful. In that case, D contains the eigenvalues, Z contains the eigenvectors (if needed). It should be noted that the subroutine changes the size of arrays D and Z. 
False, if the bisection method subroutine wasn't able to find the eigenvalues in the given interval or if the inverse iteration subroutine wasn't able to find all the corresponding eigenvectors. In that case, the eigenvalues and eigenvectors are not returned. -- ALGLIB -- Copyright 25.12.2005 by Bochkanov Sergey *************************************************************************/
bool smatrixtdevdi(real_1d_array &d, const real_1d_array &e, const ae_int_t n, const ae_int_t zneeded, const ae_int_t i1, const ae_int_t i2, real_2d_array &z, const xparams _xparams = alglib::xdefault);
/************************************************************************* Subroutine for finding the tridiagonal matrix eigenvalues/vectors in a given half-interval (A, B] by using bisection and inverse iteration. Input parameters: D - the main diagonal of a tridiagonal matrix. Array whose index ranges within [0..N-1]. E - the secondary diagonal of a tridiagonal matrix. Array whose index ranges within [0..N-2]. N - size of matrix, N>=0. ZNeeded - flag controlling whether the eigenvectors are needed or not. If ZNeeded is equal to: * 0, the eigenvectors are not needed; * 1, the eigenvectors of a tridiagonal matrix are multiplied by the square matrix Z. It is used if the tridiagonal matrix is obtained by the similarity transformation of a symmetric matrix. * 2, the eigenvectors of a tridiagonal matrix replace matrix Z. A, B - half-interval (A, B] to search eigenvalues in. Z - if ZNeeded is equal to: * 0, Z isn't used and remains unchanged; * 1, Z contains the square matrix (array whose indexes range within [0..N-1, 0..N-1]) which reduces the given symmetric matrix to tridiagonal form; * 2, Z isn't used (but changed on the exit). Output parameters: D - array of the eigenvalues found. Array whose index ranges within [0..M-1]. M - number of eigenvalues found in the given half-interval (M>=0). Z - if ZNeeded is equal to: * 0, doesn't contain any information; * 1, contains the product of a given NxN matrix Z (from the left) and NxM matrix of the eigenvectors found (from the right). Array whose indexes range within [0..N-1, 0..M-1]. * 2, contains the matrix of the eigenvectors found. Array whose indexes range within [0..N-1, 0..M-1]. Result: True, if successful. In that case, M contains the number of eigenvalues in the given half-interval (could be equal to 0), D contains the eigenvalues, Z contains the eigenvectors (if needed). It should be noted that the subroutine changes the size of arrays D and Z. 
False, if the bisection method subroutine wasn't able to find the eigenvalues in the given interval or if the inverse iteration subroutine wasn't able to find all the corresponding eigenvectors. In that case, the eigenvalues and eigenvectors are not returned, M is equal to 0. -- ALGLIB -- Copyright 31.03.2008 by Bochkanov Sergey *************************************************************************/
bool smatrixtdevdr(real_1d_array &d, const real_1d_array &e, const ae_int_t n, const ae_int_t zneeded, const double a, const double b, ae_int_t &m, real_2d_array &z, const xparams _xparams = alglib::xdefault);
exponentialintegralei
exponentialintegralen
/*************************************************************************
Exponential integral Ei(x)

              x
               -     t
              | |   e
   Ei(x) =   -|-   ---  dt .
            | |     t
             -
            -inf

Not defined for x <= 0.
See also expn.c.

ACCURACY:

                     Relative error:
arithmetic   domain     # trials      peak         rms
   IEEE       0,100       50000      8.6e-16     1.3e-16

Cephes Math Library Release 2.8:  May, 1999
Copyright 1999 by Stephen L. Moshier
*************************************************************************/
double exponentialintegralei(const double x, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Exponential integral En(x)

Evaluates the exponential integral

                inf.
                  -
                 | |   -xt
                 |    e
     E (x)  =    |    ----  dt.
      n          |      n
               | |     t
                -
                 1

Both n and x must be nonnegative.

The routine employs either a power series, a continued fraction, or an
asymptotic formula depending on the relative values of n and x.

ACCURACY:

                     Relative error:
arithmetic   domain     # trials      peak         rms
   IEEE      0, 30       10000       1.7e-15     3.6e-16

Cephes Math Library Release 2.8:  June, 2000
Copyright 1985, 2000 by Stephen L. Moshier
*************************************************************************/
double exponentialintegralen(const double x, const ae_int_t n, const xparams _xparams = alglib::xdefault);
fcdistribution
fdistribution
invfdistribution
/*************************************************************************
Complemented F distribution

Returns the area from x to infinity under the F density function (also
known as Snedecor's density or the variance ratio density).

                     inf.
                      -
             1       | |  a-1      b-1
1-P(x)  =  ------    |   t    (1-t)    dt
           B(a,b)  | |
                    -
                     x

The incomplete beta integral is used, according to the formula

P(x) = incbet( df2/2, df1/2, df2/(df2 + df1*x) ).

ACCURACY:

Tested at random points (a,b,x) in the indicated intervals.

               x     a,b                    Relative error:
arithmetic  domain  domain     # trials      peak         rms
   IEEE      0,1    1,100       100000      3.7e-14     5.9e-16
   IEEE      1,5    1,100       100000      8.0e-15     1.6e-15
   IEEE      0,1    1,10000     100000      1.8e-11     3.5e-13
   IEEE      1,5    1,10000     100000      2.0e-11     3.0e-12

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier
*************************************************************************/
double fcdistribution(const ae_int_t a, const ae_int_t b, const double x, const xparams _xparams = alglib::xdefault);
/*************************************************************************
F distribution

Returns the area from zero to x under the F density function (also known
as Snedecor's density or the variance ratio density). This is the density
of x = (u1/df1)/(u2/df2), where u1 and u2 are random variables having
Chi square distributions with df1 and df2 degrees of freedom,
respectively.

The incomplete beta integral is used, according to the formula

P(x) = incbet( df1/2, df2/2, df1*x/(df2 + df1*x) ).

The arguments a and b are greater than zero, and x is nonnegative.

ACCURACY:

Tested at random points (a,b,x).

               x     a,b                    Relative error:
arithmetic  domain  domain     # trials      peak         rms
   IEEE      0,1    0,100       100000      9.8e-15     1.7e-15
   IEEE      1,5    0,100       100000      6.5e-15     3.5e-16
   IEEE      0,1    1,10000     100000      2.2e-11     3.3e-12
   IEEE      1,5    1,10000     100000      1.1e-11     1.7e-13

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier
*************************************************************************/
double fdistribution(const ae_int_t a, const ae_int_t b, const double x, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Inverse of complemented F distribution

Finds the F density argument x such that the integral from x to infinity
of the F density is equal to the given probability p.

This is accomplished using the inverse beta integral function and the
relations

     z = incbi( df2/2, df1/2, p )
     x = df2 (1-z) / (df1 z).

Note: the following relations hold for the inverse of the uncomplemented
F distribution:

     z = incbi( df1/2, df2/2, p )
     x = df2 z / (df1 (1-z)).

ACCURACY:

Tested at random points (a,b,p).

             a,b                    Relative error:
arithmetic  domain     # trials      peak         rms
 For p between .001 and 1:
   IEEE     1,100       100000      8.3e-15     4.7e-16
   IEEE     1,10000     100000      2.1e-11     1.4e-13
 For p between 10^-6 and 10^-3:
   IEEE     1,100        50000      1.3e-12     8.4e-15
   IEEE     1,10000      50000      3.0e-12     4.8e-14

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier
*************************************************************************/
double invfdistribution(const ae_int_t a, const ae_int_t b, const double y, const xparams _xparams = alglib::xdefault);
fftc1d
fftc1dinv
fftr1d
fftr1dbuf
fftr1dinv
fftr1dinvbuf
fft_complex_d1 Complex FFT: simple example
fft_complex_d2 Complex FFT: advanced example
fft_real_d1 Real FFT: simple example
fft_real_d2 Real FFT: advanced example
/*************************************************************************
1-dimensional complex FFT.

Array size N may be an arbitrary number (composite or prime). Composite
N's are handled with a cache-oblivious variation of the Cooley-Tukey
algorithm. Small prime-factors are transformed using hard coded codelets
(similar to FFTW codelets, but without low-level optimization), large
prime-factors are handled with Bluestein's algorithm.

Fastest transforms are for smooth N's (prime factors are 2, 3, 5 only),
the fastest being powers of 2. When N has prime factors larger than
these, but orders of magnitude smaller than N, computations will be about
4 times slower than for nearby highly composite N's. When N itself is
prime, speed will be 6 times lower.

Algorithm has O(N*logN) complexity for any N (composite or prime).

INPUT PARAMETERS
    A   -   array[0..N-1] - complex function to be transformed
    N   -   problem size

OUTPUT PARAMETERS
    A   -   DFT of an input array, array[0..N-1]
            A_out[j] = SUM(A_in[k]*exp(-2*pi*sqrt(-1)*j*k/N), k = 0..N-1)

  -- ALGLIB --
     Copyright 29.05.2009 by Bochkanov Sergey
*************************************************************************/
void fftc1d(complex_1d_array &a, const ae_int_t n, const xparams _xparams = alglib::xdefault);
void fftc1d(complex_1d_array &a, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/*************************************************************************
1-dimensional complex inverse FFT.

Array size N may be an arbitrary number (composite or prime). Algorithm
has O(N*logN) complexity for any N (composite or prime).

See FFTC1D() description for more information about algorithm
performance.

INPUT PARAMETERS
    A   -   array[0..N-1] - complex array to be transformed
    N   -   problem size

OUTPUT PARAMETERS
    A   -   inverse DFT of an input array, array[0..N-1]
            A_out[j] = SUM(A_in[k]/N*exp(+2*pi*sqrt(-1)*j*k/N), k = 0..N-1)

  -- ALGLIB --
     Copyright 29.05.2009 by Bochkanov Sergey
*************************************************************************/
void fftc1dinv(complex_1d_array &a, const ae_int_t n, const xparams _xparams = alglib::xdefault);
void fftc1dinv(complex_1d_array &a, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/*************************************************************************
1-dimensional real FFT.

Algorithm has O(N*logN) complexity for any N (composite or prime).

INPUT PARAMETERS
    A   -   array[0..N-1] - real function to be transformed
    N   -   problem size

OUTPUT PARAMETERS
    F   -   DFT of an input array, array[0..N-1]
            F[j] = SUM(A[k]*exp(-2*pi*sqrt(-1)*j*k/N), k = 0..N-1)

NOTE: there is a buffered version of this function, FFTR1DBuf(), which
      reuses memory previously allocated for A as much as possible.

NOTE: F[] satisfies symmetry property F[k] = conj(F[N-k]), so just one
      half of array is usually needed. But for convenience subroutine
      returns full complex array (with frequencies above N/2), so its
      result may be used by other FFT-related subroutines.

  -- ALGLIB --
     Copyright 01.06.2009 by Bochkanov Sergey
*************************************************************************/
void fftr1d(const real_1d_array &a, const ae_int_t n, complex_1d_array &f, const xparams _xparams = alglib::xdefault);
void fftr1d(const real_1d_array &a, complex_1d_array &f, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* 1-dimensional real FFT, a buffered function which does not reallocate F[] if its length is enough to store the result (i.e. it reuses previously allocated memory as much as possible). -- ALGLIB -- Copyright 01.06.2009 by Bochkanov Sergey *************************************************************************/
void fftr1dbuf(const real_1d_array &a, const ae_int_t n, complex_1d_array &f, const xparams _xparams = alglib::xdefault);
void fftr1dbuf(const real_1d_array &a, complex_1d_array &f, const xparams _xparams = alglib::xdefault);
/*************************************************************************
1-dimensional real inverse FFT.

Algorithm has O(N*logN) complexity for any N (composite or prime).

INPUT PARAMETERS
    F   -   array[0..floor(N/2)] - frequencies from forward real FFT
    N   -   problem size

OUTPUT PARAMETERS
    A   -   inverse DFT of an input array, array[0..N-1]

NOTE: there is a buffered version of this function, FFTR1DInvBuf(), which
      reuses memory previously allocated for A as much as possible.

NOTE: F[] should satisfy symmetry property F[k] = conj(F[N-k]), so just
      one half of frequencies array is needed - elements from 0 to
      floor(N/2). F[0] is ALWAYS real. If N is even, F[floor(N/2)] is
      real too. If N is odd, then F[floor(N/2)] has no special
      properties.

      Relying on properties noted above, FFTR1DInv subroutine uses only
      elements from 0th to floor(N/2)-th. It ignores imaginary part of
      F[0], and in case N is even it ignores imaginary part of
      F[floor(N/2)] too.

      When you call this function using full arguments list -
      "FFTR1DInv(F,N,A)" - you can pass either a frequencies array with
      N elements or a reduced array with roughly N/2 elements -
      subroutine will successfully transform both.

      If you call this function using reduced arguments list -
      "FFTR1DInv(F,A)" - you must pass FULL array with N elements
      (although elements above N/2 are still not used) because array
      size is used to automatically determine FFT length.

  -- ALGLIB --
     Copyright 01.06.2009 by Bochkanov Sergey
*************************************************************************/
void fftr1dinv(const complex_1d_array &f, const ae_int_t n, real_1d_array &a, const xparams _xparams = alglib::xdefault);
void fftr1dinv(const complex_1d_array &f, real_1d_array &a, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* 1-dimensional real inverse FFT, buffered version, which does not reallocate A[] if its length is enough to store the result (i.e. it reuses previously allocated memory as much as possible). -- ALGLIB -- Copyright 01.06.2009 by Bochkanov Sergey *************************************************************************/
void fftr1dinvbuf(const complex_1d_array &f, const ae_int_t n, real_1d_array &a, const xparams _xparams = alglib::xdefault);
void fftr1dinvbuf(const complex_1d_array &f, real_1d_array &a, const xparams _xparams = alglib::xdefault);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "fasttransforms.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // first we demonstrate forward FFT:
        // [1i,1i,1i,1i] is converted to [4i, 0, 0, 0]
        //
        complex_1d_array z = "[1i,1i,1i,1i]";
        fftc1d(z);
        printf("%s\n", z.tostring(3).c_str()); // EXPECTED: [4i,0,0,0]

        //
        // now we convert [4i, 0, 0, 0] back to [1i,1i,1i,1i]
        // with backward FFT
        //
        fftc1dinv(z);
        printf("%s\n", z.tostring(3).c_str()); // EXPECTED: [1i,1i,1i,1i]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "fasttransforms.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // first we demonstrate forward FFT:
        // [0,1,0,1i] is converted to [1+1i, -1-1i, -1-1i, 1+1i]
        //
        complex_1d_array z = "[0,1,0,1i]";
        fftc1d(z);
        printf("%s\n", z.tostring(3).c_str()); // EXPECTED: [1+1i, -1-1i, -1-1i, 1+1i]

        //
        // now we convert result back with backward FFT
        //
        fftc1dinv(z);
        printf("%s\n", z.tostring(3).c_str()); // EXPECTED: [0,1,0,1i]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "fasttransforms.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // first we demonstrate forward FFT:
        // [1,1,1,1] is converted to [4, 0, 0, 0]
        //
        real_1d_array x = "[1,1,1,1]";
        complex_1d_array f;
        real_1d_array x2;
        fftr1d(x, f);
        printf("%s\n", f.tostring(3).c_str()); // EXPECTED: [4,0,0,0]

        //
        // now we convert [4, 0, 0, 0] back to [1,1,1,1]
        // with backward FFT
        //
        fftr1dinv(f, x2);
        printf("%s\n", x2.tostring(3).c_str()); // EXPECTED: [1,1,1,1]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "fasttransforms.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // first we demonstrate forward FFT:
        // [1,2,3,4] is converted to [10, -2+2i, -2, -2-2i]
        //
        // note that output array is self-adjoint:
        // * f[0] = conj(f[0])
        // * f[1] = conj(f[3])
        // * f[2] = conj(f[2])
        //
        real_1d_array x = "[1,2,3,4]";
        complex_1d_array f;
        real_1d_array x2;
        fftr1d(x, f);
        printf("%s\n", f.tostring(3).c_str()); // EXPECTED: [10, -2+2i, -2, -2-2i]

        //
        // now we convert [10, -2+2i, -2, -2-2i] back to [1,2,3,4]
        //
        fftr1dinv(f, x2);
        printf("%s\n", x2.tostring(3).c_str()); // EXPECTED: [1,2,3,4]

        //
        // remember that F is self-adjoint? It means that we can pass just half
        // (slightly larger than half) of F to inverse real FFT and still get our result.
        //
        // I.e. instead [10, -2+2i, -2, -2-2i] we pass just [10, -2+2i, -2] and everything works!
        //
        // NOTE: in this case we should explicitly pass array length (which is 4) to ALGLIB;
        // if not, it will automatically use array length to determine FFT size and
        // will erroneously make half-length FFT.
        //
        f = "[10, -2+2i, -2]";
        fftr1dinv(f, 4, x2);
        printf("%s\n", x2.tostring(3).c_str()); // EXPECTED: [1,2,3,4]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

fhtr1d
fhtr1dinv
/*************************************************************************
1-dimensional Fast Hartley Transform.

Algorithm has O(N*logN) complexity for any N (composite or prime).

INPUT PARAMETERS
    A   -   array[0..N-1] - real function to be transformed
    N   -   problem size

OUTPUT PARAMETERS
    A   -   FHT of an input array, array[0..N-1],
            A_out[k] = sum(A_in[j]*(cos(2*pi*j*k/N)+sin(2*pi*j*k/N)), j=0..N-1)

  -- ALGLIB --
     Copyright 04.06.2009 by Bochkanov Sergey
*************************************************************************/
void fhtr1d(real_1d_array &a, const ae_int_t n, const xparams _xparams = alglib::xdefault);
/*************************************************************************
1-dimensional inverse FHT.

Algorithm has O(N*logN) complexity for any N (composite or prime).

INPUT PARAMETERS
    A   -   array[0..N-1] - real array to be transformed
    N   -   problem size

OUTPUT PARAMETERS
    A   -   inverse FHT of an input array, array[0..N-1]

  -- ALGLIB --
     Copyright 29.05.2009 by Bochkanov Sergey
*************************************************************************/
void fhtr1dinv(real_1d_array &a, const ae_int_t n, const xparams _xparams = alglib::xdefault);
filterema
filterlrma
filtersma
filters_d_ema EMA(alpha) filter
filters_d_lrma LRMA(k) filter
filters_d_sma SMA(k) filter
/*************************************************************************
Filters: exponential moving averages.

This filter replaces array by results of EMA(alpha) filter. EMA(alpha)
is defined as filter which replaces X[] by S[]:
    S[0] = X[0]
    S[t] = alpha*X[t] + (1-alpha)*S[t-1]

INPUT PARAMETERS:
    X           -   array[N], array to process. It can be larger than N,
                    in this case only first N points are processed.
    N           -   points count, N>=0
    alpha       -   0<alpha<=1, smoothing parameter.

OUTPUT PARAMETERS:
    X           -   array, whose first N elements were processed
                    with EMA(alpha)

NOTE 1: this function uses efficient in-place algorithm which does not
        allocate temporary arrays.

NOTE 2: this algorithm uses BOTH previous points and current one, i.e.
        new value of X[i] depends on BOTH previous point and X[i] itself.

NOTE 3: technical analysis users quite often work with EMA coefficient
        expressed in DAYS instead of fractions. If you want to calculate
        EMA(N), where N is a number of days, you can use alpha=2/(N+1).

  -- ALGLIB --
     Copyright 25.10.2011 by Bochkanov Sergey
*************************************************************************/
void filterema(real_1d_array &x, const ae_int_t n, const double alpha, const xparams _xparams = alglib::xdefault);
void filterema(real_1d_array &x, const double alpha, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Filters: linear regression moving averages.

This filter replaces array by results of LRMA(K) filter. LRMA(K) is
defined as filter which, for each data point, builds linear regression
model using K previous points (point itself is included in these K
points) and calculates value of this linear model at the point in
question.

INPUT PARAMETERS:
    X           -   array[N], array to process. It can be larger than N,
                    in this case only first N points are processed.
    N           -   points count, N>=0
    K           -   K>=1 (K can be larger than N, such cases will be
                    correctly handled). Window width. K=1 corresponds to
                    identity transformation (nothing changes).

OUTPUT PARAMETERS:
    X           -   array, whose first N elements were processed
                    with LRMA(K)

NOTE 1: this function uses efficient in-place algorithm which does not
        allocate temporary arrays.

NOTE 2: this algorithm makes only one pass through array and uses running
        sum to speed-up calculation of the averages. Additional measures
        are taken to ensure that running sum on a long sequence of zero
        elements will be correctly reset to zero even in the presence of
        round-off error.

NOTE 3: this is unsymmetric version of the algorithm, which does NOT
        average points after the current one. Only X[i], X[i-1], ... are
        used when calculating new value of X[i]. We should also note that
        this algorithm uses BOTH previous points and current one, i.e.
        new value of X[i] depends on BOTH previous point and X[i] itself.

  -- ALGLIB --
     Copyright 25.10.2011 by Bochkanov Sergey
*************************************************************************/
void filterlrma(real_1d_array &x, const ae_int_t n, const ae_int_t k, const xparams _xparams = alglib::xdefault);
void filterlrma(real_1d_array &x, const ae_int_t k, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Filters: simple moving averages (unsymmetric).

This filter replaces array by results of SMA(K) filter. SMA(K) is defined
as filter which averages at most K previous points (previous - not points
AROUND central point) - or less, in case of the first K-1 points.

INPUT PARAMETERS:
    X           -   array[N], array to process. It can be larger than N,
                    in this case only first N points are processed.
    N           -   points count, N>=0
    K           -   K>=1 (K can be larger than N, such cases will be
                    correctly handled). Window width. K=1 corresponds to
                    identity transformation (nothing changes).

OUTPUT PARAMETERS:
    X           -   array, whose first N elements were processed
                    with SMA(K)

NOTE 1: this function uses efficient in-place algorithm which does not
        allocate temporary arrays.

NOTE 2: this algorithm makes only one pass through array and uses running
        sum to speed-up calculation of the averages. Additional measures
        are taken to ensure that running sum on a long sequence of zero
        elements will be correctly reset to zero even in the presence of
        round-off error.

NOTE 3: this is unsymmetric version of the algorithm, which does NOT
        average points after the current one. Only X[i], X[i-1], ... are
        used when calculating new value of X[i]. We should also note that
        this algorithm uses BOTH previous points and current one, i.e.
        new value of X[i] depends on BOTH previous point and X[i] itself.

  -- ALGLIB --
     Copyright 25.10.2011 by Bochkanov Sergey
*************************************************************************/
void filtersma(real_1d_array &x, const ae_int_t n, const ae_int_t k, const xparams _xparams = alglib::xdefault);
void filtersma(real_1d_array &x, const ae_int_t k, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // Here we demonstrate EMA(0.5) filtering for time series.
        //
        real_1d_array x = "[5,6,7,8]";

        //
        // Apply filter.
        // We should get [5, 5.5, 6.25, 7.125] as result
        //
        filterema(x, 0.5);
        printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [5,5.5,6.25,7.125]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // Here we demonstrate LRMA(3) filtering for time series.
        //
        real_1d_array x = "[7,8,8,9,12,12]";

        //
        // Apply filter.
        // We should get [7.0000, 8.0000, 8.1667, 8.8333, 11.6667, 12.5000] as result
        //    
        filterlrma(x, 3);
        printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [7.0000,8.0000,8.1667,8.8333,11.6667,12.5000]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // Here we demonstrate SMA(k) filtering for time series.
        //
        real_1d_array x = "[5,6,7,8]";

        //
        // Apply filter.
        // We should get [5, 5.5, 6.5, 7.5] as result
        //
        filtersma(x, 2);
        printf("%s\n", x.tostring(4).c_str()); // EXPECTED: [5,5.5,6.5,7.5]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

fitspherels
fitspheremc
fitspheremi
fitspheremz
fitspherex
/************************************************************************* Fits least squares (LS) circle (or NX-dimensional sphere) to data (a set of points in NX-dimensional space). Least squares circle minimizes sum of squared deviations between distances from points to the center and some "candidate" radius, which is also fitted to the data. INPUT PARAMETERS: XY - array[NPoints,NX] (or larger), contains dataset. One row = one point in NX-dimensional space. NPoints - dataset size, NPoints>0 NX - space dimensionality, NX>0 (1, 2, 3, 4, 5 and so on) OUTPUT PARAMETERS: CX - central point for a sphere R - radius -- ALGLIB -- Copyright 07.05.2018 by Bochkanov Sergey *************************************************************************/
void fitspherels(const real_2d_array &xy, const ae_int_t npoints, const ae_int_t nx, real_1d_array &cx, double &r, const xparams _xparams = alglib::xdefault);
/************************************************************************* Fits minimum circumscribed (MC) circle (or NX-dimensional sphere) to data (a set of points in NX-dimensional space). INPUT PARAMETERS: XY - array[NPoints,NX] (or larger), contains dataset. One row = one point in NX-dimensional space. NPoints - dataset size, NPoints>0 NX - space dimensionality, NX>0 (1, 2, 3, 4, 5 and so on) OUTPUT PARAMETERS: CX - central point for a sphere RHi - radius NOTE: this function is an easy-to-use wrapper around more powerful "expert" function fitspherex(). This wrapper is optimized for ease of use and stability - at the cost of somewhat lower performance (we have to use very tight stopping criteria for inner optimizer because we want to make sure that it will converge on any dataset). If you are ready to experiment with settings of "expert" function, you can achieve ~2-4x speedup over standard "bulletproof" settings. -- ALGLIB -- Copyright 14.04.2017 by Bochkanov Sergey *************************************************************************/
void fitspheremc(const real_2d_array &xy, const ae_int_t npoints, const ae_int_t nx, real_1d_array &cx, double &rhi, const xparams _xparams = alglib::xdefault);
/************************************************************************* Fits maximum inscribed circle (or NX-dimensional sphere) to data (a set of points in NX-dimensional space). INPUT PARAMETERS: XY - array[NPoints,NX] (or larger), contains dataset. One row = one point in NX-dimensional space. NPoints - dataset size, NPoints>0 NX - space dimensionality, NX>0 (1, 2, 3, 4, 5 and so on) OUTPUT PARAMETERS: CX - central point for a sphere RLo - radius NOTE: this function is an easy-to-use wrapper around more powerful "expert" function fitspherex(). This wrapper is optimized for ease of use and stability - at the cost of somewhat lower performance (we have to use very tight stopping criteria for inner optimizer because we want to make sure that it will converge on any dataset). If you are ready to experiment with settings of "expert" function, you can achieve ~2-4x speedup over standard "bulletproof" settings. -- ALGLIB -- Copyright 14.04.2017 by Bochkanov Sergey *************************************************************************/
void fitspheremi(const real_2d_array &xy, const ae_int_t npoints, const ae_int_t nx, real_1d_array &cx, double &rlo, const xparams _xparams = alglib::xdefault);
/************************************************************************* Fits minimum zone circle (or NX-dimensional sphere) to data (a set of points in NX-dimensional space). INPUT PARAMETERS: XY - array[NPoints,NX] (or larger), contains dataset. One row = one point in NX-dimensional space. NPoints - dataset size, NPoints>0 NX - space dimensionality, NX>0 (1, 2, 3, 4, 5 and so on) OUTPUT PARAMETERS: CX - central point for a sphere RLo - radius of inscribed circle RHi - radius of circumscribed circle NOTE: this function is an easy-to-use wrapper around more powerful "expert" function fitspherex(). This wrapper is optimized for ease of use and stability - at the cost of somewhat lower performance (we have to use very tight stopping criteria for inner optimizer because we want to make sure that it will converge on any dataset). If you are ready to experiment with settings of "expert" function, you can achieve ~2-4x speedup over standard "bulletproof" settings. -- ALGLIB -- Copyright 14.04.2017 by Bochkanov Sergey *************************************************************************/
void fitspheremz(const real_2d_array &xy, const ae_int_t npoints, const ae_int_t nx, real_1d_array &cx, double &rlo, double &rhi, const xparams _xparams = alglib::xdefault);
/************************************************************************* Fitting minimum circumscribed, maximum inscribed or minimum zone circles (or NX-dimensional spheres) to data (a set of points in NX-dimensional space). This is expert function which allows to tweak many parameters of underlying nonlinear solver: * stopping criteria for inner iterations * number of outer iterations You may tweak all these parameters or only some of them, leaving other ones at their default state - just specify zero value, and solver will fill it with appropriate default one. These comments also include some discussion of approach used to handle such unusual fitting problem, its stability, drawbacks of alternative methods, and convergence properties. INPUT PARAMETERS: XY - array[NPoints,NX] (or larger), contains dataset. One row = one point in NX-dimensional space. NPoints - dataset size, NPoints>0 NX - space dimensionality, NX>0 (1, 2, 3, 4, 5 and so on) ProblemType-used to encode problem type: * 0 for least squares circle * 1 for minimum circumscribed circle/sphere fitting (MC) * 2 for maximum inscribed circle/sphere fitting (MI) * 3 for minimum zone circle fitting (difference between Rhi and Rlo is minimized), denoted as MZ EpsX - stopping condition for NLC optimizer: * must be non-negative * use 0 to choose default value (1.0E-12 is used by default) * you may specify larger values, up to 1.0E-6, if you want to speed-up solver; NLC solver performs several preconditioned outer iterations, so final result typically has precision much better than EpsX. AULIts - number of outer iterations performed by NLC optimizer: * must be non-negative * use 0 to choose default value (20 is used by default) * you may specify values smaller than 20 if you want to speed up solver; 10 often results in good combination of precision and speed; sometimes you may get good results with just 6 outer iterations. Ignored for ProblemType=0. 
OUTPUT PARAMETERS: CX - central point for a sphere RLo - radius: * for ProblemType=2,3, radius of the inscribed sphere * for ProblemType=0 - radius of the least squares sphere * for ProblemType=1 - zero RHo - radius: * for ProblemType=1,3, radius of the circumscribed sphere * for ProblemType=0 - radius of the least squares sphere * for ProblemType=2 - zero NOTE: ON THE UNIQUENESS OF SOLUTIONS ALGLIB provides solution to several related circle fitting problems: MC (minimum circumscribed), MI (maximum inscribed) and MZ (minimum zone) fitting, LS (least squares) fitting. It is important to note that among these problems only MC and LS are convex and have unique solution independently from starting point. As for MI, it may (or may not, depending on dataset properties) have multiple solutions, and it always has one degenerate solution C=infinity which corresponds to infinitely large radius. Thus, there are no guarantees that solution to MI returned by this solver will be the best one (and no one can provide you with such guarantee because problem is NP-hard). The only guarantee you have is that this solution is locally optimal, i.e. it can not be improved by infinitesimally small tweaks in the parameters. It is also possible to "run away" to infinity when started from bad initial point located outside of point cloud (or when point cloud does not span entire circumference/surface of the sphere). Finally, MZ (minimum zone circle) stands somewhere between MC and MI in stability. It is somewhat regularized by "circumscribed" term of the merit function; however, solutions to MZ may be non-unique, and in some unlucky cases it is also possible to "run away to infinity". NOTE: ON THE NONLINEARLY CONSTRAINED PROGRAMMING APPROACH The problem formulation for MC (minimum circumscribed circle; for the sake of simplicity we omit MZ and MI here) is: [ [ ]2 ] min [ max [ XY[i]-C ] ] C [ i [ ] ] i.e. 
it is an unconstrained nonsmooth optimization problem of finding the "best" central point, with the radius R being unambiguously determined from C. In order to move away from the non-smoothness we use the following reformulation: min_{C,R} R subject to R>=0 and ||XY[i]-C||^2 <= R^2 for all i, i.e. it becomes a smooth quadratically constrained optimization problem with a linear target function. This problem statement is 100% equivalent to the original nonsmooth one, but much easier to approach. We solve it with the MinNLC solver provided by ALGLIB. NOTE: ON THE INSTABILITY OF THE SEQUENTIAL LINEARIZATION APPROACH ALGLIB has a nonlinearly constrained solver which proved to be stable on such problems. However, some authors proposed to linearize the constraints in the vicinity of the current approximation (Ci,Ri) and to get the next approximate solution (Ci+1,Ri+1) as the solution to a linear programming problem. Obviously, LP problems are easier than nonlinearly constrained ones. Indeed, such an approach to MC/MI/MZ resulted in a ~10-20x increase in performance (compared with the NLC solver). However, it turned out that in some cases the linearized model fails to predict the correct direction for the next step and tells us that we converged to the solution even when we are still 2-4 digits of precision away from it. Importantly, this is not a failure of the LP solver - it is a failure of the linear model; even when solved exactly, it fails to handle the subtle nonlinearities which arise near the solution. We validated this by comparing the results returned by the ALGLIB linear solver with those of MATLAB. In our experiments with linearization: * MC failed most often, on both realistic and synthetic datasets * MI sometimes failed, but sometimes succeeded * MZ often succeeded; our guess is that the presence of two independent sets of constraints (one set for RLo and another one for RHi) and two terms in the target function (RLo and RHi) regularizes the task, so when the linear model fails to handle nonlinearities from RLo, it uses RHi as a hint (and vice versa). 
Because the linearization approach failed to achieve stable results, we do not include it in ALGLIB. -- ALGLIB -- Copyright 14.04.2017 by Bochkanov Sergey *************************************************************************/
void fitspherex(const real_2d_array &xy, const ae_int_t npoints, const ae_int_t nx, const ae_int_t problemtype, const double epsx, const ae_int_t aulits, real_1d_array &cx, double &rlo, double &rhi, const xparams _xparams = alglib::xdefault);
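The least squares variant (ProblemType=0) has a simple 2D analogue that can be sketched without the nonlinear machinery: the algebraic (Kasa) circle fit, which solves a 3x3 linear system. This is only a hypothetical illustration of the idea, not ALGLIB's fitspherex (which uses a nonlinear solver and minimizes geometric, not algebraic, residuals):

```cpp
#include <array>
#include <cmath>
#include <utility>
#include <vector>

// Algebraic (Kasa) least-squares circle fit in 2D: find D,E,F minimizing
// sum (x^2 + y^2 + D*x + E*y + F)^2, then recover center and radius from
// center = (-D/2, -E/2), r^2 = D^2/4 + E^2/4 - F.
// NOTE: this is NOT ALGLIB's method; it is an illustrative stand-in.
struct Circle { double cx, cy, r; };

Circle kasa_fit(const std::vector<std::array<double,2>> &pts)
{
    double a[3][3] = {{0}}, b[3] = {0};
    for(const auto &p : pts)
    {
        double x = p[0], y = p[1], z = x*x + y*y;
        double row[3] = {x, y, 1.0};
        for(int i = 0; i < 3; i++)
        {
            for(int j = 0; j < 3; j++)
                a[i][j] += row[i]*row[j];
            b[i] -= row[i]*z;          // normal equations for (D,E,F)
        }
    }
    // Gaussian elimination with partial pivoting on the 3x3 system
    for(int k = 0; k < 3; k++)
    {
        int piv = k;
        for(int i = k+1; i < 3; i++)
            if(std::fabs(a[i][k]) > std::fabs(a[piv][k])) piv = i;
        std::swap(a[k], a[piv]);
        std::swap(b[k], b[piv]);
        for(int i = k+1; i < 3; i++)
        {
            double m = a[i][k]/a[k][k];
            for(int j = k; j < 3; j++) a[i][j] -= m*a[k][j];
            b[i] -= m*b[k];
        }
    }
    double f[3];
    for(int i = 2; i >= 0; i--)
    {
        f[i] = b[i];
        for(int j = i+1; j < 3; j++) f[i] -= a[i][j]*f[j];
        f[i] /= a[i][i];
    }
    double cx = -0.5*f[0], cy = -0.5*f[1];
    return Circle{cx, cy, std::sqrt(cx*cx + cy*cy - f[2])};
}
```

When the points lie exactly on a circle the fit recovers it exactly; with noisy data the algebraic residual is only an approximation of the geometric distance, which is one reason a nonlinear formulation is preferable for high-accuracy metrology.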
fresnelintegral
/************************************************************************* Fresnel integral Evaluates the Fresnel integrals C(x) = integral from 0 to x of cos(pi/2 * t^2) dt, S(x) = integral from 0 to x of sin(pi/2 * t^2) dt. The integrals are evaluated by a power series for x < 1. For x >= 1 auxiliary functions f(x) and g(x) are employed such that C(x) = 0.5 + f(x) sin( pi/2 x^2 ) - g(x) cos( pi/2 x^2 ) S(x) = 0.5 - f(x) cos( pi/2 x^2 ) - g(x) sin( pi/2 x^2 ) ACCURACY: Relative error. Arithmetic function domain # trials peak rms IEEE S(x) 0, 10 10000 2.0e-15 3.2e-16 IEEE C(x) 0, 10 10000 1.8e-15 3.3e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 2000 by Stephen L. Moshier *************************************************************************/
void fresnelintegral(const double x, double &c, double &s, const xparams _xparams = alglib::xdefault);
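The power-series branch mentioned above can be sketched directly from the Taylor expansions of cos and sin. This is an illustration of the x<1 case only, not the Cephes/ALGLIB code (which also implements the f(x), g(x) asymptotic branch for x>=1):

```cpp
#include <cmath>

// Power series for the Fresnel integrals, valid for small |x|:
//   C(x) = sum (-1)^n (pi/2)^(2n)   x^(4n+1) / ((2n)!  * (4n+1))
//   S(x) = sum (-1)^n (pi/2)^(2n+1) x^(4n+3) / ((2n+1)! * (4n+3))
void fresnel_series(double x, double &c, double &s)
{
    const double pi = 3.14159265358979323846;
    const double t = 0.5*pi*x*x;        // (pi/2)*x^2
    double c_sum = 0.0, s_sum = 0.0;
    double p = 1.0;                     // (-1)^n t^(2n) / (2n)!
    for(int n = 0; n < 16; n++)
    {
        c_sum += p/(4*n + 1);           // n-th term of C(x)/x
        double q = p*t/(2*n + 1);       // (-1)^n t^(2n+1) / (2n+1)!
        s_sum += q/(4*n + 3);           // n-th term of S(x)/x
        p = -q*t/(2*n + 2);             // advance both ratios to n+1
    }
    c = x*c_sum;
    s = x*s_sum;
}
```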
gammafunction
lngamma
/************************************************************************* Gamma function Input parameters: X - argument Domain: 0 < X < 171.6 -170 < X < 0, X is not an integer. Relative error: arithmetic domain # trials peak rms IEEE -170,-33 20000 2.3e-15 3.3e-16 IEEE -33, 33 20000 9.4e-16 2.2e-16 IEEE 33, 171.6 20000 2.3e-15 3.2e-16 Cephes Math Library Release 2.8: June, 2000 Original copyright 1984, 1987, 1989, 1992, 2000 by Stephen L. Moshier Translated to AlgoPascal by Bochkanov Sergey (2005, 2006, 2007). *************************************************************************/
double gammafunction(const double x, const xparams _xparams = alglib::xdefault);
/************************************************************************* Natural logarithm of gamma function Input parameters: X - argument Result: logarithm of the absolute value of the Gamma(X). Output parameters: SgnGam - sign(Gamma(X)) Domain: 0 < X < 2.55e305 -2.55e305 < X < 0, X is not an integer. ACCURACY: arithmetic domain # trials peak rms IEEE 0, 3 28000 5.4e-16 1.1e-16 IEEE 2.718, 2.556e305 40000 3.5e-16 8.3e-17 The error criterion was relative when the function magnitude was greater than one but absolute when it was less than one. The following test used the relative error criterion, though at certain points the relative error could be much higher than indicated. IEEE -200, -4 10000 4.8e-16 1.3e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1989, 1992, 2000 by Stephen L. Moshier Translated to AlgoPascal by Bochkanov Sergey (2005, 2006, 2007). *************************************************************************/
double lngamma(const double x, double &sgngam, const xparams _xparams = alglib::xdefault);
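The SgnGam convention above means Gamma(X) = SgnGam * exp(lngamma(X)). A minimal sketch of the same convention using the C standard library's std::lgamma (which returns log|Gamma(x)| but reports the sign through a global, not an out-parameter); the sign helper exploits the fact that Gamma alternates sign on the negative unit intervals:

```cpp
#include <cmath>

// Sign of Gamma(x): positive for x>0; on (m, m+1) with integer m<0 the
// sign alternates, negative exactly when floor(x) is odd (e.g. (-1,0)).
double gamma_sign(double x)
{
    if(x > 0.0) return 1.0;
    long m = static_cast<long>(std::floor(x));
    return (m % 2 != 0) ? -1.0 : 1.0;
}

// Reconstruct Gamma(x) from its log-magnitude and sign, mirroring the
// lngamma/SgnGam contract documented above (stdlib sketch, not ALGLIB code).
double gamma_via_log(double x)
{
    return gamma_sign(x) * std::exp(std::lgamma(x));
}
```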
gkqgenerategaussjacobi
gkqgenerategausslegendre
gkqgeneraterec
gkqlegendrecalc
gkqlegendretbl
/************************************************************************* Returns Gauss and Gauss-Kronrod nodes/weights for Gauss-Jacobi quadrature on [-1,1] with weight function W(x)=Power(1-x,Alpha)*Power(1+x,Beta). INPUT PARAMETERS: N - number of Kronrod nodes, must be an odd number, >=3. Alpha - power-law coefficient, Alpha>-1 Beta - power-law coefficient, Beta>-1 OUTPUT PARAMETERS: Info - error code: * -5 no real and positive Gauss-Kronrod formula can be created for such a weight function with a given number of nodes. * -4 an error was detected when calculating weights/nodes. Alpha or Beta are too close to -1 to obtain weights/nodes with high enough accuracy, or maybe N is too large. Try to use the multiple precision version. * -3 internal eigenproblem solver hasn't converged * -1 incorrect N was passed * +1 OK * +2 OK, but the quadrature rule has exterior nodes, x[0]<-1 or x[n-1]>+1 X - array[0..N-1] - array of quadrature nodes, ordered in ascending order. WKronrod - array[0..N-1] - Kronrod weights WGauss - array[0..N-1] - Gauss weights (interleaved with zeros corresponding to extended Kronrod nodes). -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/
void gkqgenerategaussjacobi(const ae_int_t n, const double alpha, const double beta, ae_int_t &info, real_1d_array &x, real_1d_array &wkronrod, real_1d_array &wgauss, const xparams _xparams = alglib::xdefault);
/************************************************************************* Returns Gauss and Gauss-Kronrod nodes/weights for Gauss-Legendre quadrature with N points. GKQLegendreCalc (calculation) or GKQLegendreTbl (precomputed table) is used depending on machine precision and number of nodes. INPUT PARAMETERS: N - number of Kronrod nodes, must be odd number, >=3. OUTPUT PARAMETERS: Info - error code: * -4 an error was detected when calculating weights/nodes. N is too large to obtain weights/nodes with high enough accuracy. Try to use multiple precision version. * -3 internal eigenproblem solver hasn't converged * -1 incorrect N was passed * +1 OK X - array[0..N-1] - array of quadrature nodes, ordered in ascending order. WKronrod - array[0..N-1] - Kronrod weights WGauss - array[0..N-1] - Gauss weights (interleaved with zeros corresponding to extended Kronrod nodes). -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/
void gkqgenerategausslegendre(const ae_int_t n, ae_int_t &info, real_1d_array &x, real_1d_array &wkronrod, real_1d_array &wgauss, const xparams _xparams = alglib::xdefault);
/************************************************************************* Computation of nodes and weights of a Gauss-Kronrod quadrature formula The algorithm generates the N-point Gauss-Kronrod quadrature formula with weight function given by coefficients alpha and beta of a recurrence relation which generates a system of orthogonal polynomials: P-1(x) = 0 P0(x) = 1 Pn+1(x) = (x-alpha(n))*Pn(x) - beta(n)*Pn-1(x) and zeroth moment Mu0 Mu0 = integral(W(x)dx,a,b) INPUT PARAMETERS: Alpha - alpha coefficients, array[0..floor(3*K/2)]. Beta - beta coefficients, array[0..ceil(3*K/2)]. Beta[0] is not used and may be arbitrary. Beta[I]>0. Mu0 - zeroth moment of the weight function. N - number of nodes of the Gauss-Kronrod quadrature formula, N >= 3, N = 2*K+1. OUTPUT PARAMETERS: Info - error code: * -5 no real and positive Gauss-Kronrod formula can be created for such a weight function with a given number of nodes. * -4 N is too large, the task may be ill-conditioned - x[i]=x[i+1] found. * -3 internal eigenproblem solver hasn't converged * -2 Beta[i]<=0 * -1 incorrect N was passed * +1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. WKronrod - array[0..N-1] - Kronrod weights WGauss - array[0..N-1] - Gauss weights (interleaved with zeros corresponding to extended Kronrod nodes). -- ALGLIB -- Copyright 08.05.2009 by Bochkanov Sergey *************************************************************************/
void gkqgeneraterec(const real_1d_array &alpha, const real_1d_array &beta, const double mu0, const ae_int_t n, ae_int_t &info, real_1d_array &x, real_1d_array &wkronrod, real_1d_array &wgauss, const xparams _xparams = alglib::xdefault);
/************************************************************************* Returns Gauss and Gauss-Kronrod nodes for quadrature with N points. Reduction to tridiagonal eigenproblem is used. INPUT PARAMETERS: N - number of Kronrod nodes, must be odd number, >=3. OUTPUT PARAMETERS: Info - error code: * -4 an error was detected when calculating weights/nodes. N is too large to obtain weights/nodes with high enough accuracy. Try to use multiple precision version. * -3 internal eigenproblem solver hasn't converged * -1 incorrect N was passed * +1 OK X - array[0..N-1] - array of quadrature nodes, ordered in ascending order. WKronrod - array[0..N-1] - Kronrod weights WGauss - array[0..N-1] - Gauss weights (interleaved with zeros corresponding to extended Kronrod nodes). -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/
void gkqlegendrecalc(const ae_int_t n, ae_int_t &info, real_1d_array &x, real_1d_array &wkronrod, real_1d_array &wgauss, const xparams _xparams = alglib::xdefault);
/************************************************************************* Returns Gauss and Gauss-Kronrod nodes for quadrature with N points using pre-calculated table. Nodes/weights were computed with accuracy up to 1.0E-32 (if MPFR version of ALGLIB is used). In standard double precision accuracy reduces to something about 2.0E-16 (depending on your compiler's handling of long floating point constants). INPUT PARAMETERS: N - number of Kronrod nodes. N can be 15, 21, 31, 41, 51, 61. OUTPUT PARAMETERS: X - array[0..N-1] - array of quadrature nodes, ordered in ascending order. WKronrod - array[0..N-1] - Kronrod weights WGauss - array[0..N-1] - Gauss weights (interleaved with zeros corresponding to extended Kronrod nodes). -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/
void gkqlegendretbl(const ae_int_t n, real_1d_array &x, real_1d_array &wkronrod, real_1d_array &wgauss, double &eps, const xparams _xparams = alglib::xdefault);
gqgenerategausshermite
gqgenerategaussjacobi
gqgenerategausslaguerre
gqgenerategausslegendre
gqgenerategausslobattorec
gqgenerategaussradaurec
gqgeneraterec
/************************************************************************* Returns nodes/weights for Gauss-Hermite quadrature on (-inf,+inf) with weight function W(x)=Exp(-x*x) INPUT PARAMETERS: N - number of nodes, >=1 OUTPUT PARAMETERS: Info - error code: * -4 an error was detected when calculating weights/nodes. Maybe N is too large. Try to use the multiple precision version. * -3 internal eigenproblem solver hasn't converged * -1 incorrect N/Alpha was passed * +1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. W - array[0..N-1] - array of quadrature weights. -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/
void gqgenerategausshermite(const ae_int_t n, ae_int_t &info, real_1d_array &x, real_1d_array &w, const xparams _xparams = alglib::xdefault);
/************************************************************************* Returns nodes/weights for Gauss-Jacobi quadrature on [-1,1] with weight function W(x)=Power(1-x,Alpha)*Power(1+x,Beta). INPUT PARAMETERS: N - number of nodes, >=1 Alpha - power-law coefficient, Alpha>-1 Beta - power-law coefficient, Beta>-1 OUTPUT PARAMETERS: Info - error code: * -4 an error was detected when calculating weights/nodes. Alpha or Beta are too close to -1 to obtain weights/nodes with high enough accuracy, or maybe N is too large. Try to use the multiple precision version. * -3 internal eigenproblem solver hasn't converged * -1 incorrect N/Alpha/Beta was passed * +1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. W - array[0..N-1] - array of quadrature weights. -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/
void gqgenerategaussjacobi(const ae_int_t n, const double alpha, const double beta, ae_int_t &info, real_1d_array &x, real_1d_array &w, const xparams _xparams = alglib::xdefault);
/************************************************************************* Returns nodes/weights for Gauss-Laguerre quadrature on [0,+inf) with weight function W(x)=Power(x,Alpha)*Exp(-x) INPUT PARAMETERS: N - number of nodes, >=1 Alpha - power-law coefficient, Alpha>-1 OUTPUT PARAMETERS: Info - error code: * -4 an error was detected when calculating weights/nodes. Alpha is too close to -1 to obtain weights/nodes with high enough accuracy, or maybe N is too large. Try to use the multiple precision version. * -3 internal eigenproblem solver hasn't converged * -1 incorrect N/Alpha was passed * +1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. W - array[0..N-1] - array of quadrature weights. -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/
void gqgenerategausslaguerre(const ae_int_t n, const double alpha, ae_int_t &info, real_1d_array &x, real_1d_array &w, const xparams _xparams = alglib::xdefault);
/************************************************************************* Returns nodes/weights for Gauss-Legendre quadrature on [-1,1] with N nodes. INPUT PARAMETERS: N - number of nodes, >=1 OUTPUT PARAMETERS: Info - error code: * -4 an error was detected when calculating weights/nodes. N is too large to obtain weights/nodes with high enough accuracy. Try to use multiple precision version. * -3 internal eigenproblem solver hasn't converged * -1 incorrect N was passed * +1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. W - array[0..N-1] - array of quadrature weights. -- ALGLIB -- Copyright 12.05.2009 by Bochkanov Sergey *************************************************************************/
void gqgenerategausslegendre(const ae_int_t n, ae_int_t &info, real_1d_array &x, real_1d_array &w, const xparams _xparams = alglib::xdefault);
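The defining property of an N-point Gauss-Legendre rule is exactness for polynomials of degree up to 2N-1. The sketch below demonstrates this with the standard textbook 3-point rule (nodes 0 and +-sqrt(3/5), weights 8/9 and 5/9); these are the well-known values such a generator would produce for N=3, hardcoded here purely for illustration:

```cpp
#include <cmath>

// Apply the 3-point Gauss-Legendre rule on [-1,1].
// Exact for polynomials of degree <= 2*3-1 = 5; approximate beyond that.
double gauss3(double (*f)(double))
{
    const double x[3] = {-std::sqrt(3.0/5.0), 0.0, std::sqrt(3.0/5.0)};
    const double w[3] = {5.0/9.0, 8.0/9.0, 5.0/9.0};
    double s = 0.0;
    for(int i = 0; i < 3; i++)
        s += w[i]*f(x[i]);
    return s;
}
```

Integrating x^4 gives the exact value 2/5, while x^6 (degree 6, beyond the exactness limit) is only approximated.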
/************************************************************************* Computation of nodes and weights for a Gauss-Lobatto quadrature formula The algorithm generates the N-point Gauss-Lobatto quadrature formula with weight function given by coefficients alpha and beta of a recurrence which generates a system of orthogonal polynomials. P-1(x) = 0 P0(x) = 1 Pn+1(x) = (x-alpha(n))*Pn(x) - beta(n)*Pn-1(x) and zeroth moment Mu0 Mu0 = integral(W(x)dx,a,b) INPUT PARAMETERS: Alpha - array[0..N-2], alpha coefficients Beta - array[0..N-2], beta coefficients. Zero-indexed element is not used, may be arbitrary. Beta[I]>0 Mu0 - zeroth moment of the weighting function. A - left boundary of the integration interval. B - right boundary of the integration interval. N - number of nodes of the quadrature formula, N>=3 (including the left and right boundary nodes). OUTPUT PARAMETERS: Info - error code: * -3 internal eigenproblem solver hasn't converged * -2 Beta[i]<=0 * -1 incorrect N was passed * 1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. W - array[0..N-1] - array of quadrature weights. -- ALGLIB -- Copyright 2005-2009 by Bochkanov Sergey *************************************************************************/
void gqgenerategausslobattorec(const real_1d_array &alpha, const real_1d_array &beta, const double mu0, const double a, const double b, const ae_int_t n, ae_int_t &info, real_1d_array &x, real_1d_array &w, const xparams _xparams = alglib::xdefault);
/************************************************************************* Computation of nodes and weights for a Gauss-Radau quadrature formula The algorithm generates the N-point Gauss-Radau quadrature formula with weight function given by the coefficients alpha and beta of a recurrence which generates a system of orthogonal polynomials. P-1(x) = 0 P0(x) = 1 Pn+1(x) = (x-alpha(n))*Pn(x) - beta(n)*Pn-1(x) and zeroth moment Mu0 Mu0 = integral(W(x)dx,a,b) INPUT PARAMETERS: Alpha - array[0..N-2], alpha coefficients. Beta - array[0..N-1], beta coefficients Zero-indexed element is not used. Beta[I]>0 Mu0 - zeroth moment of the weighting function. A - left boundary of the integration interval. N - number of nodes of the quadrature formula, N>=2 (including the left boundary node). OUTPUT PARAMETERS: Info - error code: * -3 internal eigenproblem solver hasn't converged * -2 Beta[i]<=0 * -1 incorrect N was passed * 1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. W - array[0..N-1] - array of quadrature weights. -- ALGLIB -- Copyright 2005-2009 by Bochkanov Sergey *************************************************************************/
void gqgenerategaussradaurec(const real_1d_array &alpha, const real_1d_array &beta, const double mu0, const double a, const ae_int_t n, ae_int_t &info, real_1d_array &x, real_1d_array &w, const xparams _xparams = alglib::xdefault);
/************************************************************************* Computation of nodes and weights for a Gauss quadrature formula The algorithm generates the N-point Gauss quadrature formula with weight function given by coefficients alpha and beta of a recurrence relation which generates a system of orthogonal polynomials: P-1(x) = 0 P0(x) = 1 Pn+1(x) = (x-alpha(n))*Pn(x) - beta(n)*Pn-1(x) and zeroth moment Mu0 Mu0 = integral(W(x)dx,a,b) INPUT PARAMETERS: Alpha - array[0..N-1], alpha coefficients Beta - array[0..N-1], beta coefficients Zero-indexed element is not used and may be arbitrary. Beta[I]>0. Mu0 - zeroth moment of the weight function. N - number of nodes of the quadrature formula, N>=1 OUTPUT PARAMETERS: Info - error code: * -3 internal eigenproblem solver hasn't converged * -2 Beta[i]<=0 * -1 incorrect N was passed * 1 OK X - array[0..N-1] - array of quadrature nodes, in ascending order. W - array[0..N-1] - array of quadrature weights. -- ALGLIB -- Copyright 2005-2009 by Bochkanov Sergey *************************************************************************/
void gqgeneraterec(const real_1d_array &alpha, const real_1d_array &beta, const double mu0, const ae_int_t n, ae_int_t &info, real_1d_array &x, real_1d_array &w, const xparams _xparams = alglib::xdefault);
hermitecalculate
hermitecoefficients
hermitesum
/************************************************************************* Calculation of the value of the Hermite polynomial. Parameters: n - degree, n>=0 x - argument Result: the value of the Hermite polynomial Hn at x *************************************************************************/
double hermitecalculate(const ae_int_t n, const double x, const xparams _xparams = alglib::xdefault);
/************************************************************************* Representation of Hn as C[0] + C[1]*X + ... + C[N]*X^N Input parameters: N - polynomial degree, n>=0 Output parameters: C - coefficients *************************************************************************/
void hermitecoefficients(const ae_int_t n, real_1d_array &c, const xparams _xparams = alglib::xdefault);
/************************************************************************* Summation of Hermite polynomials using Clenshaw's recurrence formula. This routine calculates c[0]*H0(x) + c[1]*H1(x) + ... + c[N]*HN(x) Parameters: n - degree, n>=0 x - argument Result: the value of the Hermite polynomial at x *************************************************************************/
double hermitesum(const real_1d_array &c, const ae_int_t n, const double x, const xparams _xparams = alglib::xdefault);
hqrndstate
hqrndcontinuous
hqrnddiscrete
hqrndexponential
hqrndnormal
hqrndnormal2
hqrndnormalm
hqrndnormalv
hqrndrandomize
hqrndseed
hqrnduniformi
hqrnduniformr
hqrndunit2
/************************************************************************* Portable high quality random number generator state. Initialized with HQRNDRandomize() or HQRNDSeed(). Fields: S1, S2 - seed values V - precomputed value MagicV - 'magic' value used to determine whether State structure was correctly initialized. *************************************************************************/
class hqrndstate { public: hqrndstate(); hqrndstate(const hqrndstate &rhs); hqrndstate& operator=(const hqrndstate &rhs); virtual ~hqrndstate(); };
/************************************************************************* This function generates a random number from a continuous distribution given by the finite sample X. INPUT PARAMETERS State - high quality random number generator, must be initialized with HQRNDRandomize() or HQRNDSeed(). X - finite sample, array[N] (can be larger, in this case only the leading N elements are used). THIS ARRAY MUST BE SORTED IN ASCENDING ORDER. N - number of elements to use, N>=1 RESULT this function returns a random number from a continuous distribution which approximates X as closely as possible. min(X)<=Result<=max(X). -- ALGLIB -- Copyright 08.11.2011 by Bochkanov Sergey *************************************************************************/
double hqrndcontinuous(hqrndstate &state, const real_1d_array &x, const ae_int_t n, const xparams _xparams = alglib::xdefault);
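One plausible way to draw from a continuous distribution approximating a sorted sample is to pick a random gap [X[i], X[i+1]] and return a uniform point inside it, which guarantees min(X)<=Result<=max(X). This is only an illustrative guess at such a scheme, using std::mt19937 in place of ALGLIB's generator, and not necessarily the algorithm hqrndcontinuous implements:

```cpp
#include <random>
#include <vector>

// Draw one value from the piecewise-uniform distribution spanned by a
// sorted sample x (x.size() >= 2): choose a gap, then a point inside it.
double sample_continuous(std::mt19937 &rng, const std::vector<double> &x)
{
    std::uniform_int_distribution<std::size_t> gap(0, x.size() - 2);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    std::size_t i = gap(rng);
    return x[i] + u(rng)*(x[i + 1] - x[i]);
}
```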
/************************************************************************* This function generates a random number from a discrete distribution given by the finite sample X. INPUT PARAMETERS State - high quality random number generator, must be initialized with HQRNDRandomize() or HQRNDSeed(). X - finite sample N - number of elements to use, N>=1 RESULT this function returns one of the X[i] for a random i=0..N-1 -- ALGLIB -- Copyright 08.11.2011 by Bochkanov Sergey *************************************************************************/
double hqrnddiscrete(hqrndstate &state, const real_1d_array &x, const ae_int_t n, const xparams _xparams = alglib::xdefault);
/************************************************************************* Random number generator: exponential distribution State structure must be initialized with HQRNDRandomize() or HQRNDSeed(). -- ALGLIB -- Copyright 11.08.2007 by Bochkanov Sergey *************************************************************************/
double hqrndexponential(hqrndstate &state, const double lambdav, const xparams _xparams = alglib::xdefault);
/************************************************************************* Random number generator: normal numbers This function generates one random number from normal distribution. Its performance is equal to that of HQRNDNormal2() State structure must be initialized with HQRNDRandomize() or HQRNDSeed(). -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/
double hqrndnormal(hqrndstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* Random number generator: normal numbers This function generates two independent random numbers from normal distribution. Its performance is equal to that of HQRNDNormal() State structure must be initialized with HQRNDRandomize() or HQRNDSeed(). -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/
void hqrndnormal2(hqrndstate &state, double &x1, double &x2, const xparams _xparams = alglib::xdefault);
/************************************************************************* Random number generator: matrix with random entries (normal distribution) This function generates MxN random matrix. State structure must be initialized with HQRNDRandomize() or HQRNDSeed(). -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/
void hqrndnormalm(hqrndstate &state, const ae_int_t m, const ae_int_t n, real_2d_array &x, const xparams _xparams = alglib::xdefault);
/************************************************************************* Random number generator: vector with random entries (normal distribution) This function generates N random numbers from normal distribution. State structure must be initialized with HQRNDRandomize() or HQRNDSeed(). -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/
void hqrndnormalv(hqrndstate &state, const ae_int_t n, real_1d_array &x, const xparams _xparams = alglib::xdefault);
/************************************************************************* HQRNDState initialization with random values which come from standard RNG. -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/
void hqrndrandomize(hqrndstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* HQRNDState initialization with seed values -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/
void hqrndseed(const ae_int_t s1, const ae_int_t s2, hqrndstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function generates a random integer in [0, N) 1. State structure must be initialized with HQRNDRandomize() or HQRNDSeed() 2. N can be any positive number except for very large numbers: * close to 2^31 on 32-bit systems * close to 2^62 on 64-bit systems An exception will be generated if N is too large. -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/
ae_int_t hqrnduniformi(hqrndstate &state, const ae_int_t n, const xparams _xparams = alglib::xdefault);
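Producing unbiased integers in [0, N) from a raw word generator is subtler than it looks: plain modulo reduction skews the distribution toward small values whenever N does not divide the generator's range. The standard remedy is rejection sampling; the sketch below illustrates the technique with std::mt19937 standing in for ALGLIB's internal generator (this is not ALGLIB's code):

```cpp
#include <cstdint>
#include <random>

// Unbiased integer in [0, n): accept only draws below the largest
// multiple of n that fits in 32 bits, then reduce. The rejected tail
// is the partial bucket that would otherwise bias small residues.
std::uint32_t uniform_below(std::mt19937 &rng, std::uint32_t n)
{
    std::uint32_t limit = UINT32_MAX - UINT32_MAX % n; // a multiple of n
    std::uint32_t v;
    do {
        v = rng();
    } while(v >= limit);
    return v % n;
}
```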
/************************************************************************* This function generates a random real number in (0,1), not including the interval boundaries State structure must be initialized with HQRNDRandomize() or HQRNDSeed(). -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/
double hqrnduniformr(hqrndstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* Random number generator: random X and Y such that X^2+Y^2=1 State structure must be initialized with HQRNDRandomize() or HQRNDSeed(). -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/
void hqrndunit2(hqrndstate &state, double &x, double &y, const xparams _xparams = alglib::xdefault);
incompletebeta
invincompletebeta
/************************************************************************* Incomplete beta integral Returns incomplete beta integral of the arguments, evaluated from zero to x. The function is defined as Gamma(a+b)/(Gamma(a)*Gamma(b)) * integral from 0 to x of t^(a-1) * (1-t)^(b-1) dt. The domain of definition is 0 <= x <= 1. In this implementation a and b are restricted to positive values. The integral from x to 1 may be obtained by the symmetry relation 1 - incbet( a, b, x ) = incbet( b, a, 1-x ). The integral is evaluated by a continued fraction expansion or, when b*x is small, by a power series. ACCURACY: Tested at uniformly distributed random points (a,b,x) with a and b in "domain" and x between 0 and 1. Relative error arithmetic domain # trials peak rms IEEE 0,5 10000 6.9e-15 4.5e-16 IEEE 0,85 250000 2.2e-13 1.7e-14 IEEE 0,1000 30000 5.3e-12 6.3e-13 IEEE 0,10000 250000 9.3e-11 7.1e-12 IEEE 0,100000 10000 8.7e-10 4.8e-11 Outputs smaller than the IEEE gradual underflow threshold were excluded from these statistics. Cephes Math Library, Release 2.8: June, 2000 Copyright 1984, 1995, 2000 by Stephen L. Moshier *************************************************************************/
double incompletebeta(const double a, const double b, const double x, const xparams _xparams = alglib::xdefault);
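For positive integer a and b the incomplete beta integral has a closed form as a binomial sum, incbet(a,b,x) = sum over j=a..a+b-1 of C(a+b-1,j) x^j (1-x)^(a+b-1-j), which makes the symmetry relation 1 - incbet(a,b,x) = incbet(b,a,1-x) easy to verify directly. The helper below is a hypothetical integer-only stand-in for illustration; ALGLIB's incompletebeta handles arbitrary positive real a, b:

```cpp
#include <cmath>

// Incomplete beta integral for positive integer a, b via the
// binomial-sum identity; C(n,j) is updated incrementally.
double incbet_int(int a, int b, double x)
{
    int n = a + b - 1;
    double sum = 0.0, binom = 1.0;      // binom = C(n, j)
    for(int j = 0; j <= n; j++)
    {
        if(j >= a)
            sum += binom * std::pow(x, j) * std::pow(1.0 - x, n - j);
        binom = binom * (n - j) / (j + 1);  // C(n, j) -> C(n, j+1)
    }
    return sum;
}
```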
/************************************************************************* Inverse of incomplete beta integral Given y, the function finds x such that incbet( a, b, x ) = y . The routine performs interval halving or Newton iterations to find the root of incbet(a,b,x) - y = 0. ACCURACY: Relative error: x a,b arithmetic domain domain # trials peak rms IEEE 0,1 .5,10000 50000 5.8e-12 1.3e-13 IEEE 0,1 .25,100 100000 1.8e-13 3.9e-15 IEEE 0,1 0,5 50000 1.1e-12 5.5e-15 With a and b constrained to half-integer or integer values: IEEE 0,1 .5,10000 50000 5.8e-12 1.1e-13 IEEE 0,1 .5,100 100000 1.7e-14 7.9e-16 With a = .5, b constrained to half-integer or integer values: IEEE 0,1 .5,10000 10000 8.3e-11 1.0e-11 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1996, 2000 by Stephen L. Moshier *************************************************************************/
double invincompletebeta(const double a, const double b, const double y, const xparams _xparams = alglib::xdefault);
idwbuilder
idwcalcbuffer
idwmodel
idwreport
idwbuildercreate
idwbuildersetalgomstab
idwbuildersetalgotextbookmodshepard
idwbuildersetalgotextbookshepard
idwbuildersetconstterm
idwbuildersetnlayers
idwbuildersetpoints
idwbuildersetuserterm
idwbuildersetzeroterm
idwcalc
idwcalc1
idwcalc2
idwcalc3
idwcalcbuf
idwcreatecalcbuffer
idwfit
idwgridcalc2v
idwgridcalc2vsubset
idwpeekprogress
idwserialize
idwtscalcbuf
idwunserialize
idw_d_mstab Simple model built with IDW-MSTAB algorithm
idw_d_serialize IDW model serialization/unserialization
/************************************************************************* Builder object used to generate IDW (Inverse Distance Weighting) model. *************************************************************************/
class idwbuilder { public: idwbuilder(); idwbuilder(const idwbuilder &rhs); idwbuilder& operator=(const idwbuilder &rhs); virtual ~idwbuilder(); };
/************************************************************************* Buffer object which is used to perform evaluation requests in the multithreaded mode (multiple threads working with same IDW object). This object should be created with idwcreatecalcbuffer(). *************************************************************************/
class idwcalcbuffer { public: idwcalcbuffer(); idwcalcbuffer(const idwcalcbuffer &rhs); idwcalcbuffer& operator=(const idwcalcbuffer &rhs); virtual ~idwcalcbuffer(); };
/************************************************************************* IDW (Inverse Distance Weighting) model object. *************************************************************************/
class idwmodel { public: idwmodel(); idwmodel(const idwmodel &rhs); idwmodel& operator=(const idwmodel &rhs); virtual ~idwmodel(); };
/************************************************************************* IDW fitting report: rmserror RMS error avgerror average error maxerror maximum error r2 coefficient of determination, R-squared, 1-RSS/TSS *************************************************************************/
class idwreport { public: idwreport(); idwreport(const idwreport &rhs); idwreport& operator=(const idwreport &rhs); virtual ~idwreport(); double rmserror; double avgerror; double maxerror; double r2; };
/************************************************************************* This subroutine creates a builder object used to generate an IDW model from an irregularly sampled (scattered) dataset. Multidimensional scalar/vector-valued functions are supported. The builder object is used to fit a model to data as follows: * the builder object is created with the idwbuildercreate() function * the dataset is added with the idwbuildersetpoints() function * one of the modern IDW algorithms is chosen with: * idwbuildersetalgomstab() - Multilayer STABilized algorithm (interpolation) Alternatively, one of the textbook algorithms can be chosen (not recommended): * idwbuildersetalgotextbookshepard() - textbook Shepard algorithm * idwbuildersetalgotextbookmodshepard() - textbook modified Shepard algorithm * finally, model construction is performed with the idwfit() function. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: NX - dimensionality of the argument, NX>=1 NY - dimensionality of the function being modeled, NY>=1; NY=1 corresponds to a classic scalar function, NY>=2 corresponds to a vector-valued function. OUTPUT PARAMETERS: State- builder object -- ALGLIB PROJECT -- Copyright 22.10.2018 by Bochkanov Sergey *************************************************************************/
void idwbuildercreate(const ae_int_t nx, const ae_int_t ny, idwbuilder &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* This function sets IDW model construction algorithm to the Multilayer Stabilized IDW method (IDW-MSTAB), the latest incarnation of the inverse distance weighting interpolation which fixes shortcomings of the original and modified Shepard's variants. The distinctive features of IDW-MSTAB are: 1) exact interpolation is pursued (as opposed to fitting and noise suppression) 2) improved robustness when compared with that of other algorithms: * MSTAB shows almost no strange fitting artifacts like ripples and sharp spikes (unlike N-dimensional splines and HRBFs) * MSTAB does not return function values far from the interval spanned by the dataset; say, if all your points have |f|<=1, you can be sure that the model value won't deviate too much from [-1,+1] 3) good model construction time competing with that of HRBFs and bicubic splines 4) ability to work with any number of dimensions, starting from NX=1 The drawbacks of IDW-MSTAB (and all IDW algorithms in general) are: 1) dependence of the model evaluation time on the search radius 2) bad extrapolation properties, models built by this method are usually conservative in their predictions Thus, IDW-MSTAB is a good "default" option if you want to perform scattered multidimensional interpolation. Although it has its drawbacks, it is easy to use and robust, which makes it a good first step. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: State - builder object SRad - initial search radius, SRad>0 is required. 
A model value is obtained by "smart" averaging of the dataset points within the search radius. NOTE 1: IDW interpolation can correctly handle ANY dataset, including datasets with non-distinct points. In case non-distinct points are found, an average value for such points will be calculated. NOTE 2: the memory requirements for model storage are O(NPoints*NLayers). The model construction needs twice as much memory as model storage. NOTE 3: by default 16 IDW layers are built, which is enough for most cases. You can change this parameter with the idwbuildersetnlayers() method. Larger values may be necessary if you need to reproduce extra-fine details at distances smaller than SRad/65536. Smaller values may be necessary if you have to save memory and computing time, and are ready to sacrifice some model quality. ALGORITHM DESCRIPTION The ALGLIB implementation of IDW is somewhat similar to the modified Shepard's method (the one with search radius R) but overcomes several of its drawbacks, namely: 1) a tendency to show stepwise behavior for uniform datasets 2) a tendency to show terrible interpolation properties for highly nonuniform datasets which often arise in geospatial tasks (function values are densely sampled across multiple separated "tracks") The IDW-MSTAB method performs several passes over the dataset and builds a sequence of progressively refined IDW models (layers), which starts from the one with the largest search radius SRad and continues to smaller search radii until the required number of layers is built. The highest layers reproduce global behavior of the target function at larger distances whilst the lower layers reproduce fine details at smaller distances. 
Each layer is an IDW model built with the following modifications: * weights go to zero as the distance approaches the current search radius * an additional regularizing term is added to the distance: w=1/(d^2+lambda) * an additional fictitious term with unit weight and zero function value is added in order to promote continuity properties at the isolated and boundary points By default, 16 layers are built, which is enough for most cases. You can change this parameter with the idwbuildersetnlayers() method. -- ALGLIB -- Copyright 22.10.2018 by Bochkanov Sergey *************************************************************************/
void idwbuildersetalgomstab(idwbuilder &state, const double srad, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* This function sets the IDW model construction algorithm to the 'textbook' modified Shepard's algorithm with a user-specified search radius. IMPORTANT: we do NOT recommend using textbook IDW algorithms because they have terrible interpolation properties. Use MSTAB in all cases. INPUT PARAMETERS: State - builder object R - search radius NOTE 1: IDW interpolation can correctly handle ANY dataset, including datasets with non-distinct points. In case non-distinct points are found, an average value for such points will be calculated. -- ALGLIB -- Copyright 22.10.2018 by Bochkanov Sergey *************************************************************************/
void idwbuildersetalgotextbookmodshepard(idwbuilder &state, const double r, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets the IDW model construction algorithm to the textbook Shepard's algorithm with a custom (user-specified) power parameter. IMPORTANT: we do NOT recommend using textbook IDW algorithms because they have terrible interpolation properties. Use MSTAB in all cases. INPUT PARAMETERS: State - builder object P - power parameter, P>0; a good value to start with is 2.0 NOTE 1: IDW interpolation can correctly handle ANY dataset, including datasets with non-distinct points. In case non-distinct points are found, an average value for such points will be calculated. -- ALGLIB -- Copyright 22.10.2018 by Bochkanov Sergey *************************************************************************/
void idwbuildersetalgotextbookshepard(idwbuilder &state, const double p, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets a constant prior term (model value at infinity). The constant prior term is determined as the mean value over the dataset. INPUT PARAMETERS: S - builder object -- ALGLIB -- Copyright 29.10.2018 by Bochkanov Sergey *************************************************************************/
void idwbuildersetconstterm(idwbuilder &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function changes the number of layers used by the IDW-MSTAB algorithm. The more layers you have, the finer the details that can be reproduced by the IDW model. The fewer layers you have, the less memory and CPU time are consumed by the model. Memory consumption grows linearly with the layers count, running time grows sub-linearly. The default number of layers is 16, which allows you to reproduce details at distances down to SRad/65536. You will rarely need to change it. INPUT PARAMETERS: State - builder object NLayers - NLayers>=1, the number of layers used by the model. -- ALGLIB -- Copyright 22.10.2018 by Bochkanov Sergey *************************************************************************/
void idwbuildersetnlayers(idwbuilder &state, const ae_int_t nlayers, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function adds a dataset to the builder object. This function overrides the results of previous calls, i.e. multiple calls of this function will result in only the last dataset being added. INPUT PARAMETERS: State - builder object XY - points, array[N,NX+NY]. One row corresponds to one point in the dataset. The first NX elements are coordinates, the next NY elements are function values. The array may be larger than specified, in this case only the leading [N,NX+NY] elements will be used. N - number of points in the dataset, N>=0. -- ALGLIB -- Copyright 22.10.2018 by Bochkanov Sergey *************************************************************************/
void idwbuildersetpoints(idwbuilder &state, const real_2d_array &xy, const ae_int_t n, const xparams _xparams = alglib::xdefault); void idwbuildersetpoints(idwbuilder &state, const real_2d_array &xy, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* This function sets the prior term (model value at infinity) to a user-specified value. INPUT PARAMETERS: S - builder object V - value for the user-defined prior NOTE: for vector-valued models all components of the prior are set to the same user-specified value -- ALGLIB -- Copyright 29.10.2018 by Bochkanov Sergey *************************************************************************/
void idwbuildersetuserterm(idwbuilder &state, const double v, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets a zero prior term (model value at infinity). INPUT PARAMETERS: S - builder object -- ALGLIB -- Copyright 29.10.2018 by Bochkanov Sergey *************************************************************************/
void idwbuildersetzeroterm(idwbuilder &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function calculates values of the IDW model at the given point. This is a general function which can be used for arbitrary NX (dimension of the space of arguments) and NY (dimension of the function itself). However, when NY=1 you may find it more convenient to use idwcalc1(), idwcalc2() or idwcalc3(). NOTE: this function modifies internal temporaries of the IDW model, thus IT IS NOT THREAD-SAFE! If you want to perform parallel model evaluation from multiple threads, use idwtscalcbuf() with a per-thread buffer object. INPUT PARAMETERS: S - IDW model X - coordinates, array[NX]. X may have more than NX elements, in this case only the leading NX will be used. OUTPUT PARAMETERS: Y - function value, array[NY]. Y is an out-parameter and will be reallocated after the call to this function. In case you want to reuse previously allocated Y, you may use idwcalcbuf(), which reallocates Y only when it is too small. -- ALGLIB -- Copyright 22.10.2018 by Bochkanov Sergey *************************************************************************/
void idwcalc(idwmodel &s, const real_1d_array &x, real_1d_array &y, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* IDW interpolation: scalar target, 1-dimensional argument NOTE: this function modifies internal temporaries of the IDW model, thus IT IS NOT THREAD-SAFE! If you want to perform parallel model evaluation from multiple threads, use idwtscalcbuf() with a per-thread buffer object. INPUT PARAMETERS: S - IDW interpolant built with IDW builder X0 - argument value Result: IDW interpolant S(X0) -- ALGLIB -- Copyright 22.10.2018 by Bochkanov Sergey *************************************************************************/
double idwcalc1(idwmodel &s, const double x0, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* IDW interpolation: scalar target, 2-dimensional argument NOTE: this function modifies internal temporaries of the IDW model, thus IT IS NOT THREAD-SAFE! If you want to perform parallel model evaluation from multiple threads, use idwtscalcbuf() with a per-thread buffer object. INPUT PARAMETERS: S - IDW interpolant built with IDW builder X0, X1 - argument values Result: IDW interpolant S(X0,X1) -- ALGLIB -- Copyright 22.10.2018 by Bochkanov Sergey *************************************************************************/
double idwcalc2(idwmodel &s, const double x0, const double x1, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* IDW interpolation: scalar target, 3-dimensional argument NOTE: this function modifies internal temporaries of the IDW model, thus IT IS NOT THREAD-SAFE! If you want to perform parallel model evaluation from multiple threads, use idwtscalcbuf() with a per-thread buffer object. INPUT PARAMETERS: S - IDW interpolant built with IDW builder X0,X1,X2 - argument values Result: IDW interpolant S(X0,X1,X2) -- ALGLIB -- Copyright 22.10.2018 by Bochkanov Sergey *************************************************************************/
double idwcalc3(idwmodel &s, const double x0, const double x1, const double x2, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* This function calculates values of the IDW model at the given point. Same as idwcalc(), but does not reallocate Y when it is large enough to store the function values. NOTE: this function modifies internal temporaries of the IDW model, thus IT IS NOT THREAD-SAFE! If you want to perform parallel model evaluation from multiple threads, use idwtscalcbuf() with a per-thread buffer object. INPUT PARAMETERS: S - IDW model X - coordinates, array[NX]. X may have more than NX elements, in this case only the leading NX will be used. Y - possibly preallocated array OUTPUT PARAMETERS: Y - function value, array[NY]. Y is not reallocated when it is larger than NY. -- ALGLIB -- Copyright 22.10.2018 by Bochkanov Sergey *************************************************************************/
void idwcalcbuf(idwmodel &s, const real_1d_array &x, real_1d_array &y, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function creates a buffer structure which can be used to perform parallel IDW model evaluations (with one IDW model instance being used from multiple threads, as long as different threads use different instances of the buffer). This buffer object can be used with the idwtscalcbuf() function (here "ts" stands for "thread-safe", "buf" is a suffix which denotes a function which reuses previously allocated output space). How to use it: * create an IDW model structure or load it from a file * call idwcreatecalcbuffer(), once per thread working with the IDW model (you should call this function only AFTER model initialization, see below for more information) * call idwtscalcbuf() from different threads, with each thread working with its own buffer object. INPUT PARAMETERS S - IDW model OUTPUT PARAMETERS Buf - external buffer. IMPORTANT: the buffer object should be used only with the IDW model object which was used to initialize the buffer. Any attempt to use the buffer with a different object is dangerous - you may get a memory violation error because the sizes of internal arrays do not match the dimensions of the IDW structure. IMPORTANT: you should call this function only for a model which was built with the model builder (or unserialized from a file). The sizes of some internal structures are determined only after the model is built, so a buffer object created before the model construction stage will be useless (and any attempt to use it will result in an exception). -- ALGLIB -- Copyright 22.10.2018 by Sergey Bochkanov *************************************************************************/
void idwcreatecalcbuffer(const idwmodel &s, idwcalcbuffer &buf, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function fits an IDW model to the dataset using the current IDW construction algorithm. The model being built and a fitting report are returned. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: State - builder object OUTPUT PARAMETERS: Model - an IDW model built with the current algorithm Rep - model fitting report, fields of this structure contain information about average fitting errors. NOTE: although the IDW-MSTAB algorithm is an interpolation method, i.e. it tries to fit the model exactly, it can handle datasets with non-distinct points which cannot be fit exactly; in such cases least-squares fitting is performed. -- ALGLIB -- Copyright 22.10.2018 by Bochkanov Sergey *************************************************************************/
void idwfit(idwbuilder &state, idwmodel &model, idwreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* This function calculates values of an IDW model at a regular grid, which has N0*N1 points, with Point[I,J] = (X0[I], X1[J]). Vector-valued IDW models are supported. This function returns 0.0 when: * the model is not initialized * NX<>2 ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. NOTE: Parallel processing is implemented only for modern (MSTAB) IDW models. INPUT PARAMETERS: S - IDW model, used in read-only mode, can be shared between multiple invocations of this function from multiple threads. X0 - array of grid nodes, first coordinates, array[N0]. Must be sorted in ascending order; an exception is generated if the array is not correctly ordered. N0 - grid size (number of nodes) in the first dimension, N0>=1 X1 - array of grid nodes, second coordinates, array[N1]. Must be sorted in ascending order; an exception is generated if the array is not correctly ordered. N1 - grid size (number of nodes) in the second dimension, N1>=1 OUTPUT PARAMETERS: Y - function values, array[NY*N0*N1], where NY is a number of "output" vector values (this function supports vector-valued IDW models). Y is an out-variable and is reallocated by this function. Y[K+NY*(I0+I1*N0)]=F_k(X0[I0],X1[I1]), for: * K=0...NY-1 * I0=0...N0-1 * I1=0...N1-1 NOTE: this function supports weakly ordered grid nodes, i.e. you may have X[i]=X[i+1] for some i. It does not provide you any performance benefits due to duplication of points, just convenience and flexibility. NOTE: this function is re-entrant, i.e. 
you may use same idwmodel structure in multiple threads calling this function for different grids. NOTE: if you need function values on some subset of regular grid, which may be described as "several compact and dense islands", you may use idwgridcalc2vsubset(). -- ALGLIB -- Copyright 24.11.2023 by Bochkanov Sergey *************************************************************************/
void idwgridcalc2v(const idwmodel &s, const real_1d_array &x0, const ae_int_t n0, const real_1d_array &x1, const ae_int_t n1, real_1d_array &y, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function calculates values of an IDW model at some subset of a regular grid: * the grid has N0*N1 points, with Point[I,J] = (X0[I], X1[J]) * only values at some subset of the grid are required Vector-valued IDW models are supported. This function returns 0.0 when: * the model is not initialized * NX<>2 ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. NOTE: Parallel processing is implemented only for modern (MSTAB) IDW models. INPUT PARAMETERS: S - IDW model, used in read-only mode, can be shared between multiple invocations of this function from multiple threads. X0 - array of grid nodes, first coordinates, array[N0]. Must be sorted in ascending order; an exception is generated if the array is not correctly ordered. N0 - grid size (number of nodes) in the first dimension, N0>=1 X1 - array of grid nodes, second coordinates, array[N1]. Must be sorted in ascending order; an exception is generated if the array is not correctly ordered. N1 - grid size (number of nodes) in the second dimension, N1>=1 FlagY - array[N0*N1]: * Y[I0+I1*N0] corresponds to node (X0[I0],X1[I1]) * it is a "bitmap" array which contains False for nodes which are NOT calculated, and True for nodes which are required. OUTPUT PARAMETERS: Y - function values, array[NY*N0*N1], where NY is a number of "output" vector values (this function supports vector-valued IDW models): * Y[K+NY*(I0+I1*N0)]=F_k(X0[I0],X1[I1]), for K=0...NY-1, I0=0...N0-1, I1=0...N1-1. 
* elements of Y[] which correspond to FlagY[]=True are loaded by model values (which may be exactly zero for some nodes). * elements of Y[] which correspond to FlagY[]=False MAY be initialized by zeros OR may be calculated. Generally, they are not calculated, but future SIMD-capable versions may compute several elements in a batch. NOTE: this function supports weakly ordered grid nodes, i.e. you may have X[i]=X[i+1] for some i. It does not provide you any performance benefits due to duplication of points, just convenience and flexibility. NOTE: this function is re-entrant, i.e. you may use same idwmodel structure in multiple threads calling this function for different grids. -- ALGLIB -- Copyright 24.11.2023 by Bochkanov Sergey *************************************************************************/
void idwgridcalc2vsubset(const idwmodel &s, const real_1d_array &x0, const ae_int_t n0, const real_1d_array &x1, const ae_int_t n1, const boolean_1d_array &flagy, real_1d_array &y, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function is used to peek into the IDW construction process from some other thread and get the progress indicator. It returns a value in [0,1]. IMPORTANT: only the MSTAB algorithm supports peeking into the progress indicator. Legacy versions of the Shepard's method do not support it; you will always get zero as the result. INPUT PARAMETERS: S - IDW builder object RESULT: progress value, in [0,1] -- ALGLIB -- Copyright 27.11.2023 by Bochkanov Sergey *************************************************************************/
double idwpeekprogress(const idwbuilder &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function serializes data structure to string/stream. Important properties of s_out: * it contains alphanumeric characters, dots, underscores, minus signs * these symbols are grouped into words, which are separated by spaces and Windows-style (CR+LF) newlines * although serializer uses spaces and CR+LF as separators, you can replace any separator character by arbitrary combination of spaces, tabs, Windows or Unix newlines. It allows flexible reformatting of the string in case you want to include it into a text or XML file. But you should not insert separators into the middle of the "words" nor should you change the case of letters. * s_out can be freely moved between 32-bit and 64-bit systems, little and big endian machines, and so on. You can serialize structure on 32-bit machine and unserialize it on 64-bit one (or vice versa), or serialize it on SPARC and unserialize on x86. You can also serialize it in C++ version of ALGLIB and unserialize it in C# one, and vice versa. *************************************************************************/
void idwserialize(const idwmodel &obj, std::string &s_out); void idwserialize(const idwmodel &obj, std::ostream &s_out);
/************************************************************************* This function calculates values of the IDW model at the given point, using an external buffer object (internal temporaries of the IDW model are not modified). This function allows using the same IDW model object in different threads, assuming that different threads use different instances of the buffer structure. INPUT PARAMETERS: S - IDW model, may be shared between different threads Buf - buffer object created for this particular instance of the IDW model with idwcreatecalcbuffer(). X - coordinates, array[NX]. X may have more than NX elements, in this case only the leading NX will be used. Y - possibly preallocated array OUTPUT PARAMETERS: Y - function value, array[NY]. Y is not reallocated when it is larger than NY. -- ALGLIB -- Copyright 13.12.2011 by Bochkanov Sergey *************************************************************************/
void idwtscalcbuf(const idwmodel &s, idwcalcbuffer &buf, const real_1d_array &x, real_1d_array &y, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function unserializes data structure from string/stream. *************************************************************************/
void idwunserialize(const std::string &s_in, idwmodel &obj); void idwunserialize(const std::istream &s_in, idwmodel &obj);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example illustrates basic concepts of the IDW models:
        // creation and evaluation.
        // 
        // Suppose that we have a set of 2-dimensional points with associated
        // scalar function values, and we want to build an IDW model using
        // our data.
        // 
        // NOTE: we can work with N-dimensional models and vector-valued functions too :)
        // 
        // Typical sequence of steps is given below:
        // 1. we create IDW builder object
        // 2. we attach our dataset to the IDW builder and tune algorithm settings
        // 3. we generate IDW model
        // 4. we use IDW model instance (evaluate, serialize, etc.)
        //
        double v;

        //
        // Step 1: IDW builder creation.
        //
        // We have to specify dimensionality of the space (2 or 3) and
        // dimensionality of the function (scalar or vector).
        //
        // A new builder object is empty - it has no dataset and uses
        // default model construction settings
        //
        idwbuilder builder;
        idwbuildercreate(2, 1, builder);

        //
        // Step 2: dataset addition
        //
        // XY contains two points - x0=(-1,0) and x1=(+1,0) -
        // and two function values f(x0)=2, f(x1)=3.
        //
        real_2d_array xy = "[[-1,0,2],[+1,0,3]]";
        idwbuildersetpoints(builder, xy);

        //
        // Step 3: choose IDW algorithm and generate model
        //
        // We use the multilayer stabilized IDW algorithm (IDW-MSTAB) with the following parameters:
        // * SRad - set to 5.0 (search radius must be large enough)
        //
        // IDW-MSTAB algorithm is a state-of-the-art implementation of IDW which
        // is competitive with RBFs and bicubic splines. See comments on the
        // idwbuildersetalgomstab() function for more information.
        //
        idwmodel model;
        idwreport rep;
        idwbuildersetalgomstab(builder, 5.0);
        idwfit(builder, model, rep);

        //
        // Step 4: model was built, evaluate its value
        //
        v = idwcalc2(model, 1.0, 0.0);
        printf("%.2f\n", double(v)); // EXPECTED: 3.000
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example shows how to serialize and unserialize IDW model.
        // 
        // Suppose that we have a set of 2-dimensional points with associated
        // scalar function values, and we have built an IDW model using
        // our data.
        //
        // This model can be serialized to string or stream. ALGLIB supports
        // flexible (un)serialization, i.e. you can move serialized model
        // representation between different machines (32-bit or 64-bit),
        // different CPU architectures (x86/64, ARM) or even different
        // programming languages supported by ALGLIB (C#, C++, ...).
        //
        // Our first step is to build model, evaluate it at point (1,0),
        // and serialize it to string.
        //
        std::string s;
        double v;
        real_2d_array xy = "[[-1,0,2],[+1,0,3]]";
        idwbuilder builder;
        idwmodel model;
        idwmodel model2;
        idwreport rep;
        idwbuildercreate(2, 1, builder);
        idwbuildersetpoints(builder, xy);
        idwbuildersetalgomstab(builder, 5.0);
        idwfit(builder, model, rep);
        v = idwcalc2(model, 1.0, 0.0);
        printf("%.2f\n", double(v)); // EXPECTED: 3.000

        //
        // Serialization + unserialization to a different instance
        // of the model class.
        //
        alglib::idwserialize(model, s);
        alglib::idwunserialize(s, model2);

        //
        // Evaluate unserialized model at the same point
        //
        v = idwcalc2(model2, 1.0, 0.0);
        printf("%.2f\n", double(v)); // EXPECTED: 3.000
    }
    catch(const alglib::ap_error &alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

incompletegamma
incompletegammac
invincompletegammac
/*************************************************************************
Incomplete gamma integral

The function is defined by

    igam(a,x) = 1/Gamma(a) * integral(exp(-t)*t^(a-1), t=0..x)

In this implementation both arguments must be positive. The integral is
evaluated by either a power series or continued fraction expansion,
depending on the relative values of a and x.

ACCURACY:

                     Relative error:
arithmetic   domain     # trials      peak         rms
   IEEE      0,30       200000       3.6e-14     2.9e-15
   IEEE      0,100      300000       9.9e-14     1.5e-14

Cephes Math Library Release 2.8:  June, 2000
Copyright 1985, 1987, 2000 by Stephen L. Moshier
*************************************************************************/
double incompletegamma(const double a, const double x, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Complemented incomplete gamma integral

The function is defined by

    igamc(a,x) = 1 - igam(a,x)
               = 1/Gamma(a) * integral(exp(-t)*t^(a-1), t=x..infinity)

In this implementation both arguments must be positive. The integral is
evaluated by either a power series or continued fraction expansion,
depending on the relative values of a and x.

ACCURACY:

Tested at random a, x.
               a         x                      Relative error:
arithmetic   domain    domain     # trials      peak         rms
   IEEE     0.5,100    0,100      200000       1.9e-14     1.7e-15
   IEEE     0.01,0.5   0,100      200000       1.4e-13     1.6e-15

Cephes Math Library Release 2.8:  June, 2000
Copyright 1985, 1987, 2000 by Stephen L. Moshier
*************************************************************************/
double incompletegammac(const double a, const double x, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Inverse of complemented incomplete gamma integral

Given p, the function finds x such that

    igamc(a,x) = p.

Starting with the approximate value

    x = a*t^3,  where  t = 1 - d - ndtri(p)*sqrt(d)  and  d = 1/(9a),

the routine performs up to 10 Newton iterations to find the root of
igamc(a,x) - p = 0.

ACCURACY:

Tested at random a, p in the intervals indicated.

               a         p                      Relative error:
arithmetic   domain    domain     # trials      peak         rms
   IEEE     0.5,100    0,0.5      100000       1.0e-14     1.7e-15
   IEEE     0.01,0.5   0,0.5      100000       9.0e-14     3.4e-15
   IEEE     0.5,10000  0,0.5       20000       2.3e-13     3.8e-14

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier
*************************************************************************/
double invincompletegammac(const double a, const double y0, const xparams _xparams = alglib::xdefault);
nsfitspheremcc
nsfitspheremic
nsfitspheremzc
nsfitspherex
spline1dfitcubic
spline1dfithermite
spline1dfitpenalized
spline1dfitpenalizedw
/************************************************************************* This function is left for backward compatibility. Use fitspheremc() instead. -- ALGLIB -- Copyright 14.04.2017 by Bochkanov Sergey *************************************************************************/
void nsfitspheremcc(const real_2d_array &xy, const ae_int_t npoints, const ae_int_t nx, real_1d_array &cx, double &rhi, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function is left for backward compatibility. Use fitspheremi() instead. -- ALGLIB -- Copyright 14.04.2017 by Bochkanov Sergey *************************************************************************/
void nsfitspheremic(const real_2d_array &xy, const ae_int_t npoints, const ae_int_t nx, real_1d_array &cx, double &rlo, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function is left for backward compatibility. Use fitspheremz() instead. -- ALGLIB -- Copyright 14.04.2017 by Bochkanov Sergey *************************************************************************/
void nsfitspheremzc(const real_2d_array &xy, const ae_int_t npoints, const ae_int_t nx, real_1d_array &cx, double &rlo, double &rhi, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function is left for backward compatibility. Use fitspherex() instead. -- ALGLIB -- Copyright 14.04.2017 by Bochkanov Sergey *************************************************************************/
void nsfitspherex(const real_2d_array &xy, const ae_int_t npoints, const ae_int_t nx, const ae_int_t problemtype, const double epsx, const ae_int_t aulits, const double penalty, real_1d_array &cx, double &rlo, double &rhi, const xparams _xparams = alglib::xdefault);
/************************************************************************* Deprecated fitting function with O(N*M^2+M^3) running time. Superseded by spline1dfit(). -- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/
void spline1dfitcubic(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, const ae_int_t m, spline1dinterpolant &s, spline1dfitreport &rep, const xparams _xparams = alglib::xdefault);
void spline1dfitcubic(const real_1d_array &x, const real_1d_array &y, const ae_int_t m, spline1dinterpolant &s, spline1dfitreport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Deprecated fitting function with O(N*M^2+M^3) running time. Superseded by spline1dfit(). -- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/
void spline1dfithermite(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, const ae_int_t m, spline1dinterpolant &s, spline1dfitreport &rep, const xparams _xparams = alglib::xdefault);
void spline1dfithermite(const real_1d_array &x, const real_1d_array &y, const ae_int_t m, spline1dinterpolant &s, spline1dfitreport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function is an obsolete and deprecated version of fitting by penalized cubic spline. It was superseded by spline1dfit(), which is an orders of magnitude faster and more memory-efficient implementation. Do NOT use this function in the new code! -- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/
void spline1dfitpenalized(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, const ae_int_t m, const double rho, ae_int_t &info, spline1dinterpolant &s, spline1dfitreport &rep, const xparams _xparams = alglib::xdefault);
void spline1dfitpenalized(const real_1d_array &x, const real_1d_array &y, const ae_int_t m, const double rho, ae_int_t &info, spline1dinterpolant &s, spline1dfitreport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function is an obsolete and deprecated version of fitting by penalized cubic spline. It was superseded by spline1dfit(), which is an orders of magnitude faster and more memory-efficient implementation. Do NOT use this function in the new code! -- ALGLIB PROJECT -- Copyright 19.10.2010 by Bochkanov Sergey *************************************************************************/
void spline1dfitpenalizedw(const real_1d_array &x, const real_1d_array &y, const real_1d_array &w, const ae_int_t n, const ae_int_t m, const double rho, ae_int_t &info, spline1dinterpolant &s, spline1dfitreport &rep, const xparams _xparams = alglib::xdefault);
void spline1dfitpenalizedw(const real_1d_array &x, const real_1d_array &y, const real_1d_array &w, const ae_int_t m, const double rho, ae_int_t &info, spline1dinterpolant &s, spline1dfitreport &rep, const xparams _xparams = alglib::xdefault);
rmatrixinvupdatecolumn
rmatrixinvupdaterow
rmatrixinvupdatesimple
rmatrixinvupdateuv
/*************************************************************************
Inverse matrix update by the Sherman-Morrison formula

The algorithm updates matrix A^-1 when adding a vector to a column of
matrix A.

Input parameters:
    InvA        -   inverse of matrix A.
                    Array whose indexes range within [0..N-1, 0..N-1].
    N           -   size of matrix A.
    UpdColumn   -   the column of A whose vector U was added.
                    0 <= UpdColumn <= N-1
    U           -   the vector to be added to a column.
                    Array whose index ranges within [0..N-1].

Output parameters:
    InvA        -   inverse of modified matrix A.

  -- ALGLIB --
     Copyright 2005 by Bochkanov Sergey
*************************************************************************/
void rmatrixinvupdatecolumn(real_2d_array &inva, const ae_int_t n, const ae_int_t updcolumn, const real_1d_array &u, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Inverse matrix update by the Sherman-Morrison formula

The algorithm updates matrix A^-1 when adding a vector to a row of
matrix A.

Input parameters:
    InvA        -   inverse of matrix A.
                    Array whose indexes range within [0..N-1, 0..N-1].
    N           -   size of matrix A.
    UpdRow      -   the row of A whose vector V was added.
                    0 <= UpdRow <= N-1
    V           -   the vector to be added to a row.
                    Array whose index ranges within [0..N-1].

Output parameters:
    InvA        -   inverse of modified matrix A.

  -- ALGLIB --
     Copyright 2005 by Bochkanov Sergey
*************************************************************************/
void rmatrixinvupdaterow(real_2d_array &inva, const ae_int_t n, const ae_int_t updrow, const real_1d_array &v, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Inverse matrix update by the Sherman-Morrison formula

The algorithm updates matrix A^-1 when adding a number to an element of
matrix A.

Input parameters:
    InvA        -   inverse of matrix A.
                    Array whose indexes range within [0..N-1, 0..N-1].
    N           -   size of matrix A.
    UpdRow      -   row where the element to be updated is stored.
    UpdColumn   -   column where the element to be updated is stored.
    UpdVal      -   a number to be added to the element.

Output parameters:
    InvA        -   inverse of modified matrix A.

  -- ALGLIB --
     Copyright 2005 by Bochkanov Sergey
*************************************************************************/
void rmatrixinvupdatesimple(real_2d_array &inva, const ae_int_t n, const ae_int_t updrow, const ae_int_t updcolumn, const double updval, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Inverse matrix update by the Sherman-Morrison formula

The algorithm computes the inverse of matrix A+u*v' by using the given
matrix A^-1 and the vectors u and v.

Input parameters:
    InvA        -   inverse of matrix A.
                    Array whose indexes range within [0..N-1, 0..N-1].
    N           -   size of matrix A.
    U           -   the vector modifying the matrix.
                    Array whose index ranges within [0..N-1].
    V           -   the vector modifying the matrix.
                    Array whose index ranges within [0..N-1].

Output parameters:
    InvA        -   inverse of matrix A + u*v'.

  -- ALGLIB --
     Copyright 2005 by Bochkanov Sergey
*************************************************************************/
void rmatrixinvupdateuv(real_2d_array &inva, const ae_int_t n, const real_1d_array &u, const real_1d_array &v, const xparams _xparams = alglib::xdefault);
sparsesolverreport
sparsesolverstate
sparsesolvegmres
sparsesolvercreate
sparsesolverooccontinue
sparsesolveroocgetrequestdata
sparsesolveroocgetrequestdata1
sparsesolveroocgetrequestinfo
sparsesolveroocsendresult
sparsesolveroocstart
sparsesolveroocstop
sparsesolverrequesttermination
sparsesolverresults
sparsesolversetalgogmres
sparsesolversetcond
sparsesolversetstartingpoint
sparsesolversetxrep
sparsesolversolve
sparsesolversolvesymmetric
sparsesolvesymmetricgmres
/*************************************************************************
This structure is a sparse solver report (both direct and iterative
solvers use this structure).

Following fields can be accessed by users:
* TerminationType (specific error codes depend on the solver being used,
  with positive values ALWAYS signaling that something useful is returned
  in X, and negative values ALWAYS meaning critical failures)
* NMV - number of matrix-vector products performed (0 for direct solvers)
* IterationsCount - inner iterations count (0 for direct solvers)
* R2 - squared residual
*************************************************************************/
class sparsesolverreport
{
public:
    sparsesolverreport();
    sparsesolverreport(const sparsesolverreport &rhs);
    sparsesolverreport& operator=(const sparsesolverreport &rhs);
    virtual ~sparsesolverreport();

    ae_int_t terminationtype;
    ae_int_t nmv;
    ae_int_t iterationscount;
    double r2;
};
/*************************************************************************
This object stores the state of the sparse linear solver.

You should use ALGLIB functions to work with this object.
Never try to access its fields directly!
*************************************************************************/
class sparsesolverstate
{
public:
    sparsesolverstate();
    sparsesolverstate(const sparsesolverstate &rhs);
    sparsesolverstate& operator=(const sparsesolverstate &rhs);
    virtual ~sparsesolverstate();
};
/*************************************************************************
Solving sparse linear system A*x=b using GMRES(k) method.

This function provides a convenience API for the 'expert' interface
provided by the SparseSolverState class. Use the SparseSolver API if you
need advanced functions like providing an initial point, using the
out-of-core API and so on.

INPUT PARAMETERS:
    A       -   sparse NxN matrix in any sparse storage format. Using CRS
                format is recommended because it avoids internal
                conversion. An exception will be generated if A is not an
                NxN matrix (where N is a size specified during solver
                object creation).
    B       -   right part, array[N]
    K       -   k parameter for GMRES(k), k>=0. Zero value means that the
                algorithm will choose it automatically.
    EpsF    -   stopping condition, EpsF>=0. The algorithm will stop when
                the residual decreases below EpsF*|B|. Having EpsF=0
                means that this stopping condition is ignored.
    MaxIts  -   stopping condition, MaxIts>=0. The algorithm will stop
                after performing MaxIts iterations. Zero value means no
                limit.

NOTE: having both EpsF=0 and MaxIts=0 means that stopping criteria will
      be chosen automatically.

OUTPUT PARAMETERS:
    X       -   array[N], the solution
    Rep     -   solution report:
                * Rep.TerminationType completion code:
                    * -5    CG method was used for a matrix which is not
                            positive definite
                    * -4    overflow/underflow during solution (ill
                            conditioned problem)
                    *  1    ||residual||<=EpsF*||b||
                    *  5    MaxIts steps were taken
                    *  7    rounding errors prevent further progress, the
                            best point found is returned
                    *  8    the algorithm was terminated early with
                            SparseSolverRequestTermination() being called
                            from another thread
                * Rep.IterationsCount contains iterations count
                * Rep.NMV contains number of matrix-vector calculations
                * Rep.R2 contains squared residual

  -- ALGLIB --
     Copyright 25.09.2021 by Bochkanov Sergey
*************************************************************************/
void sparsesolvegmres(const sparsematrix &a, const real_1d_array &b, const ae_int_t k, const double epsf, const ae_int_t maxits, real_1d_array &x, sparsesolverreport &rep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function initializes a sparse linear iterative solver object.

This solver can be used to solve nonsymmetric and symmetric positive
definite NxN (square) linear systems.

The solver provides an 'expert' API which allows advanced control over
algorithms being used, including the ability to get a progress report,
terminate a long-running solver from another thread, perform out-of-core
solution and so on.

NOTE: there are also convenience functions that allow quick one-line
      access to the solvers:
      * SparseSolveCG() to solve SPD linear systems
      * SparseSolveGMRES() to solve unsymmetric linear systems.

NOTE: if you want to solve an MxN (rectangular) linear problem you may
      use the LinLSQR solver provided by ALGLIB.

USAGE (A is given by the SparseMatrix structure):

1. User initializes algorithm state with a SparseSolverCreate() call
2. User selects the algorithm with one of the SparseSolverSetAlgo???()
   functions. By default, GMRES(k) is used with automatically chosen k
3. Optionally, user tunes solver parameters, sets starting point, etc.
4. Depending on whether the system is symmetric or not, user calls:
   * SparseSolverSolveSymmetric() for a symmetric system given by its
     lower or upper triangle
   * SparseSolverSolve() for a nonsymmetric system or a symmetric one
     given by the full matrix
5. User calls SparseSolverResults() to get the solution

It is possible to call SparseSolverSolve???() again to solve another task
with the same dimensionality but different matrix and/or right part
without reinitializing the SparseSolverState structure.

USAGE (out-of-core mode):

1. User initializes algorithm state with a SparseSolverCreate() call
2. User selects the algorithm with one of the SparseSolverSetAlgo???()
   functions. By default, GMRES(k) is used with automatically chosen k
3. Optionally, user tunes solver parameters, sets starting point, etc.
4. After that, user should work with the out-of-core interface in a loop
   like the one given below:

   > alglib.sparsesolveroocstart(state)
   > while alglib.sparsesolverooccontinue(state) do
   >     alglib.sparsesolveroocgetrequestinfo(state, out RequestType)
   >     alglib.sparsesolveroocgetrequestdata(state, out X)
   >     if RequestType=0 then
   >         [calculate  Y=A*X, with X=R^N]
   >     alglib.sparsesolveroocsendresult(state, in Y)
   > alglib.sparsesolveroocstop(state, out X, out Report)

INPUT PARAMETERS:
    N       -   problem dimensionality (fixed at start-up)

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

  -- ALGLIB --
     Copyright 24.09.2021 by Bochkanov Sergey
*************************************************************************/
void sparsesolvercreate(const ae_int_t n, sparsesolverstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function performs iterative solution of the linear system in the
out-of-core mode. It should be used in conjunction with other
out-of-core-related functions of this subpackage in a loop like the one
given below:

> alglib.sparsesolveroocstart(state)
> while alglib.sparsesolverooccontinue(state) do
>     alglib.sparsesolveroocgetrequestinfo(state, out RequestType)
>     alglib.sparsesolveroocgetrequestdata(state, out X)
>     if RequestType=0 then
>         [calculate  Y=A*X, with X=R^N]
>     alglib.sparsesolveroocsendresult(state, in Y)
> alglib.sparsesolveroocstop(state, out X, out Report)

  -- ALGLIB --
     Copyright 24.09.2021 by Bochkanov Sergey
*************************************************************************/
bool sparsesolverooccontinue(sparsesolverstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function is used to retrieve the vector associated with the
out-of-core request sent by the solver to user code. Depending on the
request type (returned by SparseSolverOOCGetRequestInfo()) this vector
should be multiplied by A or subjected to another processing.

It should be used in conjunction with other out-of-core-related functions
of this subpackage in a loop like the one given below:

> alglib.sparsesolveroocstart(state)
> while alglib.sparsesolverooccontinue(state) do
>     alglib.sparsesolveroocgetrequestinfo(state, out RequestType)
>     alglib.sparsesolveroocgetrequestdata(state, out X)
>     if RequestType=0 then
>         [calculate  Y=A*X, with X=R^N]
>     alglib.sparsesolveroocsendresult(state, in Y)
> alglib.sparsesolveroocstop(state, out X, out Report)

INPUT PARAMETERS:
    State   -   solver running in the out-of-core mode
    X       -   possibly preallocated storage; reallocated if needed,
                left unchanged if large enough to store request data

OUTPUT PARAMETERS:
    X       -   array[N] or larger, leading N elements are filled with
                the vector X

  -- ALGLIB --
     Copyright 24.09.2021 by Bochkanov Sergey
*************************************************************************/
void sparsesolveroocgetrequestdata(sparsesolverstate &state, real_1d_array &x, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function is used to retrieve the scalar value associated with the
out-of-core request sent by the solver to user code. In the current
ALGLIB version this function is used to retrieve the squared residual
norm during progress reports.

INPUT PARAMETERS:
    State   -   solver running in the out-of-core mode

OUTPUT PARAMETERS:
    V       -   scalar value associated with the current request

  -- ALGLIB --
     Copyright 24.09.2021 by Bochkanov Sergey
*************************************************************************/
void sparsesolveroocgetrequestdata1(sparsesolverstate &state, double &v, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function is used to retrieve information about the out-of-core
request sent by the solver:
* RequestType=0  means that the matrix-vector product A*x is requested
* RequestType=-1 means that the solver reports its progress; this request
  is returned only when reports are activated with SparseSolverSetXRep().

This function returns just the request type; in order to get the contents
of the trial vector, use sparsesolveroocgetrequestdata().

It should be used in conjunction with other out-of-core-related functions
of this subpackage in a loop like the one given below:

> alglib.sparsesolveroocstart(state)
> while alglib.sparsesolverooccontinue(state) do
>     alglib.sparsesolveroocgetrequestinfo(state, out RequestType)
>     alglib.sparsesolveroocgetrequestdata(state, out X)
>     if RequestType=0 then
>         [calculate  Y=A*X, with X=R^N]
>     alglib.sparsesolveroocsendresult(state, in Y)
> alglib.sparsesolveroocstop(state, out X, out Report)

INPUT PARAMETERS:
    State       -   solver running in the out-of-core mode

OUTPUT PARAMETERS:
    RequestType -   type of the request to process:
                    * 0 for matrix-vector product A*x, with A being the
                        NxN system matrix and X being an N-dimensional
                        vector
                    *-1 for location and residual report

  -- ALGLIB --
     Copyright 24.09.2021 by Bochkanov Sergey
*************************************************************************/
void sparsesolveroocgetrequestinfo(sparsesolverstate &state, ae_int_t &requesttype, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function is used to send the user's reply to the out-of-core request
sent by the solver. Usually it is the product A*x for the vector X
returned by the solver.

It should be used in conjunction with other out-of-core-related functions
of this subpackage in a loop like the one given below:

> alglib.sparsesolveroocstart(state)
> while alglib.sparsesolverooccontinue(state) do
>     alglib.sparsesolveroocgetrequestinfo(state, out RequestType)
>     alglib.sparsesolveroocgetrequestdata(state, out X)
>     if RequestType=0 then
>         [calculate  Y=A*X, with X=R^N]
>     alglib.sparsesolveroocsendresult(state, in Y)
> alglib.sparsesolveroocstop(state, out X, out Report)

INPUT PARAMETERS:
    State   -   solver running in the out-of-core mode
    AX      -   array[N] or larger, leading N elements contain A*x

  -- ALGLIB --
     Copyright 24.09.2021 by Bochkanov Sergey
*************************************************************************/
void sparsesolveroocsendresult(sparsesolverstate &state, const real_1d_array &ax, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function initiates the out-of-core mode of the sparse solver. It
should be used in conjunction with other out-of-core-related functions of
this subpackage in a loop like the one given below:

> alglib.sparsesolveroocstart(state)
> while alglib.sparsesolverooccontinue(state) do
>     alglib.sparsesolveroocgetrequestinfo(state, out RequestType)
>     alglib.sparsesolveroocgetrequestdata(state, out X)
>     if RequestType=0 then
>         [calculate  Y=A*X, with X=R^N]
>     alglib.sparsesolveroocsendresult(state, in Y)
> alglib.sparsesolveroocstop(state, out X, out Report)

INPUT PARAMETERS:
    State   -   solver object

  -- ALGLIB --
     Copyright 24.09.2021 by Bochkanov Sergey
*************************************************************************/
void sparsesolveroocstart(sparsesolverstate &state, const real_1d_array &b, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function finalizes the out-of-core mode of the linear solver. It
should be used in conjunction with other out-of-core-related functions of
this subpackage in a loop like the one given below:

> alglib.sparsesolveroocstart(state)
> while alglib.sparsesolverooccontinue(state) do
>     alglib.sparsesolveroocgetrequestinfo(state, out RequestType)
>     alglib.sparsesolveroocgetrequestdata(state, out X)
>     if RequestType=0 then
>         [calculate  Y=A*X, with X=R^N]
>     alglib.sparsesolveroocsendresult(state, in Y)
> alglib.sparsesolveroocstop(state, out X, out Report)

INPUT PARAMETERS:
    State   -   solver state

OUTPUT PARAMETERS:
    X       -   array[N], the solution.
                Zero-filled on failure (Rep.TerminationType<0).
    Rep     -   report with additional info:
                * Rep.TerminationType completion code:
                    * -5    CG method was used for a matrix which is not
                            positive definite
                    * -4    overflow/underflow during solution (ill
                            conditioned problem)
                    *  1    ||residual||<=EpsF*||b||
                    *  5    MaxIts steps were taken
                    *  7    rounding errors prevent further progress, the
                            best point found is returned
                    *  8    the algorithm was terminated early with
                            SparseSolverRequestTermination() being called
                            from another thread
                * Rep.IterationsCount contains iterations count
                * Rep.NMV contains number of matrix-vector calculations
                * Rep.R2 contains squared residual

  -- ALGLIB --
     Copyright 24.09.2021 by Bochkanov Sergey
*************************************************************************/
void sparsesolveroocstop(sparsesolverstate &state, real_1d_array &x, sparsesolverreport &rep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This subroutine submits a request for termination of the running solver.
It can be called from some other thread which wants the solver to
terminate, or when processing an out-of-core request.

As a result, the solver stops at the point which was "current accepted"
when the termination request was submitted, and returns error code 8
(successful termination). Such termination is a smooth process which
properly deallocates all temporaries.

INPUT PARAMETERS:
    State   -   solver structure

NOTE: calling this function on a solver which is NOT running will have
      no effect.

NOTE: multiple calls to this function are possible. The first call is
      counted, subsequent calls are silently ignored.

NOTE: the solver clears the termination flag on its start; it means that
      if some other thread requests termination too soon, its request
      will go unnoticed.

  -- ALGLIB --
     Copyright 01.10.2021 by Bochkanov Sergey
*************************************************************************/
void sparsesolverrequesttermination(sparsesolverstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Sparse solver results.

This function must be called after calling one of the SparseSolverSolve()
functions.

INPUT PARAMETERS:
    State   -   algorithm state

OUTPUT PARAMETERS:
    X       -   array[N], solution
    Rep     -   solution report:
                * Rep.TerminationType completion code:
                    * -5    CG method was used for a matrix which is not
                            positive definite
                    * -4    overflow/underflow during solution (ill
                            conditioned problem)
                    *  1    ||residual||<=EpsF*||b||
                    *  5    MaxIts steps were taken
                    *  7    rounding errors prevent further progress, the
                            best point found is returned
                    *  8    the algorithm was terminated early with
                            SparseSolverRequestTermination() being called
                            from another thread
                * Rep.IterationsCount contains iterations count
                * Rep.NMV contains number of matrix-vector calculations
                * Rep.R2 contains squared residual

  -- ALGLIB --
     Copyright 14.11.2011 by Bochkanov Sergey
*************************************************************************/
void sparsesolverresults(sparsesolverstate &state, real_1d_array &x, sparsesolverreport &rep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets the solver algorithm to GMRES(k).

NOTE: if you do not need the advanced functionality of the SparseSolver
      API, you may use the convenience functions SparseSolveGMRES() and
      SparseSolveSymmetricGMRES().

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    K       -   GMRES parameter, K>=0:
                * recommended values are in the 10..100 range
                * larger values up to N are possible but make little
                  sense - the algorithm will be slower than any dense
                  solver
                * values above N are truncated down to N
                * zero value means that the default value is chosen. This
                  value is 50 in the current version, but it may change
                  in future ALGLIB releases.

  -- ALGLIB --
     Copyright 24.09.2021 by Bochkanov Sergey
*************************************************************************/
void sparsesolversetalgogmres(sparsesolverstate &state, const ae_int_t k, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets stopping criteria.

INPUT PARAMETERS:
    EpsF    -   the algorithm will be stopped if the norm of the residual
                is less than EpsF*||b||
    MaxIts  -   the algorithm will be stopped if the number of iterations
                is more than MaxIts

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

NOTES:
If both EpsF and MaxIts are zero then EpsF will be set to a small value.

  -- ALGLIB --
     Copyright 14.11.2011 by Bochkanov Sergey
*************************************************************************/
void sparsesolversetcond(sparsesolverstate &state, const double epsf, const ae_int_t maxits, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets starting point. By default, zero starting point is used. INPUT PARAMETERS: State - structure which stores algorithm state X - starting point, array[N] OUTPUT PARAMETERS: State - new starting point was set -- ALGLIB -- Copyright 24.09.2021 by Bochkanov Sergey *************************************************************************/
void sparsesolversetstartingpoint(sparsesolverstate &state, const real_1d_array &x, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function turns on/off reporting during out-of-core processing.

When the solver works in the out-of-core mode, it can be configured to
report its progress by returning the current location. These location
reports are implemented as a special kind of out-of-core request:
* SparseSolverOOCGetRequestInfo() returns -1
* SparseSolverOOCGetRequestData() returns the current location
* SparseSolverOOCGetRequestData1() returns the squared norm of the
  residual
* SparseSolverOOCSendResult() shall NOT be called

This function has no effect when SparseSolverSolve() is used because that
function has no method of reporting its progress.

NOTE: when used with GMRES(k), this function reports progress every k-th
      iteration.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    NeedXRep-   whether iteration reports are needed or not

  -- ALGLIB --
     Copyright 01.10.2021 by Bochkanov Sergey
*************************************************************************/
void sparsesolversetxrep(sparsesolverstate &state, const bool needxrep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Procedure for the solution of A*x=b with sparse nonsymmetric A IMPORTANT: this function will work with any solver algorithm being used, whether it is a symmetric solver (like CG) or not. However, using symmetric solvers on nonsymmetric problems is dangerous. It may solve the problem up to desired precision (sometimes, rarely) or terminate with an error code signalling violation of underlying assumptions. INPUT PARAMETERS: State - algorithm state A - sparse NxN matrix in any sparse storage format. Using CRS format is recommended because it avoids internal conversion. An exception will be generated if A is not NxN matrix (where N is a size specified during solver object creation). B - right part, array[N] RESULT: This function returns no result. You can get the solution by calling SparseSolverResults() -- ALGLIB -- Copyright 25.09.2021 by Bochkanov Sergey *************************************************************************/
void sparsesolversolve(sparsesolverstate &state, const sparsematrix &a, const real_1d_array &b, const xparams _xparams = alglib::xdefault);
/************************************************************************* Procedure for the solution of A*x=b with sparse symmetric A given by its lower or upper triangle. This function will work with any solver algorithm being used, SPD one (like CG) or not (like GMRES). Using unsymmetric solvers (like GMRES) on SPD problems is suboptimal, but still possible. NOTE: the solver behavior is ill-defined for a situation when a SPD solver is used on indefinite matrix. It may solve the problem up to desired precision (sometimes, rarely) or return with error code signalling violation of underlying assumptions. INPUT PARAMETERS: State - algorithm state A - sparse symmetric NxN matrix in any sparse storage format. Using CRS format is recommended because it avoids internal conversion. An exception will be generated if A is not NxN matrix (where N is a size specified during solver object creation). IsUpper - whether upper or lower triangle of A is used: * IsUpper=True => only upper triangle is used and lower triangle is not referenced at all * IsUpper=False => only lower triangle is used and upper triangle is not referenced at all B - right part, array[N] RESULT: This function returns no result. You can get the solution by calling SparseSolverResults() -- ALGLIB -- Copyright 25.09.2021 by Bochkanov Sergey *************************************************************************/
void sparsesolversolvesymmetric(sparsesolverstate &state, const sparsematrix &a, const bool isupper, const real_1d_array &b, const xparams _xparams = alglib::xdefault);
/************************************************************************* Solving sparse symmetric linear system A*x=b using GMRES(k) method. Sparse symmetric A is given by its lower or upper triangle. NOTE: use SparseSolveGMRES() to solve system with nonsymmetric A. This function provides convenience API for an 'expert' interface provided by SparseSolverState class. Use SparseSolver API if you need advanced functions like providing initial point, using out-of-core API and so on. INPUT PARAMETERS: A - sparse symmetric NxN matrix in any sparse storage format. Using CRS format is recommended because it avoids internal conversion. An exception will be generated if A is not NxN matrix (where N is a size specified during solver object creation). IsUpper - whether upper or lower triangle of A is used: * IsUpper=True => only upper triangle is used and lower triangle is not referenced at all * IsUpper=False => only lower triangle is used and upper triangle is not referenced at all B - right part, array[N] K - k parameter for GMRES(k), k>=0. Zero value means that algorithm will choose it automatically. EpsF - stopping condition, EpsF>=0. The algorithm will stop when the residual decreases below EpsF*|B|. Having EpsF=0 means that this stopping condition is ignored. MaxIts - stopping condition, MaxIts>=0. The algorithm will stop after performing MaxIts iterations. Zero value means no limit. NOTE: having both EpsF=0 and MaxIts=0 means that stopping criteria will be chosen automatically. OUTPUT PARAMETERS: X - array[N], the solution Rep - solution report: * Rep.TerminationType completion code: * -5 CG method was used for a matrix which is not positive definite * -4 overflow/underflow during solution (ill conditioned problem) * 1 ||residual||<=EpsF*||b|| * 5 MaxIts steps were taken * 7 rounding errors prevent further progress, best point found is returned * 8 the algorithm was terminated early with SparseSolverRequestTermination() being called from another thread. 
* Rep.IterationsCount contains iterations count * Rep.NMV contains number of matrix-vector calculations * Rep.R2 contains squared residual -- ALGLIB -- Copyright 25.09.2021 by Bochkanov Sergey *************************************************************************/
void sparsesolvesymmetricgmres(const sparsematrix &a, const bool isupper, const real_1d_array &b, const ae_int_t k, const double epsf, const ae_int_t maxits, real_1d_array &x, sparsesolverreport &rep, const xparams _xparams = alglib::xdefault);
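As a usage sketch (illustrative only, not one of the manual's numbered examples; it assumes the ALGLIB C++ sources with the solvers package are on your include path), the convenience call above can be driven like this:

```cpp
#include "solvers.h"   // pulls in sparsematrix and the sparse solver API

using namespace alglib;

// Solve a tiny SPD tridiagonal system with GMRES(k). Only the upper
// triangle of A is specified, matching IsUpper=true in the call below.
void gmres_demo()
{
    sparsematrix a;
    sparsecreate(4, 4, a);
    for (int i = 0; i < 4; i++)
    {
        sparseset(a, i, i, 2.0);          // diagonal
        if (i + 1 < 4)
            sparseset(a, i, i + 1, -1.0); // superdiagonal (upper triangle)
    }
    sparseconverttocrs(a);                // CRS avoids internal conversion

    real_1d_array b = "[1,0,0,1]";
    real_1d_array x;
    sparsesolverreport rep;

    // k=0, epsf=0, maxits=0 => all settings are chosen automatically
    sparsesolvesymmetricgmres(a, true, b, 0, 0.0, 0, x, rep);
    // on success rep.terminationtype>0 and x holds the solution
}
```

Check rep.terminationtype against the completion codes listed above before using x.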
jacobianellipticfunctions
/************************************************************************* Jacobian Elliptic Functions Evaluates the Jacobian elliptic functions sn(u|m), cn(u|m), and dn(u|m) of parameter m between 0 and 1, and real argument u. These functions are periodic, with quarter-period on the real axis equal to the complete elliptic integral ellpk(1.0-m). Relation to incomplete elliptic integral: If u = ellik(phi,m), then sn(u|m) = sin(phi), and cn(u|m) = cos(phi). Phi is called the amplitude of u. Computation is by means of the arithmetic-geometric mean algorithm, except when m is within 1e-9 of 0 or 1. In the latter case with m close to 1, the approximation applies only for phi < pi/2. ACCURACY: Tested at random points with u between 0 and 10, m between 0 and 1. Absolute error (* = relative error): arithmetic function # trials peak rms IEEE phi 10000 9.2e-16* 1.4e-16* IEEE sn 50000 4.1e-15 4.6e-16 IEEE cn 40000 3.6e-15 4.4e-16 IEEE dn 10000 1.3e-12 1.8e-14 Peak error observed in consistency check using addition theorem for sn(u+v) was 4e-16 (absolute). Also tested by the above relation to the incomplete elliptic integral. Accuracy deteriorates when u is large. Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 2000 by Stephen L. Moshier *************************************************************************/
void jacobianellipticfunctions(const double u, const double m, double &sn, double &cn, double &dn, double &ph, const xparams _xparams = alglib::xdefault);
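The arithmetic-geometric mean algorithm mentioned above can be sketched in a few lines. The following self-contained illustration of the descending AGM (Landen) scheme is NOT ALGLIB's or Cephes' actual implementation; it is a hypothetical sketch intended for moderate m, and it satisfies the defining identities sn^2+cn^2=1 and dn^2=1-m*sn^2:

```cpp
#include <cmath>
#include <vector>

// Sketch of the AGM/Landen scheme for sn(u|m), cn(u|m), dn(u|m).
// Intended for moderate m (roughly 1e-9 < m < 1-1e-9); the special
// expansions used near m=0 and m=1 are omitted for brevity.
void jacobi_agm(double u, double m, double &sn, double &cn, double &dn)
{
    // m ~ 0 degenerates to circular functions: sn=sin, cn=cos, dn=1
    if (m < 1e-12)
    {
        sn = std::sin(u); cn = std::cos(u); dn = 1.0;
        return;
    }
    // forward AGM iteration: a_{i+1}=(a_i+b_i)/2, b_{i+1}=sqrt(a_i*b_i),
    // c_{i+1}=(a_i-b_i)/2, starting from a_0=1, b_0=sqrt(1-m), c_0=sqrt(m)
    std::vector<double> a(1, 1.0), c(1, std::sqrt(m));
    double b = std::sqrt(1.0 - m);
    int n = 0;
    while (std::fabs(c[n]) > 1e-15 && n < 60)
    {
        a.push_back(0.5*(a[n] + b));
        c.push_back(0.5*(a[n] - b));
        b = std::sqrt(a[n]*b);
        ++n;
    }
    // backward recursion for the amplitude phi:
    // phi_{i-1} = (phi_i + asin((c_i/a_i)*sin(phi_i)))/2, phi_n = 2^n*a_n*u
    double phi  = std::ldexp(1.0, n) * a[n] * u;
    double phi1 = phi;                       // ends up holding phi_1
    for (int i = n; i >= 1; --i)
    {
        phi1 = phi;
        phi  = 0.5*(phi + std::asin(c[i]/a[i]*std::sin(phi)));
    }
    sn = std::sin(phi);                      // sn = sin of the amplitude
    cn = std::cos(phi);
    dn = (n > 0) ? cn/std::cos(phi - phi1) : 1.0;
}
```

ALGLIB's jacobianellipticfunctions() additionally returns the amplitude itself through the ph argument (phi in the sketch above).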
jarqueberatest
/************************************************************************* Jarque-Bera test This test checks the hypothesis that a given sample X is drawn from a normally distributed random variable. Requirements: * the number of elements in the sample is not less than 5. Input parameters: X - sample. Array whose index goes from 0 to N-1. N - size of the sample. N>=5 Output parameters: P - p-value for the test Accuracy of the approximation used (5<=N<=1951): p-value relative error (5<=N<=1951) [1, 0.1] < 1% [0.1, 0.01] < 2% [0.01, 0.001] < 6% [0.001, 0] wasn't measured For N>1951 accuracy wasn't measured but it shouldn't be sharply different from table values. -- ALGLIB -- Copyright 09.04.2007 by Bochkanov Sergey *************************************************************************/
void jarqueberatest(const real_1d_array &x, const ae_int_t n, double &p, const xparams _xparams = alglib::xdefault);
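A minimal usage sketch (illustrative only; assumes the ALGLIB statistics package is on your include path, and the sample values below are arbitrary):

```cpp
#include "statistics.h"

using namespace alglib;

// Test a small sample for normality with the Jarque-Bera test.
// A small p-value is evidence against normality; a large one is not.
void jb_demo()
{
    real_1d_array x = "[0.1,-0.4,0.6,-0.2,0.3,-0.5,0.2,0.0,-0.1,0.4]";
    double p;
    jarqueberatest(x, x.length(), p);
    // compare p against your significance level,
    // e.g. reject the normality hypothesis if p < 0.05
}
```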
knnbuffer
knnbuilder
knnmodel
knnreport
knnallerrors
knnavgce
knnavgerror
knnavgrelerror
knnbuilderbuildknnmodel
knnbuildercreate
knnbuildersetdatasetcls
knnbuildersetdatasetreg
knnbuildersetnorm
knnclassify
knncreatebuffer
knnprocess
knnprocess0
knnprocessi
knnrelclserror
knnrewritekeps
knnrmserror
knnserialize
knntsprocess
knnunserialize
knn_cls Simple classification with KNN model
knn_reg Simple regression with KNN model
/************************************************************************* Buffer object which is used to perform various requests (usually model inference) in the multithreaded mode (multiple threads working with same KNN object). This object should be created with KNNCreateBuffer(). *************************************************************************/
class knnbuffer { public: knnbuffer(); knnbuffer(const knnbuffer &rhs); knnbuffer& operator=(const knnbuffer &rhs); virtual ~knnbuffer(); };
/************************************************************************* A KNN builder object; this object encapsulates dataset and all related settings, it is used to create an actual instance of KNN model. *************************************************************************/
class knnbuilder { public: knnbuilder(); knnbuilder(const knnbuilder &rhs); knnbuilder& operator=(const knnbuilder &rhs); virtual ~knnbuilder(); };
/************************************************************************* KNN model, can be used for classification or regression *************************************************************************/
class knnmodel { public: knnmodel(); knnmodel(const knnmodel &rhs); knnmodel& operator=(const knnmodel &rhs); virtual ~knnmodel(); };
/************************************************************************* KNN training report. Following fields store training set errors: * relclserror - fraction of misclassified cases, [0,1] * avgce - average cross-entropy in bits per symbol * rmserror - root-mean-square error * avgerror - average error * avgrelerror - average relative error For classification problems: * RMS, AVG and AVGREL errors are calculated for posterior probabilities For regression problems: * RELCLS and AVGCE errors are zero *************************************************************************/
class knnreport { public: knnreport(); knnreport(const knnreport &rhs); knnreport& operator=(const knnreport &rhs); virtual ~knnreport(); double relclserror; double avgce; double rmserror; double avgerror; double avgrelerror; };
/************************************************************************* Calculates all kinds of errors for the model in one call. INPUT PARAMETERS: Model - KNN model XY - test set: * one row per point * first NVars columns store independent variables * depending on problem type: * next column stores class number in [0,NClasses) - for classification problems * next NOut columns store dependent variables - for regression problems NPoints - test set size, NPoints>=0 OUTPUT PARAMETERS: Rep - following fields are loaded with errors for both regression and classification models: * rep.rmserror - RMS error for the output * rep.avgerror - average error * rep.avgrelerror - average relative error following fields are set only for classification models, zero for regression ones: * relclserror - relative classification error, in [0,1] * avgce - average cross-entropy in bits per dataset entry NOTE: the cross-entropy metric is too unstable when used to evaluate KNN models (such models can report exactly zero probabilities), so we do not recommend using it. -- ALGLIB -- Copyright 15.02.2019 by Bochkanov Sergey *************************************************************************/
void knnallerrors(const knnmodel &model, const real_2d_array &xy, const ae_int_t npoints, knnreport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Average cross-entropy (in bits per element) on the test set INPUT PARAMETERS: Model - KNN model XY - test set NPoints - test set size RESULT: CrossEntropy/NPoints. Zero if model solves regression task. NOTE: the cross-entropy metric is too unstable when used to evaluate KNN models (such models can report exactly zero probabilities), so we do not recommend using it. NOTE: if you need several different kinds of error metrics, it is better to use knnallerrors() which computes all error metrics with just one pass over the dataset. -- ALGLIB -- Copyright 15.02.2019 by Bochkanov Sergey *************************************************************************/
double knnavgce(const knnmodel &model, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* Average error on the test set Its meaning for regression tasks is obvious. As for classification problems, average error means error when estimating posterior probabilities. INPUT PARAMETERS: Model - KNN model XY - test set NPoints - test set size RESULT: average error NOTE: if you need several different kinds of error metrics, it is better to use knnallerrors() which computes all error metrics with just one pass over the dataset. -- ALGLIB -- Copyright 15.02.2019 by Bochkanov Sergey *************************************************************************/
double knnavgerror(const knnmodel &model, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* Average relative error on the test set Its meaning for regression tasks is obvious. As for classification problems, average relative error means error when estimating posterior probabilities. INPUT PARAMETERS: Model - KNN model XY - test set NPoints - test set size RESULT: average relative error NOTE: if you need several different kinds of error metrics, it is better to use knnallerrors() which computes all error metrics with just one pass over the dataset. -- ALGLIB -- Copyright 15.02.2019 by Bochkanov Sergey *************************************************************************/
double knnavgrelerror(const knnmodel &model, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine builds KNN model according to current settings, using dataset internally stored in the builder object. The model being built performs inference using Eps-approximate K nearest neighbors search algorithm, with: * K=1, Eps=0 corresponding to the "nearest neighbor algorithm" * K>1, Eps=0 corresponding to the "K nearest neighbors algorithm" * K>=1, Eps>0 corresponding to "approximate nearest neighbors algorithm" An approximate KNN is a good option for high-dimensional datasets (exact KNN works slowly when dimensions count grows). An ALGLIB implementation of kd-trees is used to perform k-nn searches. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: S - KNN builder object K - number of neighbors to search for, K>=1 Eps - approximation factor: * Eps=0 means that exact kNN search is performed * Eps>0 means that (1+Eps)-approximate search is performed OUTPUT PARAMETERS: Model - KNN model Rep - report -- ALGLIB -- Copyright 15.02.2019 by Bochkanov Sergey *************************************************************************/
void knnbuilderbuildknnmodel(knnbuilder &s, const ae_int_t k, const double eps, knnmodel &model, knnreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* This subroutine creates KNNBuilder object which is used to train KNN models. By default, new builder stores empty dataset and some reasonable default settings. At the very least, you should specify dataset prior to building KNN model. You can also tweak settings of the model construction algorithm (recommended, although default settings should work well). Following actions are mandatory: * calling knnbuildersetdatasetcls() or knnbuildersetdatasetreg() to specify the dataset * calling knnbuilderbuildknnmodel() to build KNN model using current dataset and default settings Additionally, you may call: * knnbuildersetnorm() to change norm being used INPUT PARAMETERS: none OUTPUT PARAMETERS: S - KNN builder -- ALGLIB -- Copyright 15.02.2019 by Bochkanov Sergey *************************************************************************/
void knnbuildercreate(knnbuilder &s, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* Specifies classification problem (two or more classes are predicted). There also exists "regression" version of this function. This subroutine adds dense dataset to the internal storage of the builder object. Specifying your dataset in the dense format means that the dense version of the KNN construction algorithm will be invoked. INPUT PARAMETERS: S - KNN builder object XY - array[NPoints,NVars+1] (note: actual size can be larger, only leading part is used anyway), dataset: * first NVars elements of each row store values of the independent variables * next element stores class index, in [0,NClasses) NPoints - number of rows in the dataset, NPoints>=1 NVars - number of independent variables, NVars>=1 NClasses - number of classes, NClasses>=2 OUTPUT PARAMETERS: S - KNN builder -- ALGLIB -- Copyright 15.02.2019 by Bochkanov Sergey *************************************************************************/
void knnbuildersetdatasetcls(knnbuilder &s, const real_2d_array &xy, const ae_int_t npoints, const ae_int_t nvars, const ae_int_t nclasses, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* Specifies regression problem (one or more continuous output variables are predicted). There also exists "classification" version of this function. This subroutine adds dense dataset to the internal storage of the builder object. Specifying your dataset in the dense format means that the dense version of the KNN construction algorithm will be invoked. INPUT PARAMETERS: S - KNN builder object XY - array[NPoints,NVars+NOut] (note: actual size can be larger, only leading part is used anyway), dataset: * first NVars elements of each row store values of the independent variables * next NOut elements store values of the dependent variables NPoints - number of rows in the dataset, NPoints>=1 NVars - number of independent variables, NVars>=1 NOut - number of dependent variables, NOut>=1 OUTPUT PARAMETERS: S - KNN builder -- ALGLIB -- Copyright 15.02.2019 by Bochkanov Sergey *************************************************************************/
void knnbuildersetdatasetreg(knnbuilder &s, const real_2d_array &xy, const ae_int_t npoints, const ae_int_t nvars, const ae_int_t nout, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This function sets norm type used for neighbor search. INPUT PARAMETERS: S - KNN builder object NormType - norm type: * 0 inf-norm * 1 1-norm * 2 Euclidean norm (default) OUTPUT PARAMETERS: S - KNN builder -- ALGLIB -- Copyright 15.02.2019 by Bochkanov Sergey *************************************************************************/
void knnbuildersetnorm(knnbuilder &s, const ae_int_t nrmtype, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function returns most probable class number for an input X. It is same as calling knnprocess(model,x,y), then determining i=argmax(y[i]) and returning i. A class number in [0,NOut) range is returned for classification problems, -1 is returned when this function is called for regression problems. IMPORTANT: this function is thread-unsafe and modifies internal structures of the model! You can not use same model object for parallel evaluation from several threads. Use knntsprocess() with independent thread-local buffers, if you need thread-safe evaluation. INPUT PARAMETERS: Model - KNN model X - input vector, array[0..NVars-1]. RESULT: class number, -1 for regression tasks -- ALGLIB -- Copyright 15.02.2019 by Bochkanov Sergey *************************************************************************/
ae_int_t knnclassify(knnmodel &model, const real_1d_array &x, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* This function creates buffer structure which can be used to perform parallel KNN requests. KNN subpackage provides two sets of computing functions - ones which use internal buffer of KNN model (these functions are single-threaded because they use the same buffer, which cannot be shared between threads), and ones which use external buffer. This function is used to initialize external buffer. INPUT PARAMETERS Model - KNN model which is associated with newly created buffer OUTPUT PARAMETERS Buf - external buffer. IMPORTANT: buffer object should be used only with model which was used to initialize buffer. Any attempt to use buffer with different object is dangerous - you may get integrity check failure (exception) because sizes of internal arrays do not fit to dimensions of the model structure. -- ALGLIB -- Copyright 15.02.2019 by Bochkanov Sergey *************************************************************************/
void knncreatebuffer(const knnmodel &model, knnbuffer &buf, const xparams _xparams = alglib::xdefault);
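The buffer-per-thread pattern described above can be sketched as follows (illustrative only; assumes an already trained model and the ALGLIB dataanalysis package on your include path):

```cpp
#include <thread>
#include "dataanalysis.h"

using namespace alglib;

// Evaluate one shared (read-only) KNN model from two threads; each thread
// owns its own knnbuffer, which is what makes knntsprocess() thread-safe.
void parallel_inference(const knnmodel &model,
                        const real_1d_array &x1, const real_1d_array &x2,
                        real_1d_array &y1, real_1d_array &y2)
{
    auto worker = [&model](const real_1d_array &x, real_1d_array &y)
    {
        knnbuffer buf;
        knncreatebuffer(model, buf);     // buffer is tied to this model
        knntsprocess(model, buf, x, y);  // thread-safe with private buffer
    };
    std::thread t1(worker, std::cref(x1), std::ref(y1));
    std::thread t2(worker, std::cref(x2), std::ref(y2));
    t1.join();
    t2.join();
}
```

For many requests per thread, create the buffer once per thread and reuse it across calls rather than reallocating it each time.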
/************************************************************************* Inference using KNN model. See also knnprocess0(), knnprocessi() and knnclassify() for options with a bit more convenient interface. IMPORTANT: this function is thread-unsafe and modifies internal structures of the model! You can not use same model object for parallel evaluation from several threads. Use knntsprocess() with independent thread-local buffers, if you need thread-safe evaluation. INPUT PARAMETERS: Model - KNN model X - input vector, array[0..NVars-1]. Y - possible preallocated buffer. Reused if long enough. OUTPUT PARAMETERS: Y - result. Regression estimate when solving regression task, vector of posterior probabilities for classification task. -- ALGLIB -- Copyright 15.02.2019 by Bochkanov Sergey *************************************************************************/
void knnprocess(knnmodel &model, const real_1d_array &x, real_1d_array &y, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* This function returns first component of the inferred vector (i.e. one with index #0). It is a convenience wrapper for knnprocess() intended for either: * 1-dimensional regression problems * 2-class classification problems In the former case this function returns inference result as scalar, which is definitely more convenient than wrapping it as a vector. In the latter case it returns probability of object belonging to class #0. If you call it for anything different from two cases above, it will work as defined, i.e. return y[0], although it is of less use in such cases. IMPORTANT: this function is thread-unsafe and modifies internal structures of the model! You can not use same model object for parallel evaluation from several threads. Use knntsprocess() with independent thread-local buffers, if you need thread-safe evaluation. INPUT PARAMETERS: Model - KNN model X - input vector, array[0..NVars-1]. RESULT: Y[0] -- ALGLIB -- Copyright 15.02.2019 by Bochkanov Sergey *************************************************************************/
double knnprocess0(knnmodel &model, const real_1d_array &x, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* 'interactive' variant of knnprocess() for languages like Python which support constructs like "y = knnprocessi(model,x)" and interactive mode of the interpreter. This function allocates new array on each call, so it is significantly slower than its 'non-interactive' counterpart, but it is more convenient when you call it from command line. IMPORTANT: this function is thread-unsafe and may modify internal structures of the model! You can not use same model object for parallel evaluation from several threads. Use knntsprocess() with independent thread-local buffers if you need thread-safe evaluation. -- ALGLIB -- Copyright 15.02.2019 by Bochkanov Sergey *************************************************************************/
void knnprocessi(knnmodel &model, const real_1d_array &x, real_1d_array &y, const xparams _xparams = alglib::xdefault);
/************************************************************************* Relative classification error on the test set INPUT PARAMETERS: Model - KNN model XY - test set NPoints - test set size RESULT: fraction of incorrectly classified cases, in [0,1]. Zero if model solves regression task. NOTE: if you need several different kinds of error metrics, it is better to use knnallerrors() which computes all error metrics with just one pass over the dataset. -- ALGLIB -- Copyright 15.02.2019 by Bochkanov Sergey *************************************************************************/
double knnrelclserror(const knnmodel &model, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* Changing search settings of KNN model. K and EPS parameters of KNN (AKNN) search are specified during model construction. However, plain KNN algorithm with Euclidean distance allows you to change them at any moment. NOTE: future versions of KNN model may support advanced versions of KNN, such as NCA or LMNN. It is possible that such algorithms won't allow you to change search settings on the fly. If you call this function for an algorithm which does not support on-the-fly changes, it will throw an exception. INPUT PARAMETERS: Model - KNN model K - K>=1, neighbors count EPS - accuracy of the EPS-approximate NN search. Set to 0.0, if you want to perform "classic" KNN search. Specify larger values if you need to speed-up high-dimensional KNN queries. OUTPUT PARAMETERS: nothing on success, exception on failure -- ALGLIB -- Copyright 15.02.2019 by Bochkanov Sergey *************************************************************************/
void knnrewritekeps(knnmodel &model, const ae_int_t k, const double eps, const xparams _xparams = alglib::xdefault);
/************************************************************************* RMS error on the test set. Its meaning for regression tasks is obvious. As for classification problems, RMS error means error when estimating posterior probabilities. INPUT PARAMETERS: Model - KNN model XY - test set NPoints - test set size RESULT: root mean square error. NOTE: if you need several different kinds of error metrics, it is better to use knnallerrors() which computes all error metrics with just one pass over the dataset. -- ALGLIB -- Copyright 15.02.2019 by Bochkanov Sergey *************************************************************************/
double knnrmserror(const knnmodel &model, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function serializes data structure to string/stream. Important properties of s_out: * it contains alphanumeric characters, dots, underscores, minus signs * these symbols are grouped into words, which are separated by spaces and Windows-style (CR+LF) newlines * although serializer uses spaces and CR+LF as separators, you can replace any separator character by arbitrary combination of spaces, tabs, Windows or Unix newlines. It allows flexible reformatting of the string in case you want to include it into a text or XML file. But you should not insert separators into the middle of the "words" nor should you change the case of letters. * s_out can be freely moved between 32-bit and 64-bit systems, little and big endian machines, and so on. You can serialize structure on 32-bit machine and unserialize it on 64-bit one (or vice versa), or serialize it on SPARC and unserialize on x86. You can also serialize it in C++ version of ALGLIB and unserialize it in C# one, and vice versa. *************************************************************************/
void knnserialize(const knnmodel &obj, std::string &s_out); void knnserialize(const knnmodel &obj, std::ostream &s_out);
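A round-trip sketch (illustrative only; assumes an already trained model and the ALGLIB dataanalysis package on your include path):

```cpp
#include <string>
#include "dataanalysis.h"

using namespace alglib;

// Serialize a trained KNN model to a string (e.g. to store in a file or
// database), then restore an identical, independently usable copy.
void roundtrip(const knnmodel &model)
{
    std::string s;
    knnserialize(model, s);      // portable across OS/bitness/languages

    knnmodel restored;
    knnunserialize(s, restored); // 'restored' now behaves like 'model'
}
```

The stream overloads work the same way; per the notes above, the serialized text may be reformatted (separators replaced by spaces, tabs, or newlines) but the "words" themselves must not be altered.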
/************************************************************************* Thread-safe processing using external buffer for temporaries. This function is thread-safe (i.e. you can use the same KNN model from multiple threads) as long as you use different buffer objects for different threads. INPUT PARAMETERS: Model - KNN model Buf - buffer object, must be allocated specifically for this model with knncreatebuffer(). X - input vector, array[NVars] OUTPUT PARAMETERS: Y - result, array[NOut]. Regression estimate when solving regression task, vector of posterior probabilities for a classification task. -- ALGLIB -- Copyright 15.02.2019 by Bochkanov Sergey *************************************************************************/
void knntsprocess(const knnmodel &model, knnbuffer &buf, const real_1d_array &x, real_1d_array &y, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function unserializes data structure from string/stream. *************************************************************************/
void knnunserialize(const std::string &s_in, knnmodel &obj); void knnunserialize(const std::istream &s_in, knnmodel &obj);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // The very simple classification example: classify points (x,y) in 2D space
        // as ones with x>=0 and ones with x<0 (y is ignored, but our classifier
        // has to figure that out).
        //
        // First, we have to create KNN builder object, load dataset and specify
        // training settings. Our dataset is specified as matrix, which has following
        // format:
        //
        //     x0 y0 class0
        //     x1 y1 class1
        //     x2 y2 class2
        //     ....
        //
        // Here xi and yi can be any values (and in fact you can have any number of
        // independent variables), and classi MUST be integer number in [0,NClasses)
        // range. In our example we denote points with x>=0 as class #0, and
        // ones with negative xi as class #1.
        //
        // NOTE: if you want to solve regression problem, specify dataset in similar
        //       format, but with dependent variable(s) instead of class labels. You
        //       can have dataset with multiple dependent variables, by the way!
        //
        // For the sake of simplicity, our example includes only 4-point dataset and
        // really simple K=1 nearest neighbor search. Industrial problems typically
        // need larger values of K.
        //
        knnbuilder builder;
        ae_int_t nvars = 2;
        ae_int_t nclasses = 2;
        ae_int_t npoints = 4;
        real_2d_array xy = "[[1,1,0],[1,-1,0],[-1,1,1],[-1,-1,1]]";

        knnbuildercreate(builder);
        knnbuildersetdatasetcls(builder, xy, npoints, nvars, nclasses);

        // we build KNN model with k=1 and eps=0 (exact k-nn search is performed)
        ae_int_t k = 1;
        double eps = 0;
        knnmodel model;
        knnreport rep;
        knnbuilderbuildknnmodel(builder, k, eps, model, rep);

        // with such settings (k=1 is used) you can expect zero classification
        // error on training set. Beautiful results, but remember - in real life
        // you do not need zero TRAINING SET error, you need good generalization.

        printf("%.4f\n", double(rep.relclserror)); // EXPECTED: 0.0000

        // now, let's perform some simple processing with knnprocess()
        real_1d_array x = "[+1,0]";
        real_1d_array y = "[]";
        knnprocess(model, x, y);
        printf("%s\n", y.tostring(3).c_str()); // EXPECTED: [+1,0]

        // another option is to use knnprocess0(), which returns just the first
        // component of the output vector y - ideal for regression problems and
        // binary classifiers.
        double y0;
        y0 = knnprocess0(model, x);
        printf("%.3f\n", double(y0)); // EXPECTED: 1.000

        // finally, you can use knnclassify(), which returns the most probable
        // class index (i.e. argmax y[i]).
        ae_int_t i;
        i = knnclassify(model, x);
        printf("%d\n", int(i)); // EXPECTED: 0
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // A very simple regression example: model f(x,y)=x+y
        //
        // First, we have to create KNN builder object, load dataset and specify
        // training settings. Our dataset is specified as matrix, which has following
        // format:
        //
        //     x0 y0 f0
        //     x1 y1 f1
        //     x2 y2 f2
        //     ....
        //
        // Here xi and yi can be any values, and fi is a dependent function value.
        // By the way, with KNN algorithm you can even model functions with multiple
        // dependent variables!
        //
        // NOTE: you can also solve classification problems with KNN models, see
        //       another example for this unit.
        //
        // For the sake of simplicity, our example includes only 4-point dataset and
        // really simple K=1 nearest neighbor search. Industrial problems typically
        // need larger values of K.
        //
        knnbuilder builder;
        ae_int_t nvars = 2;
        ae_int_t nout = 1;
        ae_int_t npoints = 4;
        real_2d_array xy = "[[1,1,+2],[1,-1,0],[-1,1,0],[-1,-1,-2]]";

        knnbuildercreate(builder);
        knnbuildersetdatasetreg(builder, xy, npoints, nvars, nout);

        // we build KNN model with k=1 and eps=0 (exact k-nn search is performed)
        ae_int_t k = 1;
        double eps = 0;
        knnmodel model;
        knnreport rep;
        knnbuilderbuildknnmodel(builder, k, eps, model, rep);

        // with such settings (k=1 is used) you can expect zero RMS error on the
        // training set. Beautiful results, but remember - in real life you do not
        // need zero TRAINING SET error, you need good generalization.

        printf("%.4f\n", double(rep.rmserror)); // EXPECTED: 0.0000

        // now, let's perform some simple processing with knnprocess()
        real_1d_array x = "[+1,+1]";
        real_1d_array y = "[]";
        knnprocess(model, x, y);
        printf("%s\n", y.tostring(3).c_str()); // EXPECTED: [+2]

        // another option is to use knnprocess0(), which returns just the first
        // component of the output vector y - ideal for regression problems and
        // binary classifiers.
        double y0;
        y0 = knnprocess0(model, x);
        printf("%.3f\n", double(y0)); // EXPECTED: 2.000

        // there also exists another convenience function, knnclassify(),
        // but it does not work for regression problems - it always returns -1.
        ae_int_t i;
        i = knnclassify(model, x);
        printf("%d\n", int(i)); // EXPECTED: -1
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

laguerrecalculate
laguerrecoefficients
laguerresum
/*************************************************************************
Calculation of the value of the Laguerre polynomial.

Parameters:
    n   -   degree, n>=0
    x   -   argument

Result:
    the value of the Laguerre polynomial Ln at x
*************************************************************************/
double laguerrecalculate(const ae_int_t n, const double x, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Representation of Ln as C[0] + C[1]*X + ... + C[N]*X^N

Input parameters:
    N   -   polynomial degree, n>=0

Output parameters:
    C   -   coefficients
*************************************************************************/
void laguerrecoefficients(const ae_int_t n, real_1d_array &c, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Summation of Laguerre polynomials using Clenshaw's recurrence formula.

This routine calculates
    c[0]*L0(x) + c[1]*L1(x) + ... + c[N]*LN(x)

Parameters:
    n   -   degree, n>=0
    x   -   argument

Result:
    the value of the sum at x
*************************************************************************/
double laguerresum(const real_1d_array &c, const ae_int_t n, const double x, const xparams _xparams = alglib::xdefault);
fisherlda
fisherldan
/*************************************************************************
Multiclass Fisher LDA

The function finds coefficients of a linear combination which optimally
separates the training set. Most suited for 2-class problems; see
fisherldan() for a variant that returns an N-dimensional basis.

INPUT PARAMETERS:
    XY          -   training set, array[NPoints,NVars+1]. First NVars
                    columns store values of independent variables, the
                    next column stores the class index (from 0 to
                    NClasses-1) which the dataset element belongs to.
                    Fractional values are rounded to the nearest integer.
                    The class index must be in the [0,NClasses-1] range;
                    an exception is generated otherwise.
    NPoints     -   training set size, NPoints>=0
    NVars       -   number of independent variables, NVars>=1
    NClasses    -   number of classes, NClasses>=2

OUTPUT PARAMETERS:
    W           -   linear combination coefficients, array[NVars]

! FREE EDITION OF ALGLIB:
!
! Free Edition of ALGLIB supports following important features for this
! function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual in order to find out how to activate SIMD support
! in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes following important improvements
! of this function:
! * high-performance native backend with same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend you to read 'Working with commercial version' section of
! ALGLIB Reference Manual in order to find out how to use performance-
! related features provided by commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 31.05.2008 by Bochkanov Sergey
*************************************************************************/
void fisherlda(const real_2d_array &xy, const ae_int_t npoints, const ae_int_t nvars, const ae_int_t nclasses, real_1d_array &w, const xparams _xparams = alglib::xdefault);
void fisherlda(const real_2d_array &xy, const ae_int_t nclasses, real_1d_array &w, const xparams _xparams = alglib::xdefault);
/*************************************************************************
N-dimensional multiclass Fisher LDA

The subroutine finds coefficients of linear combinations which optimally
separate the training set into classes. It returns an N-dimensional basis
whose vectors are sorted by quality of training set separation (in
descending order).

INPUT PARAMETERS:
    XY          -   training set, array[NPoints,NVars+1]. First NVars
                    columns store values of independent variables, the
                    next column stores the class index (from 0 to
                    NClasses-1) which the dataset element belongs to.
                    Fractional values are rounded to the nearest integer.
                    The class index must be in the [0,NClasses-1] range;
                    an exception is generated otherwise.
    NPoints     -   training set size, NPoints>=0
    NVars       -   number of independent variables, NVars>=1
    NClasses    -   number of classes, NClasses>=2

OUTPUT PARAMETERS:
    W           -   basis, array[NVars,NVars]; columns of the matrix
                    store basis vectors, sorted by quality of training
                    set separation (in descending order)

! FREE EDITION OF ALGLIB:
!
! Free Edition of ALGLIB supports following important features for this
! function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual in order to find out how to activate SIMD support
! in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes following important improvements
! of this function:
! * high-performance native backend with same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend you to read 'Working with commercial version' section of
! ALGLIB Reference Manual in order to find out how to use performance-
! related features provided by commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 31.05.2008 by Bochkanov Sergey
*************************************************************************/
void fisherldan(const real_2d_array &xy, const ae_int_t npoints, const ae_int_t nvars, const ae_int_t nclasses, real_2d_array &w, const xparams _xparams = alglib::xdefault);
void fisherldan(const real_2d_array &xy, const ae_int_t nclasses, real_2d_array &w, const xparams _xparams = alglib::xdefault);
legendrecalculate
legendrecoefficients
legendresum
/*************************************************************************
Calculation of the value of the Legendre polynomial Pn.

Parameters:
    n   -   degree, n>=0
    x   -   argument

Result:
    the value of the Legendre polynomial Pn at x
*************************************************************************/
double legendrecalculate(const ae_int_t n, const double x, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Representation of Pn as C[0] + C[1]*X + ... + C[N]*X^N

Input parameters:
    N   -   polynomial degree, n>=0

Output parameters:
    C   -   coefficients
*************************************************************************/
void legendrecoefficients(const ae_int_t n, real_1d_array &c, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Summation of Legendre polynomials using Clenshaw's recurrence formula.

This routine calculates
    c[0]*P0(x) + c[1]*P1(x) + ... + c[N]*PN(x)

Parameters:
    n   -   degree, n>=0
    x   -   argument

Result:
    the value of the sum at x
*************************************************************************/
double legendresum(const real_1d_array &c, const ae_int_t n, const double x, const xparams _xparams = alglib::xdefault);
lincgreport
lincgstate
lincgcreate
lincgresults
lincgsetcond
lincgsetprecdiag
lincgsetprecunit
lincgsetrestartfreq
lincgsetrupdatefreq
lincgsetstartingpoint
lincgsetxrep
lincgsolvesparse
lincg_d_1 Solution of sparse linear systems with CG
/*************************************************************************
*************************************************************************/
class lincgreport
{
public:
    lincgreport();
    lincgreport(const lincgreport &rhs);
    lincgreport& operator=(const lincgreport &rhs);
    virtual ~lincgreport();

    ae_int_t iterationscount;
    ae_int_t nmv;
    ae_int_t terminationtype;
    double r2;
};
/*************************************************************************
This object stores the state of the linear CG method. You should use
ALGLIB functions to work with this object. Never try to access its
fields directly!
*************************************************************************/
class lincgstate
{
public:
    lincgstate();
    lincgstate(const lincgstate &rhs);
    lincgstate& operator=(const lincgstate &rhs);
    virtual ~lincgstate();
};
/*************************************************************************
This function initializes the linear CG Solver. This solver is used to
solve symmetric positive definite problems. If you want to solve a
nonsymmetric (or non-positive definite) problem you may use the LinLSQR
solver provided by ALGLIB.

USAGE:
1. User initializes algorithm state with LinCGCreate() call
2. User tunes solver parameters with LinCGSetCond() and other functions
3. Optionally, user sets starting point with LinCGSetStartingPoint()
4. User calls LinCGSolveSparse() function which takes algorithm state
   and SparseMatrix object
5. User calls LinCGResults() to get solution
6. Optionally, user may call LinCGSolveSparse() again to solve another
   problem with different matrix and/or right part without
   reinitializing the LinCGState structure

INPUT PARAMETERS:
    N       -   problem dimension, N>0

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

  -- ALGLIB --
     Copyright 14.11.2011 by Bochkanov Sergey
*************************************************************************/
void lincgcreate(const ae_int_t n, lincgstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
CG-solver: results.

This function must be called after LinCGSolveSparse().

INPUT PARAMETERS:
    State   -   algorithm state

OUTPUT PARAMETERS:
    X       -   array[N], solution
    Rep     -   optimization report:
                * Rep.TerminationType completion code:
                    * -5    input matrix is either not positive definite,
                            too large or too small
                    * -4    overflow/underflow during solution
                            (ill-conditioned problem)
                    *  1    ||residual||<=EpsF*||b||
                    *  5    MaxIts steps were taken
                    *  7    rounding errors prevent further progress,
                            best point found is returned
                * Rep.IterationsCount contains the iteration count
                * Rep.NMV contains the number of matrix-vector products

  -- ALGLIB --
     Copyright 14.11.2011 by Bochkanov Sergey
*************************************************************************/
void lincgresults(const lincgstate &state, real_1d_array &x, lincgreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This function sets stopping criteria.

INPUT PARAMETERS:
    EpsF    -   algorithm will be stopped if norm of residual is less
                than EpsF*||b||
    MaxIts  -   algorithm will be stopped if number of iterations is
                more than MaxIts

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

NOTES:
If both EpsF and MaxIts are zero then EpsF will be set to a small
default value.

  -- ALGLIB --
     Copyright 14.11.2011 by Bochkanov Sergey
*************************************************************************/
void lincgsetcond(lincgstate &state, const double epsf, const ae_int_t maxits, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function changes preconditioning settings of the LinCGSolveSparse()
function. LinCGSolveSparse() will use the diagonal of the system matrix
as preconditioner. This preconditioning mode is active by default.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state

  -- ALGLIB --
     Copyright 19.11.2012 by Bochkanov Sergey
*************************************************************************/
void lincgsetprecdiag(lincgstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function changes preconditioning settings of the LinCGSolveSparse()
function. By default, SolveSparse() uses the diagonal preconditioner,
but if you want to use the solver without preconditioning, you can call
this function, which forces the solver to use the unit matrix for
preconditioning.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state

  -- ALGLIB --
     Copyright 19.11.2012 by Bochkanov Sergey
*************************************************************************/
void lincgsetprecunit(lincgstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets the restart frequency. By default, the algorithm is
restarted after N subsequent iterations.

  -- ALGLIB --
     Copyright 14.11.2011 by Bochkanov Sergey
*************************************************************************/
void lincgsetrestartfreq(lincgstate &state, const ae_int_t srf, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets the frequency of residual recalculations.

The algorithm updates the residual r_k using an iterative formula, but
recalculates it from scratch every 10 iterations. This is done to avoid
accumulation of numerical errors and to stop the algorithm when r_k
starts to grow. Such a low update frequency (1/10) adds very little
overhead, but makes the algorithm a bit more robust against numerical
errors. However, you may change it if you wish.

INPUT PARAMETERS:
    Freq    -   desired update frequency, Freq>=0. Zero value means
                that no updates will be done.

  -- ALGLIB --
     Copyright 14.11.2011 by Bochkanov Sergey
*************************************************************************/
void lincgsetrupdatefreq(lincgstate &state, const ae_int_t freq, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets the starting point. By default, the zero starting
point is used.

INPUT PARAMETERS:
    X       -   starting point, array[N]

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

  -- ALGLIB --
     Copyright 14.11.2011 by Bochkanov Sergey
*************************************************************************/
void lincgsetstartingpoint(lincgstate &state, const real_1d_array &x, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function turns reporting on/off.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    NeedXRep-   whether iteration reports are needed or not

If NeedXRep is True, the algorithm will call the rep() callback function
if one is provided.

  -- ALGLIB --
     Copyright 14.11.2011 by Bochkanov Sergey
*************************************************************************/
void lincgsetxrep(lincgstate &state, const bool needxrep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Procedure for solution of A*x=b with sparse A.

INPUT PARAMETERS:
    State   -   algorithm state
    A       -   sparse matrix in the CRS format (you MUST convert it to
                CRS format by calling the SparseConvertToCRS() function)
    IsUpper -   whether the upper or lower triangle of A is used:
                * IsUpper=True  => only the upper triangle is used and
                                   the lower triangle is not referenced
                                   at all
                * IsUpper=False => only the lower triangle is used and
                                   the upper triangle is not referenced
                                   at all
    B       -   right part, array[N]

RESULT:
    This function returns no result. You can get the solution by calling
    LinCGResults().

NOTE: this function uses lightweight preconditioning - multiplication by
      the inverse of diag(A). If you want, you can turn preconditioning
      off by calling LinCGSetPrecUnit(). However, preconditioning cost
      is low and the preconditioner is very important for solution of
      badly scaled problems.

  -- ALGLIB --
     Copyright 14.11.2011 by Bochkanov Sergey
*************************************************************************/
void lincgsolvesparse(lincgstate &state, const sparsematrix &a, const bool isupper, const real_1d_array &b, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "solvers.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example illustrates solution of sparse linear systems with
        // conjugate gradient method.
        // 
        // Suppose that we have linear system A*x=b with sparse symmetric
        // positive definite A (represented by sparsematrix object)
        //         [ 5 1       ]
        //         [ 1 7 2     ]
        //     A = [   2 8 1   ]
        //         [     1 4 1 ]
        //         [       1 4 ]
        // and right part b
        //     [  7 ]
        //     [ 17 ]
        // b = [ 14 ]
        //     [ 10 ]
        //     [  6 ]
        // and we want to solve this system using sparse linear CG. In order
        // to do so, we have to create left part (sparsematrix object) and
        // right part (dense array).
        //
        // Initially, the sparse matrix is created in the Hash-Table format,
        // which allows easy initialization, but does not allow the matrix
        // to be used in the linear solvers. So after construction you should
        // convert the sparse matrix to the CRS format (the one suited for
        // linear operations).
        //
        // It is important to note that in our example we initialize full
        // matrix A, both lower and upper triangles. However, it is symmetric
        // and sparse solver needs just one half of the matrix. So you may
        // save about half of the space by filling only one of the triangles.
        //
        sparsematrix a;
        sparsecreate(5, 5, a);
        sparseset(a, 0, 0, 5.0);
        sparseset(a, 0, 1, 1.0);
        sparseset(a, 1, 0, 1.0);
        sparseset(a, 1, 1, 7.0);
        sparseset(a, 1, 2, 2.0);
        sparseset(a, 2, 1, 2.0);
        sparseset(a, 2, 2, 8.0);
        sparseset(a, 2, 3, 1.0);
        sparseset(a, 3, 2, 1.0);
        sparseset(a, 3, 3, 4.0);
        sparseset(a, 3, 4, 1.0);
        sparseset(a, 4, 3, 1.0);
        sparseset(a, 4, 4, 4.0);

        //
        // Now our matrix is fully initialized, but we have to do one more
        // step - convert it from Hash-Table format to CRS format (see
        // documentation on sparse matrices for more information about these
        // formats).
        //
        // If you omit this call, ALGLIB will generate exception on the first
        // attempt to use A in linear operations. 
        //
        sparseconverttocrs(a);

        //
        // Initialization of the right part
        //
        real_1d_array b = "[7,17,14,10,6]";

        //
        // Now we have to create linear solver object and to use it for the
        // solution of the linear system.
        //
        // NOTE: lincgsolvesparse() accepts additional parameter which tells
        //       what triangle of the symmetric matrix should be used - upper
        //       or lower. Because we've filled both parts of the matrix, we
        //       can use any part - upper or lower.
        //
        lincgstate s;
        lincgreport rep;
        real_1d_array x;
        lincgcreate(5, s);
        lincgsolvesparse(s, a, true, b);
        lincgresults(s, x, rep);

        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [1.000,2.000,1.000,2.000,1.000]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

linlsqrreport
linlsqrstate
linlsqrcreate
linlsqrcreatebuf
linlsqrpeekiterationscount
linlsqrrequesttermination
linlsqrresults
linlsqrsetcond
linlsqrsetlambdai
linlsqrsetprecdiag
linlsqrsetprecunit
linlsqrsetxrep
linlsqrsolvesparse
linlsqr_d_1 Solution of sparse linear systems with LSQR
/*************************************************************************
*************************************************************************/
class linlsqrreport
{
public:
    linlsqrreport();
    linlsqrreport(const linlsqrreport &rhs);
    linlsqrreport& operator=(const linlsqrreport &rhs);
    virtual ~linlsqrreport();

    ae_int_t iterationscount;
    ae_int_t nmv;
    ae_int_t terminationtype;
};
/*************************************************************************
This object stores the state of the LinLSQR method. You should use
ALGLIB functions to work with this object.
*************************************************************************/
class linlsqrstate
{
public:
    linlsqrstate();
    linlsqrstate(const linlsqrstate &rhs);
    linlsqrstate& operator=(const linlsqrstate &rhs);
    virtual ~linlsqrstate();
};
/*************************************************************************
This function initializes the linear LSQR Solver. This solver is used to
solve non-symmetric (and, possibly, non-square) problems. The least
squares solution is returned for inconsistent systems.

USAGE:
1. User initializes algorithm state with LinLSQRCreate() call
2. User tunes solver parameters with LinLSQRSetCond() and other functions
3. User calls LinLSQRSolveSparse() function which takes algorithm state
   and SparseMatrix object
4. User calls LinLSQRResults() to get solution
5. Optionally, user may call LinLSQRSolveSparse() again to solve another
   problem with different matrix and/or right part without
   reinitializing the LinLSQRState structure

INPUT PARAMETERS:
    M       -   number of rows in A
    N       -   number of variables, N>0

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

NOTE: see also linlsqrcreatebuf() for a version which reuses previously
      allocated space as much as possible.

  -- ALGLIB --
     Copyright 30.11.2011 by Bochkanov Sergey
*************************************************************************/
void linlsqrcreate(const ae_int_t m, const ae_int_t n, linlsqrstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This function initializes the linear LSQR Solver. It provides exactly the
same functionality as linlsqrcreate(), but reuses previously allocated
space as much as possible.

INPUT PARAMETERS:
    M       -   number of rows in A
    N       -   number of variables, N>0

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

  -- ALGLIB --
     Copyright 14.11.2018 by Bochkanov Sergey
*************************************************************************/
void linlsqrcreatebuf(const ae_int_t m, const ae_int_t n, linlsqrstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function is used to peek into the LSQR solver and get the current
iteration counter. You can safely "peek" into the solver from another
thread.

INPUT PARAMETERS:
    S       -   solver object

RESULT:
    iteration counter, in [0,INF)

  -- ALGLIB --
     Copyright 21.05.2018 by Bochkanov Sergey
*************************************************************************/
ae_int_t linlsqrpeekiterationscount(const linlsqrstate &s, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This subroutine submits a request for termination of the running solver.
It can be called from another thread which wants the LSQR solver to
terminate (obviously, the thread running the LSQR solver can not request
termination because it is already busy working on LSQR).

As a result, the solver stops at the point which was "current accepted"
when the termination request was submitted and returns error code 8
(successful termination). Such termination is a smooth process which
properly deallocates all temporaries.

INPUT PARAMETERS:
    State   -   solver structure

NOTE: calling this function on a solver which is NOT running will have
      no effect.

NOTE: multiple calls to this function are possible. The first call is
      counted, subsequent calls are silently ignored.

NOTE: the solver clears the termination flag on its start; it means that
      if some other thread requests termination too soon, its request
      will go unnoticed.

  -- ALGLIB --
     Copyright 08.10.2014 by Bochkanov Sergey
*************************************************************************/
void linlsqrrequesttermination(linlsqrstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
LSQR solver: results.

This function must be called after LinLSQRSolveSparse().

INPUT PARAMETERS:
    State   -   algorithm state

OUTPUT PARAMETERS:
    X       -   array[N], solution
    Rep     -   optimization report:
                * Rep.TerminationType completion code:
                    * 1     ||Rk||<=EpsB*||B||
                    * 4     ||A^T*Rk||/(||A||*||Rk||)<=EpsA
                    * 5     MaxIts steps were taken
                    * 7     rounding errors prevent further progress,
                            X contains best point found so far
                            (sometimes returned on singular systems)
                    * 8     user requested termination via calling
                            linlsqrrequesttermination()
                * Rep.IterationsCount contains the iteration count
                * Rep.NMV contains the number of matrix-vector products

  -- ALGLIB --
     Copyright 30.11.2011 by Bochkanov Sergey
*************************************************************************/
void linlsqrresults(const linlsqrstate &state, real_1d_array &x, linlsqrreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This function sets stopping criteria.

INPUT PARAMETERS:
    EpsA    -   algorithm will be stopped if
                ||A^T*Rk||/(||A||*||Rk||)<=EpsA
    EpsB    -   algorithm will be stopped if ||Rk||<=EpsB*||B||
    MaxIts  -   algorithm will be stopped if number of iterations is
                more than MaxIts

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

NOTE: if EpsA, EpsB and MaxIts are all zero then default values are
      used.

  -- ALGLIB --
     Copyright 30.11.2011 by Bochkanov Sergey
*************************************************************************/
void linlsqrsetcond(linlsqrstate &state, const double epsa, const double epsb, const ae_int_t maxits, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets the optional Tikhonov regularization coefficient.
It is zero by default.

INPUT PARAMETERS:
    LambdaI -   regularization factor, LambdaI>=0

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

  -- ALGLIB --
     Copyright 30.11.2011 by Bochkanov Sergey
*************************************************************************/
void linlsqrsetlambdai(linlsqrstate &state, const double lambdai, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function changes preconditioning settings of the
LinLSQRSolveSparse() function. LinLSQRSolveSparse() will use the diagonal
of the system matrix as preconditioner. This preconditioning mode is
active by default.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state

  -- ALGLIB --
     Copyright 19.11.2012 by Bochkanov Sergey
*************************************************************************/
void linlsqrsetprecdiag(linlsqrstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function changes preconditioning settings of the
LinLSQRSolveSparse() function. By default, SolveSparse() uses the
diagonal preconditioner, but if you want to use the solver without
preconditioning, you can call this function, which forces the solver to
use the unit matrix for preconditioning.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state

  -- ALGLIB --
     Copyright 19.11.2012 by Bochkanov Sergey
*************************************************************************/
void linlsqrsetprecunit(linlsqrstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function turns reporting on/off.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    NeedXRep-   whether iteration reports are needed or not

If NeedXRep is True, the algorithm will call the rep() callback function
if one is provided.

  -- ALGLIB --
     Copyright 30.11.2011 by Bochkanov Sergey
*************************************************************************/
void linlsqrsetxrep(linlsqrstate &state, const bool needxrep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Procedure for the solution of A*x=b with sparse A. INPUT PARAMETERS: State - algorithm state A - sparse M*N matrix in the CRS format (you MUST convert it to CRS format by calling SparseConvertToCRS() function BEFORE you pass it to this function). B - right part, array[M] RESULT: This function returns no result. You can get the solution by calling LinLSQRResults() NOTE: this function uses lightweight preconditioning - multiplication by inverse of diag(A). If you want, you can turn preconditioning off by calling LinLSQRSetPrecUnit(). However, preconditioning cost is low and the preconditioner is very important for the solution of badly scaled problems. -- ALGLIB -- Copyright 30.11.2011 by Bochkanov Sergey *************************************************************************/
void linlsqrsolvesparse(linlsqrstate &state, const sparsematrix &a, const real_1d_array &b, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "solvers.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example illustrates solution of sparse linear least squares problem
        // with LSQR algorithm.
        // 
        // Suppose that we have least squares problem min|A*x-b| with sparse A
        // represented by sparsematrix object
        //         [ 1 1 ]
        //         [ 1 1 ]
        //     A = [ 2 1 ]
        //         [ 1   ]
        //         [   1 ]
        // and right part b
        //     [ 4 ]
        //     [ 2 ]
        // b = [ 4 ]
        //     [ 1 ]
        //     [ 2 ]
        // and we want to solve this system in the least squares sense using
        // LSQR algorithm. In order to do so, we have to create left part
        // (sparsematrix object) and right part (dense array).
        //
        // Initially, the sparse matrix is created in the Hash-Table format,
        // which allows easy initialization, but does not allow the matrix
        // to be used in the linear solvers. So after construction you should
        // convert the sparse matrix to CRS format (the one suited for linear
        // operations).
        //
        sparsematrix a;
        sparsecreate(5, 2, a);
        sparseset(a, 0, 0, 1.0);
        sparseset(a, 0, 1, 1.0);
        sparseset(a, 1, 0, 1.0);
        sparseset(a, 1, 1, 1.0);
        sparseset(a, 2, 0, 2.0);
        sparseset(a, 2, 1, 1.0);
        sparseset(a, 3, 0, 1.0);
        sparseset(a, 4, 1, 1.0);

        //
        // Now our matrix is fully initialized, but we have to do one more
        // step - convert it from Hash-Table format to CRS format (see
        // documentation on sparse matrices for more information about these
        // formats).
        //
        // If you omit this call, ALGLIB will generate an exception on the
        // first attempt to use A in linear operations.
        //
        sparseconverttocrs(a);

        //
        // Initialization of the right part
        //
        real_1d_array b = "[4,2,4,1,2]";

        //
        // Now we have to create linear solver object and to use it for the
        // solution of the linear system.
        //
        linlsqrstate s;
        linlsqrreport rep;
        real_1d_array x;
        linlsqrcreate(5, 2, s);
        linlsqrsolvesparse(s, a, b);
        linlsqrresults(s, x, rep);

        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 4
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [1.000,2.000]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

linearmodel
lrreport
lravgerror
lravgrelerror
lrbuild
lrbuilds
lrbuildz
lrbuildzs
lrpack
lrprocess
lrrmserror
lrunpack
linreg_d_basic Linear regression used to build the very basic model and unpack coefficients
/************************************************************************* Linear model built by the linear regression subroutines of this unit (LRBuild and its variants). Use LRProcess() to compute model values and LRUnpack() to extract coefficients. *************************************************************************/
class linearmodel { public: linearmodel(); linearmodel(const linearmodel &rhs); linearmodel& operator=(const linearmodel &rhs); virtual ~linearmodel(); };
/************************************************************************* LRReport structure contains additional information about linear model: * C - covariance matrix, array[0..NVars,0..NVars]. C[i,j] = Cov(A[i],A[j]) * RMSError - root mean square error on a training set * AvgError - average error on a training set * AvgRelError - average relative error on a training set (excluding observations with zero function value). * CVRMSError - leave-one-out cross-validation estimate of generalization error. Calculated using fast algorithm with O(NVars*NPoints) complexity. * CVAvgError - cross-validation estimate of average error * CVAvgRelError - cross-validation estimate of average relative error All other fields of the structure are intended for internal use and should not be used outside ALGLIB. *************************************************************************/
class lrreport { public: lrreport(); lrreport(const lrreport &rhs); lrreport& operator=(const lrreport &rhs); virtual ~lrreport(); real_2d_array c; double rmserror; double avgerror; double avgrelerror; double cvrmserror; double cvavgerror; double cvavgrelerror; ae_int_t ncvdefects; integer_1d_array cvdefects; };
/************************************************************************* Average error on the test set INPUT PARAMETERS: LM - linear model XY - test set NPoints - test set size RESULT: average error. -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/
double lravgerror(const linearmodel &lm, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* Average relative error on the test set INPUT PARAMETERS: LM - linear model XY - test set NPoints - test set size RESULT: average relative error. -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/
double lravgrelerror(const linearmodel &lm, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* Linear regression The subroutine builds the model: Y = A(0)*X[0] + ... + A(N-1)*X[N-1] + A(N) and returns the model in ALGLIB format, the covariance matrix, training set errors (RMS, average, average relative) and a leave-one-out cross-validation estimate of the generalization error. The CV estimate is calculated using a fast algorithm with O(NPoints*NVars) complexity. When the covariance matrix is calculated, standard deviations of function values are assumed to be equal to the RMS error on the training set. INPUT PARAMETERS: XY - training set, array [0..NPoints-1,0..NVars]: * NVars columns - independent variables * last column - dependent variable NPoints - training set size, NPoints>NVars+1. An exception is generated otherwise. NVars - number of independent variables OUTPUT PARAMETERS: LM - linear model in the ALGLIB format. Use subroutines of this unit to work with the model. Rep - additional results, see comments on LRReport structure. -- ALGLIB -- Copyright 02.08.2008 by Bochkanov Sergey *************************************************************************/
void lrbuild(const real_2d_array &xy, const ae_int_t npoints, const ae_int_t nvars, linearmodel &lm, lrreport &rep, const xparams _xparams = alglib::xdefault); void lrbuild(const real_2d_array &xy, linearmodel &lm, lrreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* Linear regression Variant of LRBuild which uses a vector of standard deviations (errors in function values). INPUT PARAMETERS: XY - training set, array [0..NPoints-1,0..NVars]: * NVars columns - independent variables * last column - dependent variable S - standard deviations (errors in function values) array[NPoints], S[i]>0. NPoints - training set size, NPoints>NVars+1 NVars - number of independent variables OUTPUT PARAMETERS: LM - linear model in the ALGLIB format. Use subroutines of this unit to work with the model. Rep - additional results, see comments on LRReport structure. -- ALGLIB -- Copyright 02.08.2008 by Bochkanov Sergey *************************************************************************/
void lrbuilds(const real_2d_array &xy, const real_1d_array &s, const ae_int_t npoints, const ae_int_t nvars, linearmodel &lm, lrreport &rep, const xparams _xparams = alglib::xdefault); void lrbuilds(const real_2d_array &xy, const real_1d_array &s, linearmodel &lm, lrreport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Like LRBuild but builds model Y = A(0)*X[0] + ... + A(N-1)*X[N-1] i.e. with zero constant term. -- ALGLIB -- Copyright 30.10.2008 by Bochkanov Sergey *************************************************************************/
void lrbuildz(const real_2d_array &xy, const ae_int_t npoints, const ae_int_t nvars, linearmodel &lm, lrreport &rep, const xparams _xparams = alglib::xdefault); void lrbuildz(const real_2d_array &xy, linearmodel &lm, lrreport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Like LRBuildS, but builds model Y = A(0)*X[0] + ... + A(N-1)*X[N-1] i.e. with zero constant term. -- ALGLIB -- Copyright 30.10.2008 by Bochkanov Sergey *************************************************************************/
void lrbuildzs(const real_2d_array &xy, const real_1d_array &s, const ae_int_t npoints, const ae_int_t nvars, linearmodel &lm, lrreport &rep, const xparams _xparams = alglib::xdefault); void lrbuildzs(const real_2d_array &xy, const real_1d_array &s, linearmodel &lm, lrreport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* "Packs" coefficients and creates linear model in ALGLIB format (LRUnpack reversed). INPUT PARAMETERS: V - coefficients, array[0..NVars] NVars - number of independent variables OUTPUT PARAMETERS: LM - linear model. -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/
void lrpack(const real_1d_array &v, const ae_int_t nvars, linearmodel &lm, const xparams _xparams = alglib::xdefault); void lrpack(const real_1d_array &v, linearmodel &lm, const xparams _xparams = alglib::xdefault);
/************************************************************************* Processing INPUT PARAMETERS: LM - linear model X - input vector, array[0..NVars-1]. RESULT: value of the linear model regression estimate -- ALGLIB -- Copyright 03.09.2008 by Bochkanov Sergey *************************************************************************/
double lrprocess(const linearmodel &lm, const real_1d_array &x, const xparams _xparams = alglib::xdefault);
/************************************************************************* RMS error on the test set INPUT PARAMETERS: LM - linear model XY - test set NPoints - test set size RESULT: root mean square error. -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/
double lrrmserror(const linearmodel &lm, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* Unpacks coefficients of linear model. INPUT PARAMETERS: LM - linear model in ALGLIB format OUTPUT PARAMETERS: V - coefficients, array[0..NVars] constant term (intercept) is stored in the V[NVars]. NVars - number of independent variables (one less than number of coefficients) -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/
void lrunpack(const linearmodel &lm, real_1d_array &v, ae_int_t &nvars, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // In this example we demonstrate linear fitting by f(x|a) = a*exp(0.5*x).
        //
        // We have:
        // * xy - matrix of basic function values (exp(0.5*x)) and expected values
        //
        real_2d_array xy = "[[0.606531,1.133719],[0.670320,1.306522],[0.740818,1.504604],[0.818731,1.554663],[0.904837,1.884638],[1.000000,2.072436],[1.105171,2.257285],[1.221403,2.534068],[1.349859,2.622017],[1.491825,2.897713],[1.648721,3.219371]]";
        ae_int_t nvars;
        linearmodel model;
        lrreport rep;
        real_1d_array c;

        lrbuildz(xy, 11, 1, model, rep);
        lrunpack(model, c, nvars);
        printf("%s\n", c.tostring(4).c_str()); // EXPECTED: [1.98650,0.00000]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

logitmodel
mnlreport
mnlavgce
mnlavgerror
mnlavgrelerror
mnlclserror
mnlpack
mnlprocess
mnlprocessi
mnlrelclserror
mnlrmserror
mnltrainh
mnlunpack
/************************************************************************* Multinomial logit model built by MNLTrainH(). Use MNLProcess() to compute posterior probabilities and MNLUnpack() to extract coefficients. *************************************************************************/
class logitmodel { public: logitmodel(); logitmodel(const logitmodel &rhs); logitmodel& operator=(const logitmodel &rhs); virtual ~logitmodel(); };
/************************************************************************* MNLReport structure contains information about training process: * NGrad - number of gradient calculations * NHess - number of Hessian calculations *************************************************************************/
class mnlreport { public: mnlreport(); mnlreport(const mnlreport &rhs); mnlreport& operator=(const mnlreport &rhs); virtual ~mnlreport(); ae_int_t ngrad; ae_int_t nhess; };
/************************************************************************* Average cross-entropy (in bits per element) on the test set INPUT PARAMETERS: LM - logit model XY - test set NPoints - test set size RESULT: CrossEntropy/(NPoints*ln(2)). -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/
double mnlavgce(logitmodel &lm, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* Average error on the test set INPUT PARAMETERS: LM - logit model XY - test set NPoints - test set size RESULT: average error (error when estimating posterior probabilities). -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/
double mnlavgerror(logitmodel &lm, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* Average relative error on the test set INPUT PARAMETERS: LM - logit model XY - test set NPoints - test set size RESULT: average relative error (error when estimating posterior probabilities). -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/
double mnlavgrelerror(logitmodel &lm, const real_2d_array &xy, const ae_int_t ssize, const xparams _xparams = alglib::xdefault);
/************************************************************************* Classification error on test set = MNLRelClsError*NPoints -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/
ae_int_t mnlclserror(logitmodel &lm, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* "Packs" coefficients and creates logit model in ALGLIB format (MNLUnpack reversed). INPUT PARAMETERS: A - model (see MNLUnpack) NVars - number of independent variables NClasses - number of classes OUTPUT PARAMETERS: LM - logit model. -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/
void mnlpack(const real_2d_array &a, const ae_int_t nvars, const ae_int_t nclasses, logitmodel &lm, const xparams _xparams = alglib::xdefault);
/************************************************************************* Processing INPUT PARAMETERS: LM - logit model, passed by non-constant reference (some fields of the structure are used as temporaries when calculating model output). X - input vector, array[0..NVars-1]. Y - (possibly) preallocated buffer; if the size of Y is less than NClasses, it will be reallocated. If it is large enough, it is NOT reallocated, so we can save some time on reallocation. OUTPUT PARAMETERS: Y - result, array[0..NClasses-1] Vector of posterior probabilities for classification task. -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/
void mnlprocess(logitmodel &lm, const real_1d_array &x, real_1d_array &y, const xparams _xparams = alglib::xdefault);
/************************************************************************* 'interactive' variant of MNLProcess for languages like Python which support constructs like "Y = MNLProcess(LM,X)" and interactive mode of the interpreter. This function allocates a new array on each call, so it is significantly slower than its 'non-interactive' counterpart, but it is more convenient when you call it from the command line. -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/
void mnlprocessi(logitmodel &lm, const real_1d_array &x, real_1d_array &y, const xparams _xparams = alglib::xdefault);
/************************************************************************* Relative classification error on the test set INPUT PARAMETERS: LM - logit model XY - test set NPoints - test set size RESULT: percent of incorrectly classified cases. -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/
double mnlrelclserror(logitmodel &lm, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* RMS error on the test set INPUT PARAMETERS: LM - logit model XY - test set NPoints - test set size RESULT: root mean square error (error when estimating posterior probabilities). -- ALGLIB -- Copyright 30.08.2008 by Bochkanov Sergey *************************************************************************/
double mnlrmserror(logitmodel &lm, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine trains the logit model. INPUT PARAMETERS: XY - training set, array[0..NPoints-1,0..NVars] First NVars columns store values of independent variables, the next column stores the class number (from 0 to NClasses-1) which the dataset element belongs to. Fractional values are rounded to the nearest integer. NPoints - training set size, NPoints>=1 NVars - number of independent variables, NVars>=1 NClasses - number of classes, NClasses>=2 OUTPUT PARAMETERS: Info - return code: * -2, if there is a point with class number outside of [0..NClasses-1]. * -1, if incorrect parameters were passed (NPoints<NVars+2, NVars<1, NClasses<2). * 1, if task has been solved LM - model built Rep - training report -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/
void mnltrainh(const real_2d_array &xy, const ae_int_t npoints, const ae_int_t nvars, const ae_int_t nclasses, ae_int_t &info, logitmodel &lm, mnlreport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Unpacks coefficients of the logit model. The logit model has the form: P(class=i) = S(i) / (S(0) + S(1) + ... + S(M-1)) S(i) = Exp(A[i,0]*X[0] + ... + A[i,N-1]*X[N-1] + A[i,N]), when i<M-1 S(M-1) = 1 INPUT PARAMETERS: LM - logit model in ALGLIB format OUTPUT PARAMETERS: A - coefficients, array[0..NClasses-2,0..NVars] NVars - number of independent variables NClasses - number of classes -- ALGLIB -- Copyright 10.09.2008 by Bochkanov Sergey *************************************************************************/
void mnlunpack(const logitmodel &lm, real_2d_array &a, ae_int_t &nvars, ae_int_t &nclasses, const xparams _xparams = alglib::xdefault);
barycentricfitreport
lsfitreport
lsfitstate
polynomialfitreport
barycentricfitfloaterhormann
barycentricfitfloaterhormannwc
logisticcalc4
logisticcalc5
logisticfit4
logisticfit45x
logisticfit4ec
logisticfit5
logisticfit5ec
lsfitcreatef
lsfitcreatefg
lsfitcreatewf
lsfitcreatewfg
lsfitfit
lsfititeration
lsfitlinear
lsfitlinearc
lsfitlinearw
lsfitlinearwc
lsfitresults
lsfitsetbc
lsfitsetcond
lsfitsetgradientcheck
lsfitsetlc
lsfitsetnonmonotonicsteps
lsfitsetnumdiff
lsfitsetscale
lsfitsetstpmax
lsfitsetxrep
lstfitpiecewiselinearrdp
lstfitpiecewiselinearrdpfixed
polynomialfit
polynomialfitwc
spline1dfitcubicwc
spline1dfithermitedeprecated
spline1dfithermitewc
lsfit_d_lin Unconstrained (general) linear least squares fitting with and without weights
lsfit_d_linc Constrained (general) linear least squares fitting with and without weights
lsfit_d_nlf Nonlinear fitting using function value only
lsfit_d_nlfb Bound constrained nonlinear fitting using function value only
lsfit_d_nlfg Nonlinear fitting using gradient
lsfit_d_nlscale Nonlinear fitting with custom scaling and bound constraints
lsfit_d_pol Unconstrained polynomial fitting
lsfit_d_polc Constrained polynomial fitting
lsfit_d_spline Unconstrained fitting by penalized regression spline
lsfit_t_4pl 4-parameter logistic fitting
lsfit_t_5pl 5-parameter logistic fitting
/************************************************************************* Barycentric fitting report: TerminationType completion code: >0 for success, <0 for failure RMSError RMS error AvgError average error AvgRelError average relative error (for non-zero Y[I]) MaxError maximum error TaskRCond reciprocal of task's condition number *************************************************************************/
class barycentricfitreport { public: barycentricfitreport(); barycentricfitreport(const barycentricfitreport &rhs); barycentricfitreport& operator=(const barycentricfitreport &rhs); virtual ~barycentricfitreport(); ae_int_t terminationtype; double taskrcond; ae_int_t dbest; double rmserror; double avgerror; double avgrelerror; double maxerror; };
/************************************************************************* Least squares fitting report. This structure contains informational fields which are set by fitting functions provided by this unit. Different functions initialize different sets of fields, so you should read documentation on specific function you used in order to know which fields are initialized. TerminationType filled by all solvers: * positive values, usually 1, denote success * negative values denote various failure scenarios TaskRCond reciprocal of task's condition number IterationsCount number of internal iterations VarIdx if user-supplied gradient contains errors which were detected by nonlinear fitter, this field is set to index of the first component of gradient which is suspected to be spoiled by bugs. RMSError RMS error AvgError average error AvgRelError average relative error (for non-zero Y[I]) MaxError maximum error WRMSError weighted RMS error CovPar covariance matrix for parameters, filled by some solvers ErrPar vector of errors in parameters, filled by some solvers ErrCurve vector of fit errors - variability of the best-fit curve, filled by some solvers. Noise vector of per-point noise estimates, filled by some solvers. R2 coefficient of determination (non-weighted, non-adjusted), filled by some solvers. *************************************************************************/
class lsfitreport { public: lsfitreport(); lsfitreport(const lsfitreport &rhs); lsfitreport& operator=(const lsfitreport &rhs); virtual ~lsfitreport(); ae_int_t terminationtype; double taskrcond; ae_int_t iterationscount; ae_int_t varidx; double rmserror; double avgerror; double avgrelerror; double maxerror; double wrmserror; real_2d_array covpar; real_1d_array errpar; real_1d_array errcurve; real_1d_array noise; double r2; };
/************************************************************************* Nonlinear fitter. You should use ALGLIB functions to work with fitter. Never try to access its fields directly! *************************************************************************/
class lsfitstate { public: lsfitstate(); lsfitstate(const lsfitstate &rhs); lsfitstate& operator=(const lsfitstate &rhs); virtual ~lsfitstate(); };
/************************************************************************* Polynomial fitting report: TerminationType completion code: >0 for success, <0 for failure TaskRCond reciprocal of task's condition number RMSError RMS error AvgError average error AvgRelError average relative error (for non-zero Y[I]) MaxError maximum error *************************************************************************/
class polynomialfitreport { public: polynomialfitreport(); polynomialfitreport(const polynomialfitreport &rhs); polynomialfitreport& operator=(const polynomialfitreport &rhs); virtual ~polynomialfitreport(); ae_int_t terminationtype; double taskrcond; double rmserror; double avgerror; double avgrelerror; double maxerror; };
/************************************************************************* Rational least squares fitting using Floater-Hormann rational functions with optimal D chosen from [0,9]. An equidistant grid with M nodes on [min(x),max(x)] is used to build basis functions. Different values of D are tried, and the optimal D (least root mean square error) is chosen. The task is linear, so a linear least squares solver is used. Complexity of this computational scheme is O(N*M^2) (mostly dominated by the least squares solver). INPUT PARAMETERS: X - points, array[0..N-1]. Y - function values, array[0..N-1]. N - number of points, N>0. M - number of basis functions ( = number_of_nodes), M>=2. OUTPUT PARAMETERS: B - barycentric interpolant. Rep - fitting report. The following fields are set: * Rep.TerminationType is a completion code, always set to 1 * DBest best value of the D parameter * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED ! FREE EDITION OF ALGLIB: ! ! Free Edition of ALGLIB supports following important features for this ! function: ! * C++ version: x64 SIMD support using C++ intrinsics ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics ! ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB ! Reference Manual in order to find out how to activate SIMD support ! in ALGLIB. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance-related features provided by commercial edition of ALGLIB. -- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/
void barycentricfitfloaterhormann(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, const ae_int_t m, barycentricinterpolant &b, barycentricfitreport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Weighted rational least squares fitting using Floater-Hormann rational functions with optimal D chosen from [0,9], with constraints and individual weights. An equidistant grid with M nodes on [min(x),max(x)] is used to build basis functions. Different values of D are tried, and the optimal D (least WEIGHTED root mean square error) is chosen. The task is linear, so a linear least squares solver is used. Complexity of this computational scheme is O(N*M^2) (mostly dominated by the least squares solver). SEE ALSO * BarycentricFitFloaterHormann(), "lightweight" fitting without individual weights and constraints. INPUT PARAMETERS: X - points, array[0..N-1]. Y - function values, array[0..N-1]. W - weights, array[0..N-1] Each summand in square sum of approximation deviations from given values is multiplied by the square of corresponding weight. Fill it by 1's if you don't want to solve weighted task. N - number of points, N>0. XC - points where function values/derivatives are constrained, array[0..K-1]. YC - values of constraints, array[0..K-1] DC - array[0..K-1], types of constraints: * DC[i]=0 means that S(XC[i])=YC[i] * DC[i]=1 means that S'(XC[i])=YC[i] SEE BELOW FOR IMPORTANT INFORMATION ON CONSTRAINTS K - number of constraints, 0<=K<M. K=0 means no constraints (XC/YC/DC are not used in such cases) M - number of basis functions ( = number_of_nodes), M>=2. OUTPUT PARAMETERS: B - barycentric interpolant. Undefined for rep.terminationtype<0. Rep - fitting report. The following fields are set: * Rep.TerminationType is a completion code: * set to 1 on success * set to -3 on failure due to problematic constraints: either too many constraints, degenerate constraints or inconsistent constraints were passed * DBest best value of the D parameter * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED IMPORTANT: this subroutine doesn't calculate task's condition number for K<>0. SETTING CONSTRAINTS - DANGERS AND OPPORTUNITIES: Setting constraints can lead to undesired results, like ill-conditioned behavior, or inconsistency being detected. On the other hand, it allows us to improve quality of the fit. Here we summarize our experience with constrained barycentric interpolants: * excessive constraints can be inconsistent. Floater-Hormann basis functions aren't as flexible as splines (although they are very smooth). * the more evenly constraints are spread across [min(x),max(x)], the more chances that they will be consistent * the greater is M (given fixed constraints), the more chances that constraints will be consistent * in the general case, consistency of constraints IS NOT GUARANTEED. * in several special cases, however, we CAN guarantee consistency. * one of these cases is constraints on the function VALUES at the interval boundaries. Note that consistency of the constraints on the function DERIVATIVES is NOT guaranteed (you can use in such cases cubic splines which are more flexible). * another special case is ONE constraint on the function value (OR, but not AND, derivative) anywhere in the interval Our final recommendation is to use constraints WHEN AND ONLY WHEN you can't solve your task without them. Anything beyond special cases given above is not guaranteed and may result in inconsistency. ! FREE EDITION OF ALGLIB: ! ! Free Edition of ALGLIB supports following important features for this ! function: ! * C++ version: x64 SIMD support using C++ intrinsics ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics ! ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB ! Reference Manual in order to find out how to activate SIMD support ! in ALGLIB. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance-related features provided by commercial edition of ALGLIB. -- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/
void barycentricfitfloaterhormannwc(const real_1d_array &x, const real_1d_array &y, const real_1d_array &w, const ae_int_t n, const real_1d_array &xc, const real_1d_array &yc, const integer_1d_array &dc, const ae_int_t k, const ae_int_t m, barycentricinterpolant &b, barycentricfitreport &rep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function calculates value of four-parameter logistic (4PL) model at
specified point X. 4PL model has following form:

    F(x|A,B,C,D) = D+(A-D)/(1+Power(x/C,B))

INPUT PARAMETERS:
    X       -   current point, X>=0:
                * zero X is correctly handled even for B<=0
                * negative X results in exception.
    A, B, C, D- parameters of 4PL model:
                * A is unconstrained
                * B is unconstrained; zero or negative values are handled
                  correctly.
                * C>0, non-positive value results in exception
                * D is unconstrained

RESULT:
    model value at X

NOTE: if B=0, denominator is assumed to be equal to 2.0 even for zero X
      (strictly speaking, 0^0 is undefined).

NOTE: this function also throws exception if all input parameters are
      correct, but overflow was detected during calculations.

NOTE: this function performs a lot of checks; if you need really high
      performance, consider evaluating model yourself, without checking
      for degenerate cases.

  -- ALGLIB PROJECT --
     Copyright 14.05.2014 by Bochkanov Sergey
*************************************************************************/
double logisticcalc4(const double x, const double a, const double b, const double c, const double d, const xparams _xparams = alglib::xdefault);
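For readers who take the last note and evaluate the model themselves, the 4PL formula together with its documented B=0 convention can be sketched in plain C++. This is a hypothetical stand-alone helper, not the ALGLIB implementation (which additionally guards against overflow):

```cpp
#include <cmath>
#include <stdexcept>

// Sketch of F(x|A,B,C,D) = D+(A-D)/(1+Power(x/C,B)) with the documented
// special case: for B=0 the denominator is taken to be exactly 2.0.
// Hypothetical helper, not part of the ALGLIB API.
double eval4pl(double x, double a, double b, double c, double d)
{
    if (x < 0.0 || c <= 0.0)
        throw std::invalid_argument("eval4pl: need X>=0 and C>0");
    double denom = (b == 0.0) ? 2.0 : 1.0 + std::pow(x / c, b);
    return d + (a - d) / denom;
}
```

For example, eval4pl(1.0, 1.0, 1.0, 1.0, 0.0) returns 0.5, since the denominator is 1+(1/1)^1 = 2.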


/*************************************************************************
This function calculates value of five-parameter logistic (5PL) model at
specified point X. 5PL model has following form:

    F(x|A,B,C,D,G) = D+(A-D)/Power(1+Power(x/C,B),G)

INPUT PARAMETERS:
    X       -   current point, X>=0:
                * zero X is correctly handled even for B<=0
                * negative X results in exception.
    A, B, C, D, G- parameters of 5PL model:
                * A is unconstrained
                * B is unconstrained; zero or negative values are handled
                  correctly.
                * C>0, non-positive value results in exception
                * D is unconstrained
                * G>0, non-positive value results in exception

RESULT:
    model value at X

NOTE: if B=0, denominator is assumed to be equal to Power(2.0,G) even
      for zero X (strictly speaking, 0^0 is undefined).

NOTE: this function also throws exception if all input parameters are
      correct, but overflow was detected during calculations.

NOTE: this function performs a lot of checks; if you need really high
      performance, consider evaluating model yourself, without checking
      for degenerate cases.

  -- ALGLIB PROJECT --
     Copyright 14.05.2014 by Bochkanov Sergey
*************************************************************************/
double logisticcalc5(const double x, const double a, const double b, const double c, const double d, const double g, const xparams _xparams = alglib::xdefault);
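The 5PL formula with its documented B=0 convention (denominator fixed at Power(2.0,G)) can be sketched the same way; eval5pl below is a hypothetical stand-alone helper, not the ALGLIB implementation:

```cpp
#include <cmath>
#include <stdexcept>

// Sketch of F(x|A,B,C,D,G) = D+(A-D)/Power(1+Power(x/C,B),G) with the
// documented B=0 convention: the denominator becomes Power(2.0,G).
// Hypothetical helper, not part of the ALGLIB API.
double eval5pl(double x, double a, double b, double c, double d, double g)
{
    if (x < 0.0 || c <= 0.0 || g <= 0.0)
        throw std::invalid_argument("eval5pl: need X>=0, C>0 and G>0");
    double denom = (b == 0.0) ? std::pow(2.0, g)
                              : std::pow(1.0 + std::pow(x / c, b), g);
    return d + (a - d) / denom;
}
```

With G=1 the function reduces to the 4PL model, which is how the two models are related.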


/*************************************************************************
This function fits four-parameter logistic (4PL) model to data provided
by user. 4PL model has following form:

    F(x|A,B,C,D) = D+(A-D)/(1+Power(x/C,B))

Here:
* A, D - unconstrained (see LogisticFit4EC() for constrained 4PL)
* B>=0
* C>0

IMPORTANT: output of this function is constrained in such way that B>0.
           Because 4PL model is symmetric with respect to B, there is no
           need to explore B<0. Constraining B makes algorithm easier to
           stabilize and debug. Users who for some reason prefer to work
           with negative B's should transform output themselves (swap A
           and D, replace B by -B).

4PL fitting is implemented as follows:
* we perform small number of restarts from random locations which helps
  to solve problem of bad local extrema. Locations are only partially
  random - we use input data to determine good initial guess, but we
  include controlled amount of randomness.
* we perform Levenberg-Marquardt fitting with very tight constraints on
  parameters B and C - it allows us to find good initial guess for the
  second stage without risk of running into "flat spot".
* second Levenberg-Marquardt round is performed without excessive
  constraints. Results from the previous round are used as initial
  guess.
* after fitting is done, we compare results with best values found so
  far, rewrite "best solution" if needed, and move to next random
  location.

Overall algorithm is very stable and is not prone to bad local extrema.
Furthermore, it automatically scales when input data have very large or
very small range.

INPUT PARAMETERS:
    X       -   array[N], stores X-values. MUST include only non-negative
                numbers (but may include zero values). Can be unsorted.
    Y       -   array[N], values to fit.
    N       -   number of points. If N is less than length of X/Y, only
                leading N elements are used.

OUTPUT PARAMETERS:
    A, B, C, D- parameters of 4PL model
    Rep     -   fitting report. This structure has many fields, but ONLY
                ONES LISTED BELOW ARE SET:
                * Rep.IterationsCount - number of iterations performed
                * Rep.RMSError - root-mean-square error
                * Rep.AvgError - average absolute error
                * Rep.AvgRelError - average relative error (calculated
                  for non-zero Y-values)
                * Rep.MaxError - maximum absolute error
                * Rep.R2 - coefficient of determination, R-squared. This
                  coefficient is calculated as R2=1-RSS/TSS (in case of
                  nonlinear regression there are multiple ways to define
                  R2, each of them giving different results).

NOTE: for stability reasons the B parameter is restricted by
      [1/1000,1000] range. It prevents algorithm from making trial steps
      deep into the area of bad parameters.

NOTE: after you obtained coefficients, you can evaluate model with
      LogisticCalc4() function.

NOTE: if you need better control over fitting process than provided by
      this function, you may use LogisticFit45X().

NOTE: step is automatically scaled according to scale of parameters
      being fitted before we compare its length with EpsX. Thus, this
      function can be used to fit data with very small or very large
      values without changing EpsX.

  -- ALGLIB PROJECT --
     Copyright 14.02.2014 by Bochkanov Sergey
*************************************************************************/
void logisticfit4(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, double &a, double &b, double &c, double &d, lsfitreport &rep, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This is "expert" 4PL/5PL fitting function, which can be used if you need
better control over fitting process than provided by LogisticFit4() or
LogisticFit5().

This function fits model of the form

    F(x|A,B,C,D)   = D+(A-D)/(1+Power(x/C,B))           (4PL model)

or

    F(x|A,B,C,D,G) = D+(A-D)/Power(1+Power(x/C,B),G)    (5PL model)

Here:
* A, D - unconstrained
* B>=0 for 4PL, unconstrained for 5PL
* C>0
* G>0 (if present)

INPUT PARAMETERS:
    X       -   array[N], stores X-values. MUST include only non-negative
                numbers (but may include zero values). Can be unsorted.
    Y       -   array[N], values to fit.
    N       -   number of points. If N is less than length of X/Y, only
                leading N elements are used.
    CnstrLeft-  optional equality constraint for model value at the left
                boundary (at X=0). Specify NAN (Not-a-Number) if you do
                not need constraint on the model value at X=0 (in C++
                you can pass alglib::fp_nan as parameter, in C# it will
                be Double.NaN). See below, section "EQUALITY CONSTRAINTS"
                for more information about constraints.
    CnstrRight- optional equality constraint for model value at
                X=infinity. Specify NAN (Not-a-Number) if you do not
                need constraint on the model value (in C++ you can pass
                alglib::fp_nan as parameter, in C# it will be
                Double.NaN). See below, section "EQUALITY CONSTRAINTS"
                for more information about constraints.
    Is4PL   -   whether 4PL or 5PL models are fitted
    LambdaV -   regularization coefficient, LambdaV>=0. Set it to zero
                unless you know what you are doing.
    EpsX    -   stopping condition (step size), EpsX>=0. Zero value
                means that small step is automatically chosen. See notes
                below for more information.
    RsCnt   -   number of repeated restarts from random points. 4PL/5PL
                models are prone to problem of bad local extrema.
                Utilizing multiple random restarts allows us to improve
                algorithm convergence. RsCnt>=0. Zero value means that
                function automatically chooses small amount of restarts
                (recommended).

OUTPUT PARAMETERS:
    A, B, C, D- parameters of 4PL model
    G       -   parameter of 5PL model; for Is4PL=True, G=1 is returned.
    Rep     -   fitting report. This structure has many fields, but ONLY
                ONES LISTED BELOW ARE SET:
                * Rep.IterationsCount - number of iterations performed
                * Rep.RMSError - root-mean-square error
                * Rep.AvgError - average absolute error
                * Rep.AvgRelError - average relative error (calculated
                  for non-zero Y-values)
                * Rep.MaxError - maximum absolute error
                * Rep.R2 - coefficient of determination, R-squared. This
                  coefficient is calculated as R2=1-RSS/TSS (in case of
                  nonlinear regression there are multiple ways to define
                  R2, each of them giving different results).

NOTE: for better stability B parameter is restricted by [+-1/1000,+-1000]
      range, and G is restricted by [1/10,10] range. It prevents
      algorithm from making trial steps deep into the area of bad
      parameters.

NOTE: after you obtained coefficients, you can evaluate model with
      LogisticCalc5() function.

NOTE: step is automatically scaled according to scale of parameters
      being fitted before we compare its length with EpsX. Thus, this
      function can be used to fit data with very small or very large
      values without changing EpsX.

EQUALITY CONSTRAINTS ON PARAMETERS

4PL/5PL solver supports equality constraints on model values at the left
boundary (X=0) and right boundary (X=infinity). These constraints are
completely optional and you can specify both of them, only one - or no
constraints at all.

Parameter CnstrLeft contains left constraint (or NAN for unconstrained
fitting), and CnstrRight contains right one.

For 4PL, left constraint ALWAYS corresponds to parameter A, and right
one is ALWAYS constraint on D. That's because 4PL model is normalized in
such way that B>=0.

For 5PL model things are different. Unlike 4PL one, 5PL model is NOT
symmetric with respect to change in sign of B. Thus, negative B's are
possible, and left constraint may constrain parameter A (for positive
B's) - or parameter D (for negative B's). The meaning of the right
constraint changes similarly.

You do not have to decide which parameter to constrain - the algorithm
automatically determines the correct parameters as fitting progresses.
However, the question highlighted above is important when you interpret
fitting results.

  -- ALGLIB PROJECT --
     Copyright 14.02.2014 by Bochkanov Sergey
*************************************************************************/
void logisticfit45x(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, const double cnstrleft, const double cnstrright, const bool is4pl, const double lambdav, const double epsx, const ae_int_t rscnt, double &a, double &b, double &c, double &d, double &g, lsfitreport &rep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function fits four-parameter logistic (4PL) model to data provided
by user, with optional constraints on parameters A and D. 4PL model has
following form:

    F(x|A,B,C,D) = D+(A-D)/(1+Power(x/C,B))

Here:
* A, D - with optional equality constraints
* B>=0
* C>0

IMPORTANT: output of this function is constrained in such way that B>0.
           Because 4PL model is symmetric with respect to B, there is no
           need to explore B<0. Constraining B makes algorithm easier to
           stabilize and debug. Users who for some reason prefer to work
           with negative B's should transform output themselves (swap A
           and D, replace B by -B).

4PL fitting is implemented as follows:
* we perform small number of restarts from random locations which helps
  to solve problem of bad local extrema. Locations are only partially
  random - we use input data to determine good initial guess, but we
  include controlled amount of randomness.
* we perform Levenberg-Marquardt fitting with very tight constraints on
  parameters B and C - it allows us to find good initial guess for the
  second stage without risk of running into "flat spot".
* second Levenberg-Marquardt round is performed without excessive
  constraints. Results from the previous round are used as initial
  guess.
* after fitting is done, we compare results with best values found so
  far, rewrite "best solution" if needed, and move to next random
  location.

Overall algorithm is very stable and is not prone to bad local extrema.
Furthermore, it automatically scales when input data have very large or
very small range.

INPUT PARAMETERS:
    X       -   array[N], stores X-values. MUST include only non-negative
                numbers (but may include zero values). Can be unsorted.
    Y       -   array[N], values to fit.
    N       -   number of points. If N is less than length of X/Y, only
                leading N elements are used.
    CnstrLeft-  optional equality constraint for model value at the left
                boundary (at X=0). Specify NAN (Not-a-Number) if you do
                not need constraint on the model value at X=0 (in C++
                you can pass alglib::fp_nan as parameter, in C# it will
                be Double.NaN). See below, section "EQUALITY CONSTRAINTS"
                for more information about constraints.
    CnstrRight- optional equality constraint for model value at
                X=infinity. Specify NAN (Not-a-Number) if you do not
                need constraint on the model value (in C++ you can pass
                alglib::fp_nan as parameter, in C# it will be
                Double.NaN). See below, section "EQUALITY CONSTRAINTS"
                for more information about constraints.

OUTPUT PARAMETERS:
    A, B, C, D- parameters of 4PL model
    Rep     -   fitting report. This structure has many fields, but ONLY
                ONES LISTED BELOW ARE SET:
                * Rep.IterationsCount - number of iterations performed
                * Rep.RMSError - root-mean-square error
                * Rep.AvgError - average absolute error
                * Rep.AvgRelError - average relative error (calculated
                  for non-zero Y-values)
                * Rep.MaxError - maximum absolute error
                * Rep.R2 - coefficient of determination, R-squared. This
                  coefficient is calculated as R2=1-RSS/TSS (in case of
                  nonlinear regression there are multiple ways to define
                  R2, each of them giving different results).

NOTE: for stability reasons the B parameter is restricted by
      [1/1000,1000] range. It prevents algorithm from making trial steps
      deep into the area of bad parameters.

NOTE: after you obtained coefficients, you can evaluate model with
      LogisticCalc4() function.

NOTE: if you need better control over fitting process than provided by
      this function, you may use LogisticFit45X().

NOTE: step is automatically scaled according to scale of parameters
      being fitted before we compare its length with EpsX. Thus, this
      function can be used to fit data with very small or very large
      values without changing EpsX.

EQUALITY CONSTRAINTS ON PARAMETERS

4PL/5PL solver supports equality constraints on model values at the left
boundary (X=0) and right boundary (X=infinity). These constraints are
completely optional and you can specify both of them, only one - or no
constraints at all.

Parameter CnstrLeft contains left constraint (or NAN for unconstrained
fitting), and CnstrRight contains right one.

For 4PL, left constraint ALWAYS corresponds to parameter A, and right
one is ALWAYS constraint on D. That's because 4PL model is normalized in
such way that B>=0.

  -- ALGLIB PROJECT --
     Copyright 14.02.2014 by Bochkanov Sergey
*************************************************************************/
void logisticfit4ec(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, const double cnstrleft, const double cnstrright, double &a, double &b, double &c, double &d, lsfitreport &rep, const xparams _xparams = alglib::xdefault);
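The rule that, for 4PL, the left constraint always pins A and the right one always pins D follows directly from the formula: with B>0 the term Power(x/C,B) vanishes at x=0, giving F(0)=A, and grows without bound as x increases, driving F towards D. A small self-contained numeric check (f4pl is a hypothetical helper mirroring the formula above, not part of the ALGLIB API):

```cpp
#include <cmath>

// Numeric illustration of the boundary behavior of 4PL with B>0:
// at x=0, (x/C)^B = 0, so the denominator is 1 and F(0) = A;
// for x -> infinity the denominator grows without bound and F -> D.
// Hypothetical helper, for illustration only.
static double f4pl(double x, double a, double b, double c, double d)
{
    return d + (a - d) / (1.0 + std::pow(x / c, b));
}
```

For instance, with A=7, B=2, C=1, D=-3, f4pl returns exactly 7 at x=0 and approaches -3 for very large x, which is why CnstrLeft and CnstrRight translate to equality constraints on A and D respectively.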
/*************************************************************************
This function fits five-parameter logistic (5PL) model to data provided
by user. 5PL model has following form:

    F(x|A,B,C,D,G) = D+(A-D)/Power(1+Power(x/C,B),G)

Here:
* A, D - unconstrained
* B - unconstrained
* C>0
* G>0

IMPORTANT: unlike in 4PL fitting, output of this function is NOT
           constrained in such way that B is guaranteed to be positive.
           Furthermore, unlike 4PL, 5PL model is NOT symmetric with
           respect to B, so you can NOT transform model to equivalent
           one, with B having desired sign (>0 or <0).

5PL fitting is implemented as follows:
* we perform small number of restarts from random locations which helps
  to solve problem of bad local extrema. Locations are only partially
  random - we use input data to determine good initial guess, but we
  include controlled amount of randomness.
* we perform Levenberg-Marquardt fitting with very tight constraints on
  parameters B and C - it allows us to find good initial guess for the
  second stage without risk of running into "flat spot". Parameter G is
  fixed at G=1.
* second Levenberg-Marquardt round is performed without excessive
  constraints on B and C, but with G still equal to 1. Results from the
  previous round are used as initial guess.
* third Levenberg-Marquardt round relaxes constraints on G and tries two
  different models - one with B>0 and one with B<0.
* after fitting is done, we compare results with best values found so
  far, rewrite "best solution" if needed, and move to next random
  location.

Overall algorithm is very stable and is not prone to bad local extrema.
Furthermore, it automatically scales when input data have very large or
very small range.

INPUT PARAMETERS:
    X       -   array[N], stores X-values. MUST include only non-negative
                numbers (but may include zero values). Can be unsorted.
    Y       -   array[N], values to fit.
    N       -   number of points. If N is less than length of X/Y, only
                leading N elements are used.

OUTPUT PARAMETERS:
    A,B,C,D,G-  parameters of 5PL model
    Rep     -   fitting report. This structure has many fields, but ONLY
                ONES LISTED BELOW ARE SET:
                * Rep.IterationsCount - number of iterations performed
                * Rep.RMSError - root-mean-square error
                * Rep.AvgError - average absolute error
                * Rep.AvgRelError - average relative error (calculated
                  for non-zero Y-values)
                * Rep.MaxError - maximum absolute error
                * Rep.R2 - coefficient of determination, R-squared. This
                  coefficient is calculated as R2=1-RSS/TSS (in case of
                  nonlinear regression there are multiple ways to define
                  R2, each of them giving different results).

NOTE: for better stability B parameter is restricted by [+-1/1000,+-1000]
      range, and G is restricted by [1/10,10] range. It prevents
      algorithm from making trial steps deep into the area of bad
      parameters.

NOTE: after you obtained coefficients, you can evaluate model with
      LogisticCalc5() function.

NOTE: if you need better control over fitting process than provided by
      this function, you may use LogisticFit45X().

NOTE: step is automatically scaled according to scale of parameters
      being fitted before we compare its length with EpsX. Thus, this
      function can be used to fit data with very small or very large
      values without changing EpsX.

  -- ALGLIB PROJECT --
     Copyright 14.02.2014 by Bochkanov Sergey
*************************************************************************/
void logisticfit5(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, double &a, double &b, double &c, double &d, double &g, lsfitreport &rep, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This function fits five-parameter logistic (5PL) model to data provided
by user, subject to optional equality constraints on parameters A and D.
5PL model has following form:

    F(x|A,B,C,D,G) = D+(A-D)/Power(1+Power(x/C,B),G)

Here:
* A, D - with optional equality constraints
* B - unconstrained
* C>0
* G>0

IMPORTANT: unlike in 4PL fitting, output of this function is NOT
           constrained in such way that B is guaranteed to be positive.
           Furthermore, unlike 4PL, 5PL model is NOT symmetric with
           respect to B, so you can NOT transform model to equivalent
           one, with B having desired sign (>0 or <0).

5PL fitting is implemented as follows:
* we perform small number of restarts from random locations which helps
  to solve problem of bad local extrema. Locations are only partially
  random - we use input data to determine good initial guess, but we
  include controlled amount of randomness.
* we perform Levenberg-Marquardt fitting with very tight constraints on
  parameters B and C - it allows us to find good initial guess for the
  second stage without risk of running into "flat spot". Parameter G is
  fixed at G=1.
* second Levenberg-Marquardt round is performed without excessive
  constraints on B and C, but with G still equal to 1. Results from the
  previous round are used as initial guess.
* third Levenberg-Marquardt round relaxes constraints on G and tries two
  different models - one with B>0 and one with B<0.
* after fitting is done, we compare results with best values found so
  far, rewrite "best solution" if needed, and move to next random
  location.

Overall algorithm is very stable and is not prone to bad local extrema.
Furthermore, it automatically scales when input data have very large or
very small range.

INPUT PARAMETERS:
    X       -   array[N], stores X-values. MUST include only non-negative
                numbers (but may include zero values). Can be unsorted.
    Y       -   array[N], values to fit.
    N       -   number of points. If N is less than length of X/Y, only
                leading N elements are used.
    CnstrLeft-  optional equality constraint for model value at the left
                boundary (at X=0). Specify NAN (Not-a-Number) if you do
                not need constraint on the model value at X=0 (in C++
                you can pass alglib::fp_nan as parameter, in C# it will
                be Double.NaN). See below, section "EQUALITY CONSTRAINTS"
                for more information about constraints.
    CnstrRight- optional equality constraint for model value at
                X=infinity. Specify NAN (Not-a-Number) if you do not
                need constraint on the model value (in C++ you can pass
                alglib::fp_nan as parameter, in C# it will be
                Double.NaN). See below, section "EQUALITY CONSTRAINTS"
                for more information about constraints.

OUTPUT PARAMETERS:
    A,B,C,D,G-  parameters of 5PL model
    Rep     -   fitting report. This structure has many fields, but ONLY
                ONES LISTED BELOW ARE SET:
                * Rep.IterationsCount - number of iterations performed
                * Rep.RMSError - root-mean-square error
                * Rep.AvgError - average absolute error
                * Rep.AvgRelError - average relative error (calculated
                  for non-zero Y-values)
                * Rep.MaxError - maximum absolute error
                * Rep.R2 - coefficient of determination, R-squared. This
                  coefficient is calculated as R2=1-RSS/TSS (in case of
                  nonlinear regression there are multiple ways to define
                  R2, each of them giving different results).

NOTE: for better stability B parameter is restricted by [+-1/1000,+-1000]
      range, and G is restricted by [1/10,10] range. It prevents
      algorithm from making trial steps deep into the area of bad
      parameters.

NOTE: after you obtained coefficients, you can evaluate model with
      LogisticCalc5() function.

NOTE: if you need better control over fitting process than provided by
      this function, you may use LogisticFit45X().

NOTE: step is automatically scaled according to scale of parameters
      being fitted before we compare its length with EpsX. Thus, this
      function can be used to fit data with very small or very large
      values without changing EpsX.

EQUALITY CONSTRAINTS ON PARAMETERS

5PL solver supports equality constraints on model values at the left
boundary (X=0) and right boundary (X=infinity). These constraints are
completely optional and you can specify both of them, only one - or no
constraints at all.

Parameter CnstrLeft contains left constraint (or NAN for unconstrained
fitting), and CnstrRight contains right one.

Unlike 4PL one, 5PL model is NOT symmetric with respect to change in
sign of B. Thus, negative B's are possible, and left constraint may
constrain parameter A (for positive B's) - or parameter D (for negative
B's). The meaning of the right constraint changes similarly.

You do not have to decide which parameter to constrain - the algorithm
automatically determines the correct parameters as fitting progresses.
However, the question highlighted above is important when you interpret
fitting results.

  -- ALGLIB PROJECT --
     Copyright 14.02.2014 by Bochkanov Sergey
*************************************************************************/
void logisticfit5ec(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, const double cnstrleft, const double cnstrright, double &a, double &b, double &c, double &d, double &g, lsfitreport &rep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Nonlinear least squares fitting using function values only. Combination
of numerical differentiation and secant updates is used to obtain
function Jacobian.

Nonlinear task min(F(c)) is solved, where

    F(c) = (f(c,x[0])-y[0])^2 + ... + (f(c,x[n-1])-y[n-1])^2,

* N is a number of points,
* M is a dimension of a space points belong to,
* K is a dimension of a space of parameters being fitted,
* x is a set of N points, each of them is an M-dimensional vector,
* c is a K-dimensional vector of parameters being fitted

This subroutine uses only f(c,x[i]).

INPUT PARAMETERS:
    X       -   array[0..N-1,0..M-1], points (one row = one point)
    Y       -   array[0..N-1], function values.
    C       -   array[0..K-1], initial approximation to the solution,
    N       -   number of points, N>1
    M       -   dimension of space
    K       -   number of parameters being fitted
    DiffStep-   numerical differentiation step, >0. Obviously, the step
                size should not be too large in order to get a good
                numerical derivative. However, it also should not be too
                small because numerical errors are greatly amplified by
                numerical differentiation. By default, a symmetric
                3-point formula which provides good accuracy is used. It
                can be changed to a faster but less precise 2-point one
                with minlmsetnumdiff() function.

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

IMPORTANT: the LSFIT optimizer supports parallel model evaluation and
           parallel numerical differentiation ('callback parallelism').
           This feature, which is present in commercial ALGLIB editions,
           greatly accelerates fits with large datasets and/or expensive
           target functions.

           Callback parallelism is usually beneficial when a single pass
           over the entire dataset requires more than several
           milliseconds. See ALGLIB Reference Manual, 'Working with
           commercial version' section, and comments on lsfitfit()
           function for more information.

  -- ALGLIB --
     Copyright 18.10.2008 by Bochkanov Sergey
*************************************************************************/
void lsfitcreatef(const real_2d_array &x, const real_1d_array &y, const real_1d_array &c, const ae_int_t n, const ae_int_t m, const ae_int_t k, const double diffstep, lsfitstate &state, const xparams _xparams = alglib::xdefault);
void lsfitcreatef(const real_2d_array &x, const real_1d_array &y, const real_1d_array &c, const double diffstep, lsfitstate &state, const xparams _xparams = alglib::xdefault);


/*************************************************************************
Nonlinear least squares fitting using gradient only, without individual
weights.

Nonlinear task min(F(c)) is solved, where

    F(c) = (f(c,x[0])-y[0])^2 + ... + (f(c,x[n-1])-y[n-1])^2,

* N is a number of points,
* M is a dimension of a space points belong to,
* K is a dimension of a space of parameters being fitted,
* x is a set of N points, each of them is an M-dimensional vector,
* c is a K-dimensional vector of parameters being fitted

This subroutine uses only f(c,x[i]) and its gradient.

INPUT PARAMETERS:
    X       -   array[0..N-1,0..M-1], points (one row = one point)
    Y       -   array[0..N-1], function values.
    C       -   array[0..K-1], initial approximation to the solution,
    N       -   number of points, N>1
    M       -   dimension of space
    K       -   number of parameters being fitted

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

IMPORTANT: the LSFIT optimizer supports parallel model evaluation and
           parallel numerical differentiation ('callback parallelism').
           This feature, which is present in commercial ALGLIB editions,
           greatly accelerates fits with large datasets and/or expensive
           target functions.

           Callback parallelism is usually beneficial when a single pass
           over the entire dataset requires more than several
           milliseconds. See ALGLIB Reference Manual, 'Working with
           commercial version' section, and comments on lsfitfit()
           function for more information.

  -- ALGLIB --
     Copyright 17.08.2009 by Bochkanov Sergey
*************************************************************************/
void lsfitcreatefg(const real_2d_array &x, const real_1d_array &y, const real_1d_array &c, const ae_int_t n, const ae_int_t m, const ae_int_t k, lsfitstate &state, const xparams _xparams = alglib::xdefault);
void lsfitcreatefg(const real_2d_array &x, const real_1d_array &y, const real_1d_array &c, lsfitstate &state, const xparams _xparams = alglib::xdefault);


/*************************************************************************
Weighted nonlinear least squares fitting using function values only.
Combination of numerical differentiation and secant updates is used to
obtain function Jacobian.

Nonlinear task min(F(c)) is solved, where

    F(c) = (w[0]*(f(c,x[0])-y[0]))^2 + ... + (w[n-1]*(f(c,x[n-1])-y[n-1]))^2,

* N is a number of points,
* M is a dimension of a space points belong to,
* K is a dimension of a space of parameters being fitted,
* w is an N-dimensional vector of weight coefficients,
* x is a set of N points, each of them is an M-dimensional vector,
* c is a K-dimensional vector of parameters being fitted

This subroutine uses only f(c,x[i]).

INPUT PARAMETERS:
    X       -   array[0..N-1,0..M-1], points (one row = one point)
    Y       -   array[0..N-1], function values.
    W       -   weights, array[0..N-1]
    C       -   array[0..K-1], initial approximation to the solution,
    N       -   number of points, N>1
    M       -   dimension of space
    K       -   number of parameters being fitted
    DiffStep-   numerical differentiation step, >0. Obviously, the step
                size should not be too large in order to get a good
                numerical derivative. However, it also should not be too
                small because numerical errors are greatly amplified by
                numerical differentiation. By default, a symmetric
                3-point formula which provides good accuracy is used. It
                can be changed to a faster but less precise 2-point one
                with minlmsetnumdiff() function.

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

IMPORTANT: the LSFIT optimizer supports parallel model evaluation and
           parallel numerical differentiation ('callback parallelism').
           This feature, which is present in commercial ALGLIB editions,
           greatly accelerates fits with large datasets and/or expensive
           target functions.

           Callback parallelism is usually beneficial when a single pass
           over the entire dataset requires more than several
           milliseconds. See ALGLIB Reference Manual, 'Working with
           commercial version' section, and comments on lsfitfit()
           function for more information.

  -- ALGLIB --
     Copyright 18.10.2008 by Bochkanov Sergey
*************************************************************************/
void lsfitcreatewf(const real_2d_array &x, const real_1d_array &y, const real_1d_array &w, const real_1d_array &c, const ae_int_t n, const ae_int_t m, const ae_int_t k, const double diffstep, lsfitstate &state, const xparams _xparams = alglib::xdefault);
void lsfitcreatewf(const real_2d_array &x, const real_1d_array &y, const real_1d_array &w, const real_1d_array &c, const double diffstep, lsfitstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/*************************************************************************
Weighted nonlinear least squares fitting using gradient only.

The nonlinear task min(F(c)) is solved, where

    F(c) = (w[0]*(f(c,x[0])-y[0]))^2 + ... + (w[n-1]*(f(c,x[n-1])-y[n-1]))^2,

    * N is the number of points,
    * M is the dimension of the space the points belong to,
    * K is the dimension of the space of the parameters being fitted,
    * w is an N-dimensional vector of weight coefficients,
    * x is a set of N points, each of them an M-dimensional vector,
    * c is a K-dimensional vector of the parameters being fitted

This subroutine uses only f(c,x[i]) and its gradient.

INPUT PARAMETERS:
    X       -   array[0..N-1,0..M-1], points (one row = one point)
    Y       -   array[0..N-1], function values
    W       -   weights, array[0..N-1]
    C       -   array[0..K-1], initial approximation to the solution
    N       -   number of points, N>1
    M       -   dimension of space
    K       -   number of parameters being fitted

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

See also:
    LSFitResults
    LSFitCreateFG (fitting without weights)
    LSFitCreateWFGH (fitting using Hessian)

IMPORTANT: the LSFIT optimizer supports parallel model evaluation and
           parallel numerical differentiation ('callback parallelism').
           This feature, which is present in commercial ALGLIB editions,
           greatly accelerates fits with large datasets and/or expensive
           target functions.

           Callback parallelism is usually beneficial when a single pass
           over the entire dataset requires more than several milliseconds.

           See the ALGLIB Reference Manual, 'Working with commercial
           version' section, and the comments on the lsfitfit() function
           for more information.

  -- ALGLIB --
     Copyright 17.08.2009 by Bochkanov Sergey
*************************************************************************/
void lsfitcreatewfg(const real_2d_array &x, const real_1d_array &y, const real_1d_array &w, const real_1d_array &c, const ae_int_t n, const ae_int_t m, const ae_int_t k, lsfitstate &state, const xparams _xparams = alglib::xdefault);
void lsfitcreatewfg(const real_2d_array &x, const real_1d_array &y, const real_1d_array &w, const real_1d_array &c, lsfitstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This family of functions is used to launch iterations of the nonlinear
fitter.

These functions accept the following parameters:
    state   -   algorithm state
    func    -   callback which calculates the function (or merit function)
                value func at a given point x
    grad    -   callback which calculates the function (or merit function)
                value func and the gradient grad at a given point x
    rep     -   optional callback which is called after each iteration,
                can be NULL
    ptr     -   optional pointer which is passed to func/grad/hess/jac/rep,
                can be NULL

CALLBACK PARALLELISM:

The LSFIT optimizer supports parallel model evaluation and parallel
numerical differentiation ('callback parallelism'). This feature, which is
present in commercial ALGLIB editions, greatly accelerates fits with large
datasets and/or expensive target functions.

Callback parallelism is usually beneficial when a single pass over the
entire dataset requires more than several milliseconds. In this case the
job of computing model values at the dataset points can be split between
multiple threads. If you employ a numerical differentiation scheme, you
can also parallelize the computation of different components of the
numerical gradient.

Generally, the more computationally demanding your problem is (many
points, numerical differentiation, expensive model), the more you can gain
from multithreading.

The ALGLIB Reference Manual, 'Working with commercial version' section,
describes how to activate callback parallelism for your programming
language.

CALLBACK ARGUMENTS

This algorithm is somewhat unusual because it works with a parameterized
function f(C,X), where X is a function argument (we have many points
characterized by different argument values) and C is a parameter to fit.

For example, if we want to do a linear fit by f(c0,c1,x) = c0*x+c1, then x
will be the argument, and {c0,c1} will be the parameters.

It is important to understand that this algorithm finds a minimum in the
space of function PARAMETERS (not arguments), so it needs derivatives of
f() with respect to C, not X. In the example above it will need
f=c0*x+c1 and {df/dc0,df/dc1} = {x,1} instead of {df/dx} = {c0}.

  -- ALGLIB --
     Copyright 17.12.2023 by Bochkanov Sergey
*************************************************************************/
void lsfitfit(lsfitstate &state, void (*func)(const real_1d_array &c, const real_1d_array &x, double &func, void *ptr), void (*rep)(const real_1d_array &c, double func, void *ptr) = NULL, void *ptr = NULL, const xparams _xparams = alglib::xdefault);
void lsfitfit(lsfitstate &state, void (*func)(const real_1d_array &c, const real_1d_array &x, double &func, void *ptr), void (*grad)(const real_1d_array &c, const real_1d_array &x, double &func, real_1d_array &grad, void *ptr), void (*rep)(const real_1d_array &c, double func, void *ptr) = NULL, void *ptr = NULL, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  

/*************************************************************************
This function provides a reverse communication interface.

The reverse communication interface is not documented and is not
recommended for use. See the functions below, which provide a better
documented API.
*************************************************************************/
bool lsfititeration(lsfitstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Linear least squares fitting.

QR decomposition is used to reduce the task to an MxM one; then a
triangular solver or an SVD-based solver is used, depending on the
condition number of the system. This allows maximizing speed while
retaining decent accuracy.

IMPORTANT: if you want to perform polynomial fitting, it may be more
           convenient to use the PolynomialFit() function. This function
           gives the best results on polynomial problems and solves the
           numerical stability issues which arise when you fit high-degree
           polynomials to your data.

INPUT PARAMETERS:
    Y       -   array[0..N-1], function values in N points
    FMatrix -   a table of basis function values, array[0..N-1,0..M-1].
                FMatrix[I,J] - value of the J-th basis function in the
                I-th point.
    N       -   number of points used, N>=1
    M       -   number of basis functions, M>=1

OUTPUT PARAMETERS:
    C       -   decomposition coefficients, array[0..M-1]
    Rep     -   fitting report. The following fields are set:
                * Rep.TerminationType   completion code, always set to 1,
                                        which denotes success
                * Rep.TaskRCond         reciprocal of condition number
                * R2            non-adjusted coefficient of determination
                                (non-weighted)
                * RMSError      rms error on the (X,Y)
                * AvgError      average error on the (X,Y)
                * AvgRelError   average relative error on the non-zero Y
                * MaxError      maximum error
                NON-WEIGHTED ERRORS ARE CALCULATED

ERRORS IN PARAMETERS

This solver also calculates different kinds of errors in parameters and
fills the corresponding fields of the report:
* Rep.CovPar    covariance matrix for parameters, array[K,K]
* Rep.ErrPar    errors in parameters, array[K],
                errpar = sqrt(diag(CovPar))
* Rep.ErrCurve  vector of fit errors - standard deviations of the
                empirical best-fit curve from the "ideal" best-fit curve
                built with an infinite number of samples, array[N].
                errcurve = sqrt(diag(F*CovPar*F')), where F is the
                functions matrix.
* Rep.Noise     vector of per-point estimates of noise, array[N]

NOTE:       noise in the data is estimated as follows:
            * for fitting without user-supplied weights, all points are
              assumed to have the same level of noise, which is estimated
              from the data
            * for fitting with user-supplied weights, we assume that the
              noise level in the I-th point is inversely proportional to
              the I-th weight. The coefficient of proportionality is
              estimated from the data.

NOTE:       we apply a small amount of regularization when we invert the
            squared Jacobian and calculate the covariance matrix. It
            guarantees that the algorithm won't divide by zero during
            inversion, but skews error estimates a bit (the fractional
            error is about 10^-9).

            However, we believe that this difference is insignificant for
            all practical purposes except for the situation when you want
            to compare ALGLIB results with a "reference" implementation up
            to the last significant digit.

NOTE:       the covariance matrix is estimated using a correction for
            degrees of freedom (covariances are divided by N-M instead of
            being divided by N).

  ! FREE EDITION OF ALGLIB:
  !
  ! Free Edition of ALGLIB supports following important features for this
  ! function:
  ! * C++ version: x64 SIMD support using C++ intrinsics
  ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
  !
  ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB
  ! Reference Manual in order to find out how to activate SIMD support
  ! in ALGLIB.

  ! COMMERCIAL EDITION OF ALGLIB:
  !
  ! Commercial Edition of ALGLIB includes following important improvements
  ! of this function:
  ! * high-performance native backend with same C# interface (C# version)
  ! * multithreading support (C++ and C# versions)
  ! * hardware vendor (Intel) implementations of linear algebra primitives
  !   (C++ and C# versions, x86/x64 platform)
  !
  ! We recommend you to read 'Working with commercial version' section of
  ! ALGLIB Reference Manual in order to find out how to use performance-
  ! related features provided by commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 17.08.2009 by Bochkanov Sergey
*************************************************************************/
void lsfitlinear(const real_1d_array &y, const real_2d_array &fmatrix, const ae_int_t n, const ae_int_t m, real_1d_array &c, lsfitreport &rep, const xparams _xparams = alglib::xdefault);
void lsfitlinear(const real_1d_array &y, const real_2d_array &fmatrix, real_1d_array &c, lsfitreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Constrained linear least squares fitting.

This is a variation of LSFitLinear(), which searches for min|A*x-b| given
that K additional constraints C*x=bc are satisfied. It reduces the
original task to a modified one, min|B*y-d| WITHOUT constraints, then
LSFitLinear() is called.

IMPORTANT: if you want to perform polynomial fitting, it may be more
           convenient to use the PolynomialFit() function. This function
           gives the best results on polynomial problems and solves the
           numerical stability issues which arise when you fit high-degree
           polynomials to your data.

INPUT PARAMETERS:
    Y       -   array[0..N-1], function values in N points
    FMatrix -   a table of basis function values, array[0..N-1,0..M-1].
                FMatrix[I,J] - value of the J-th basis function in the
                I-th point.
    CMatrix -   a table of constraints, array[0..K-1,0..M].
                The I-th row of CMatrix corresponds to the I-th linear
                constraint:
                CMatrix[I,0]*C[0] + ... + CMatrix[I,M-1]*C[M-1] = CMatrix[I,M]
    N       -   number of points used, N>=1
    M       -   number of basis functions, M>=1
    K       -   number of constraints, 0 <= K < M.
                K=0 corresponds to the absence of constraints.

OUTPUT PARAMETERS:
    C       -   decomposition coefficients, array[0..M-1]
    Rep     -   fitting report. The following fields are set:
                * Rep.TerminationType   completion code:
                    * set to  1 on success
                    * set to -3 on failure due to problematic constraints:
                      either too many constraints (M or more), degenerate
                      constraints (some constraints are repeated twice) or
                      inconsistent constraints are specified
                * R2            non-adjusted coefficient of determination
                                (non-weighted)
                * RMSError      rms error on the (X,Y)
                * AvgError      average error on the (X,Y)
                * AvgRelError   average relative error on the non-zero Y
                * MaxError      maximum error
                NON-WEIGHTED ERRORS ARE CALCULATED

IMPORTANT:  this subroutine doesn't calculate the task's condition number
            for K<>0.

ERRORS IN PARAMETERS

This solver also calculates different kinds of errors in parameters and
fills the corresponding fields of the report:
* Rep.CovPar    covariance matrix for parameters, array[K,K]
* Rep.ErrPar    errors in parameters, array[K],
                errpar = sqrt(diag(CovPar))
* Rep.ErrCurve  vector of fit errors - standard deviations of the
                empirical best-fit curve from the "ideal" best-fit curve
                built with an infinite number of samples, array[N].
                errcurve = sqrt(diag(F*CovPar*F')), where F is the
                functions matrix.
* Rep.Noise     vector of per-point estimates of noise, array[N]

IMPORTANT:  errors in parameters are calculated without taking into
            account boundary/linear constraints! The presence of
            constraints changes the distribution of errors, but there is
            no easy way to account for constraints when you calculate the
            covariance matrix.

NOTE:       noise in the data is estimated as follows:
            * for fitting without user-supplied weights, all points are
              assumed to have the same level of noise, which is estimated
              from the data
            * for fitting with user-supplied weights, we assume that the
              noise level in the I-th point is inversely proportional to
              the I-th weight. The coefficient of proportionality is
              estimated from the data.

NOTE:       we apply a small amount of regularization when we invert the
            squared Jacobian and calculate the covariance matrix. It
            guarantees that the algorithm won't divide by zero during
            inversion, but skews error estimates a bit (the fractional
            error is about 10^-9).

            However, we believe that this difference is insignificant for
            all practical purposes except for the situation when you want
            to compare ALGLIB results with a "reference" implementation up
            to the last significant digit.

NOTE:       the covariance matrix is estimated using a correction for
            degrees of freedom (covariances are divided by N-M instead of
            being divided by N).

  ! FREE EDITION OF ALGLIB:
  !
  ! Free Edition of ALGLIB supports following important features for this
  ! function:
  ! * C++ version: x64 SIMD support using C++ intrinsics
  ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
  !
  ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB
  ! Reference Manual in order to find out how to activate SIMD support
  ! in ALGLIB.

  ! COMMERCIAL EDITION OF ALGLIB:
  !
  ! Commercial Edition of ALGLIB includes following important improvements
  ! of this function:
  ! * high-performance native backend with same C# interface (C# version)
  ! * multithreading support (C++ and C# versions)
  ! * hardware vendor (Intel) implementations of linear algebra primitives
  !   (C++ and C# versions, x86/x64 platform)
  !
  ! We recommend you to read 'Working with commercial version' section of
  ! ALGLIB Reference Manual in order to find out how to use performance-
  ! related features provided by commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 07.09.2009 by Bochkanov Sergey
*************************************************************************/
void lsfitlinearc(const real_1d_array &y, const real_2d_array &fmatrix, const real_2d_array &cmatrix, const ae_int_t n, const ae_int_t m, const ae_int_t k, real_1d_array &c, lsfitreport &rep, const xparams _xparams = alglib::xdefault);
void lsfitlinearc(const real_1d_array &y, const real_2d_array &fmatrix, const real_2d_array &cmatrix, real_1d_array &c, lsfitreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Weighted linear least squares fitting.

QR decomposition is used to reduce the task to an MxM one; then a
triangular solver or an SVD-based solver is used, depending on the
condition number of the system. This allows maximizing speed while
retaining decent accuracy.

IMPORTANT: if you want to perform polynomial fitting, it may be more
           convenient to use the PolynomialFit() function. This function
           gives the best results on polynomial problems and solves the
           numerical stability issues which arise when you fit high-degree
           polynomials to your data.

INPUT PARAMETERS:
    Y       -   array[0..N-1], function values in N points
    W       -   array[0..N-1], weights corresponding to function values.
                Each summand in the squared sum of approximation
                deviations from the given values is multiplied by the
                square of the corresponding weight.
    FMatrix -   a table of basis function values, array[0..N-1,0..M-1].
                FMatrix[I,J] - value of the J-th basis function in the
                I-th point.
    N       -   number of points used, N>=1
    M       -   number of basis functions, M>=1

OUTPUT PARAMETERS:
    C       -   decomposition coefficients, array[0..M-1]
    Rep     -   fitting report. The following fields are set:
                * Rep.TerminationType   always set to 1 (success)
                * Rep.TaskRCond         reciprocal of condition number
                * R2            non-adjusted coefficient of determination
                                (non-weighted)
                * RMSError      rms error on the (X,Y)
                * AvgError      average error on the (X,Y)
                * AvgRelError   average relative error on the non-zero Y
                * MaxError      maximum error
                NON-WEIGHTED ERRORS ARE CALCULATED

ERRORS IN PARAMETERS

This solver also calculates different kinds of errors in parameters and
fills the corresponding fields of the report:
* Rep.CovPar    covariance matrix for parameters, array[K,K]
* Rep.ErrPar    errors in parameters, array[K],
                errpar = sqrt(diag(CovPar))
* Rep.ErrCurve  vector of fit errors - standard deviations of the
                empirical best-fit curve from the "ideal" best-fit curve
                built with an infinite number of samples, array[N].
                errcurve = sqrt(diag(F*CovPar*F')), where F is the
                functions matrix.
* Rep.Noise     vector of per-point estimates of noise, array[N]

NOTE:       noise in the data is estimated as follows:
            * for fitting without user-supplied weights, all points are
              assumed to have the same level of noise, which is estimated
              from the data
            * for fitting with user-supplied weights, we assume that the
              noise level in the I-th point is inversely proportional to
              the I-th weight. The coefficient of proportionality is
              estimated from the data.

NOTE:       we apply a small amount of regularization when we invert the
            squared Jacobian and calculate the covariance matrix. It
            guarantees that the algorithm won't divide by zero during
            inversion, but skews error estimates a bit (the fractional
            error is about 10^-9).

            However, we believe that this difference is insignificant for
            all practical purposes except for the situation when you want
            to compare ALGLIB results with a "reference" implementation up
            to the last significant digit.

NOTE:       the covariance matrix is estimated using a correction for
            degrees of freedom (covariances are divided by N-M instead of
            being divided by N).

  ! FREE EDITION OF ALGLIB:
  !
  ! Free Edition of ALGLIB supports following important features for this
  ! function:
  ! * C++ version: x64 SIMD support using C++ intrinsics
  ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
  !
  ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB
  ! Reference Manual in order to find out how to activate SIMD support
  ! in ALGLIB.

  ! COMMERCIAL EDITION OF ALGLIB:
  !
  ! Commercial Edition of ALGLIB includes following important improvements
  ! of this function:
  ! * high-performance native backend with same C# interface (C# version)
  ! * multithreading support (C++ and C# versions)
  ! * hardware vendor (Intel) implementations of linear algebra primitives
  !   (C++ and C# versions, x86/x64 platform)
  !
  ! We recommend you to read 'Working with commercial version' section of
  ! ALGLIB Reference Manual in order to find out how to use performance-
  ! related features provided by commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 17.08.2009 by Bochkanov Sergey
*************************************************************************/
void lsfitlinearw(const real_1d_array &y, const real_1d_array &w, const real_2d_array &fmatrix, const ae_int_t n, const ae_int_t m, real_1d_array &c, lsfitreport &rep, const xparams _xparams = alglib::xdefault);
void lsfitlinearw(const real_1d_array &y, const real_1d_array &w, const real_2d_array &fmatrix, real_1d_array &c, lsfitreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Weighted constrained linear least squares fitting.

This is a variation of LSFitLinearW(), which searches for min|A*x-b| given
that K additional constraints C*x=bc are satisfied. It reduces the
original task to a modified one, min|B*y-d| WITHOUT constraints, then
LSFitLinearW() is called.

IMPORTANT: if you want to perform polynomial fitting, it may be more
           convenient to use the PolynomialFit() function. This function
           gives the best results on polynomial problems and solves the
           numerical stability issues which arise when you fit high-degree
           polynomials to your data.

INPUT PARAMETERS:
    Y       -   array[0..N-1], function values in N points
    W       -   array[0..N-1], weights corresponding to function values.
                Each summand in the squared sum of approximation
                deviations from the given values is multiplied by the
                square of the corresponding weight.
    FMatrix -   a table of basis function values, array[0..N-1,0..M-1].
                FMatrix[I,J] - value of the J-th basis function in the
                I-th point.
    CMatrix -   a table of constraints, array[0..K-1,0..M].
                The I-th row of CMatrix corresponds to the I-th linear
                constraint:
                CMatrix[I,0]*C[0] + ... + CMatrix[I,M-1]*C[M-1] = CMatrix[I,M]
    N       -   number of points used, N>=1
    M       -   number of basis functions, M>=1
    K       -   number of constraints, 0 <= K < M.
                K=0 corresponds to the absence of constraints.

OUTPUT PARAMETERS:
    C       -   decomposition coefficients, array[0..M-1]
    Rep     -   fitting report. The following fields are set:
                * Rep.TerminationType   completion code:
                    * set to  1 on success
                    * set to -3 on failure due to problematic constraints:
                      either too many constraints (M or more), degenerate
                      constraints (some constraints are repeated twice) or
                      inconsistent constraints are specified
                * R2            non-adjusted coefficient of determination
                                (non-weighted)
                * RMSError      rms error on the (X,Y)
                * AvgError      average error on the (X,Y)
                * AvgRelError   average relative error on the non-zero Y
                * MaxError      maximum error
                NON-WEIGHTED ERRORS ARE CALCULATED

IMPORTANT:  this subroutine doesn't calculate the task's condition number
            for K<>0.

ERRORS IN PARAMETERS

This solver also calculates different kinds of errors in parameters and
fills the corresponding fields of the report:
* Rep.CovPar    covariance matrix for parameters, array[K,K]
* Rep.ErrPar    errors in parameters, array[K],
                errpar = sqrt(diag(CovPar))
* Rep.ErrCurve  vector of fit errors - standard deviations of the
                empirical best-fit curve from the "ideal" best-fit curve
                built with an infinite number of samples, array[N].
                errcurve = sqrt(diag(F*CovPar*F')), where F is the
                functions matrix.
* Rep.Noise     vector of per-point estimates of noise, array[N]

IMPORTANT:  errors in parameters are calculated without taking into
            account boundary/linear constraints! The presence of
            constraints changes the distribution of errors, but there is
            no easy way to account for constraints when you calculate the
            covariance matrix.

NOTE:       noise in the data is estimated as follows:
            * for fitting without user-supplied weights, all points are
              assumed to have the same level of noise, which is estimated
              from the data
            * for fitting with user-supplied weights, we assume that the
              noise level in the I-th point is inversely proportional to
              the I-th weight. The coefficient of proportionality is
              estimated from the data.

NOTE:       we apply a small amount of regularization when we invert the
            squared Jacobian and calculate the covariance matrix. It
            guarantees that the algorithm won't divide by zero during
            inversion, but skews error estimates a bit (the fractional
            error is about 10^-9).

            However, we believe that this difference is insignificant for
            all practical purposes except for the situation when you want
            to compare ALGLIB results with a "reference" implementation up
            to the last significant digit.

NOTE:       the covariance matrix is estimated using a correction for
            degrees of freedom (covariances are divided by N-M instead of
            being divided by N).

  ! FREE EDITION OF ALGLIB:
  !
  ! Free Edition of ALGLIB supports following important features for this
  ! function:
  ! * C++ version: x64 SIMD support using C++ intrinsics
  ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
  !
  ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB
  ! Reference Manual in order to find out how to activate SIMD support
  ! in ALGLIB.

  ! COMMERCIAL EDITION OF ALGLIB:
  !
  ! Commercial Edition of ALGLIB includes following important improvements
  ! of this function:
  ! * high-performance native backend with same C# interface (C# version)
  ! * multithreading support (C++ and C# versions)
  ! * hardware vendor (Intel) implementations of linear algebra primitives
  !   (C++ and C# versions, x86/x64 platform)
  !
  ! We recommend you to read 'Working with commercial version' section of
  ! ALGLIB Reference Manual in order to find out how to use performance-
  ! related features provided by commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 07.09.2009 by Bochkanov Sergey
*************************************************************************/
void lsfitlinearwc(const real_1d_array &y, const real_1d_array &w, const real_2d_array &fmatrix, const real_2d_array &cmatrix, const ae_int_t n, const ae_int_t m, const ae_int_t k, real_1d_array &c, lsfitreport &rep, const xparams _xparams = alglib::xdefault);
void lsfitlinearwc(const real_1d_array &y, const real_1d_array &w, const real_2d_array &fmatrix, const real_2d_array &cmatrix, real_1d_array &c, lsfitreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Nonlinear least squares fitting results.

Called after return from LSFitFit().

INPUT PARAMETERS:
    State   -   algorithm state

OUTPUT PARAMETERS:
    C       -   array[K], solution
    Rep     -   optimization report. On success the following fields are
                set:
                * TerminationType   completion code:
                    * -8    the optimizer detected NAN/INF in the target
                            function and/or gradient
                    * -7    gradient verification failed.
                            See LSFitSetGradientCheck() for more
                            information.
                    * -3    inconsistent constraints
                    *  2    relative step is no more than EpsX
                    *  5    MaxIts steps were taken
                    *  7    stopping conditions are too stringent, further
                            improvement is impossible
                * R2            non-adjusted coefficient of determination
                                (non-weighted)
                * RMSError      rms error on the (X,Y)
                * AvgError      average error on the (X,Y)
                * AvgRelError   average relative error on the non-zero Y
                * MaxError      maximum error
                NON-WEIGHTED ERRORS ARE CALCULATED
                * WRMSError     weighted rms error on the (X,Y)

ERRORS IN PARAMETERS

This solver also calculates different kinds of errors in parameters and
fills the corresponding fields of the report:
* Rep.CovPar    covariance matrix for parameters, array[K,K]
* Rep.ErrPar    errors in parameters, array[K],
                errpar = sqrt(diag(CovPar))
* Rep.ErrCurve  vector of fit errors - standard deviations of the
                empirical best-fit curve from the "ideal" best-fit curve
                built with an infinite number of samples, array[N].
                errcurve = sqrt(diag(J*CovPar*J')), where J is the
                Jacobian matrix.
* Rep.Noise     vector of per-point estimates of noise, array[N]

IMPORTANT:  errors in parameters are calculated without taking into
            account boundary/linear constraints! The presence of
            constraints changes the distribution of errors, but there is
            no easy way to account for constraints when you calculate the
            covariance matrix.

NOTE:       noise in the data is estimated as follows:
            * for fitting without user-supplied weights, all points are
              assumed to have the same level of noise, which is estimated
              from the data
            * for fitting with user-supplied weights, we assume that the
              noise level in the I-th point is inversely proportional to
              the I-th weight. The coefficient of proportionality is
              estimated from the data.

NOTE:       we apply a small amount of regularization when we invert the
            squared Jacobian and calculate the covariance matrix. It
            guarantees that the algorithm won't divide by zero during
            inversion, but skews error estimates a bit (the fractional
            error is about 10^-9).

            However, we believe that this difference is insignificant for
            all practical purposes except for the situation when you want
            to compare ALGLIB results with a "reference" implementation up
            to the last significant digit.

NOTE:       the covariance matrix is estimated using a correction for
            degrees of freedom (covariances are divided by N-M instead of
            being divided by N).

  -- ALGLIB --
     Copyright 17.08.2009 by Bochkanov Sergey
*************************************************************************/
void lsfitresults(const lsfitstate &state, real_1d_array &c, lsfitreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  

/*************************************************************************
This function sets boundary constraints for the underlying optimizer.

Boundary constraints are inactive by default (after initial creation).
They are preserved until explicitly turned off with another SetBC() call.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    BndL    -   lower bounds, array[K].
                If some (or all) variables are unbounded, you may specify
                a very small number or -INF (the latter is recommended
                because it will allow the solver to use a better
                algorithm).
    BndU    -   upper bounds, array[K].
                If some (or all) variables are unbounded, you may specify
                a very large number or +INF (the latter is recommended
                because it will allow the solver to use a better
                algorithm).

NOTE 1: it is possible to specify BndL[i]=BndU[i]. In this case the I-th
        variable will be "frozen" at X[i]=BndL[i]=BndU[i].

NOTE 2: unlike other constrained optimization algorithms, this solver has
        the following useful properties:
        * bound constraints are always satisfied exactly
        * the function is evaluated only INSIDE the area specified by the
          bound constraints

  -- ALGLIB --
     Copyright 14.01.2011 by Bochkanov Sergey
*************************************************************************/
void lsfitsetbc(lsfitstate &state, const real_1d_array &bndl, const real_1d_array &bndu, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Stopping conditions for nonlinear least squares fitting.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    EpsX    -   >=0
                The subroutine finishes its work if on the k+1-th
                iteration the condition |v|<=EpsX is fulfilled, where:
                * |.| means the Euclidean norm
                * v - scaled step vector, v[i]=dx[i]/s[i]
                * dx - step vector, dx=X(k+1)-X(k)
                * s - scaling coefficients set by LSFitSetScale()
    MaxIts  -   maximum number of iterations. If MaxIts=0, the number of
                iterations is unlimited. Only Levenberg-Marquardt
                iterations are counted (L-BFGS/CG iterations are NOT
                counted because their cost is very low compared to that of
                LM).

NOTE: passing EpsX=0 and MaxIts=0 (simultaneously) will lead to an
automatic selection of the stopping criterion (according to the scheme
used by the MINLM unit).

  -- ALGLIB --
     Copyright 17.08.2009 by Bochkanov Sergey
*************************************************************************/
void lsfitsetcond(lsfitstate &state, const double epsx, const ae_int_t maxits, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  

/*************************************************************************
This subroutine turns on verification of the user-supplied analytic
gradient:
* the user calls this subroutine before fitting begins
* LSFitFit() is called
* prior to the actual fitting, for each point X_i in the dataset and each
  component C_j of the parameters being fitted, the algorithm performs the
  following steps:
  * two trial steps are made, to C_j-TestStep*S[j] and C_j+TestStep*S[j],
    where C_j is the j-th parameter and S[j] is the scale of the j-th
    parameter
  * if needed, the steps are bounded with respect to the constraints on
    C[]
  * F(X_i|C) is evaluated at these trial points
  * one more evaluation is performed in the middle point of the interval
  * a cubic model is built using the function values and derivatives at
    the trial points, and its prediction is compared with the actual value
    in the middle point
  * in case the difference between the prediction and the actual value is
    higher than some predetermined threshold, the algorithm stops with
    completion code -7; Rep.VarIdx is set to the index of the parameter
    with the incorrect derivative
* after verification is over, the algorithm proceeds to the actual
  optimization

NOTE 1: verification needs N*K (points count * parameters count) gradient
        evaluations. It is very costly and you should use it only for
        low-dimensional problems, when you want to be sure that you've
        correctly calculated your analytic derivatives. You should not use
        it in production code (unless you want to check derivatives
        provided by some third party).

NOTE 2: you should carefully choose TestStep. A value which is too large
        (so large that the function behaviour is significantly non-cubic)
        will lead to false alarms. You may use different steps for
        different parameters by means of setting the scale with
        LSFitSetScale().

NOTE 3: this function may lead to false positives. In case it reports that
        the I-th derivative was calculated incorrectly, you may decrease
        the test step and try one more time - maybe your function changes
        too sharply and your step is too large for such a rapidly changing
        function.

NOTE 4: this function works only for optimizers created with the
        LSFitCreateWFG() or LSFitCreateFG() constructors.

INPUT PARAMETERS:
    State   -   structure used to store algorithm state
    TestStep-   verification step:
                * TestStep=0 turns verification off
                * TestStep>0 activates verification

  -- ALGLIB --
     Copyright 15.06.2012 by Bochkanov Sergey
*************************************************************************/
void lsfitsetgradientcheck(lsfitstate &state, const double teststep, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets linear constraints for underlying optimizer. Linear constraints are inactive by default (after initial creation). They are preserved until explicitly turned off with another SetLC() call. INPUT PARAMETERS: State - structure stores algorithm state C - linear constraints, array[K,N+1]. Each row of C represents one constraint, either equality or inequality (see below): * first N elements correspond to coefficients, * last element corresponds to the right part. All elements of C (including right part) must be finite. CT - type of constraints, array[K]: * if CT[i]>0, then I-th constraint is C[i,*]*x >= C[i,n+1] * if CT[i]=0, then I-th constraint is C[i,*]*x = C[i,n+1] * if CT[i]<0, then I-th constraint is C[i,*]*x <= C[i,n+1] K - number of equality/inequality constraints, K>=0: * if given, only leading K elements of C/CT are used * if not given, automatically determined from sizes of C/CT IMPORTANT: if you have linear constraints, it is strongly recommended to set scale of variables with lsfitsetscale(). QP solver which is used to calculate linearly constrained steps heavily relies on good scaling of input problems. NOTE: linear (non-box) constraints are satisfied only approximately - there always exists some violation due to numerical errors and algorithmic limitations. NOTE: general linear constraints add significant overhead to solution process. Although solver performs roughly the same number of iterations (when compared with similar box-only constrained problem), each iteration now involves solution of linearly constrained QP subproblem, which requires ~3-5 times more Cholesky decompositions. Thus, if you can reformulate your problem in such a way that it has only box constraints, it may be beneficial to do so. -- ALGLIB -- Copyright 29.04.2017 by Bochkanov Sergey *************************************************************************/
void lsfitsetlc(lsfitstate &state, const real_2d_array &c, const integer_1d_array &ct, const ae_int_t k, const xparams _xparams = alglib::xdefault);
void lsfitsetlc(lsfitstate &state, const real_2d_array &c, const integer_1d_array &ct, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function is used to activate/deactivate nonmonotonic steps. Such steps may improve convergence on noisy problems or ones with minor smoothness defects. In its standard mode, LSFIT solver compares squared errors f[1] at the trial point with the value at the current point f[0]. Only steps that decrease f() are accepted. When the nonmonotonic mode is activated, f[1] is compared with maximum over several previous locations: max(f[0],f[-1],...,f[-CNT]). We still accept only steps that decrease f(), however our reference value has changed. The net result is that steps with f[1]>f[0] are now allowed. Nonmonotonic steps can help to handle minor defects in the objective (e.g. small noise, discontinuous jumps or nonsmoothness). However, it is important that the overall shape of the problem is still smooth. It may also help to minimize perfectly smooth targets with complex geometries by allowing the solver to jump through curved valleys. However, sometimes nonmonotonic steps degrade convergence by allowing an optimizer to wander too far away from the solution, so this feature should be used only after careful testing. INPUT PARAMETERS: State - structure stores algorithm state Cnt - nonmonotonic memory length, Cnt>=0: * 0 for traditional monotonic steps * 2..3 is recommended for the nonmonotonic optimization -- ALGLIB -- Copyright 07.04.2024 by Bochkanov Sergey *************************************************************************/
void lsfitsetnonmonotonicsteps(lsfitstate &state, const ae_int_t cnt, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets specific finite difference formula to be used for numerical differentiation. It works only for optimizers configured to use numerical differentiation; in other cases it has no effect. INPUT PARAMETERS: State - optimizer FormulaType - formula type: * 3 for a 3-point formula, which is also known as a symmetric difference quotient (the formula actually uses only two function values per variable: at x+h and x-h). A good choice for medium-accuracy setups, a default option. * 2 for a forward (or backward, depending on variable bounds) finite difference (f(x+h)-f(x))/h. This formula has the lowest accuracy. However, it is 4x faster than the 5-point formula and 2x faster than the 3-point one because, in addition to the central value f(x), it needs only one additional function evaluation per variable. -- ALGLIB -- Copyright 03.12.2024 by Bochkanov Sergey *************************************************************************/
void lsfitsetnumdiff(lsfitstate &state, const ae_int_t formulatype, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets scaling coefficients for underlying optimizer. ALGLIB optimizers use scaling matrices to test stopping conditions (step size and gradient are scaled before comparison with tolerances). Scale of the I-th variable is a translation invariant measure of: a) "how large" the variable is b) how large the step should be to make significant changes in the function Generally, scale is NOT considered to be a form of preconditioner. But LM optimizer is unique in that it uses scaling matrix both in the stopping condition tests and as Marquardt damping factor. Proper scaling is very important for the algorithm performance. It is less important for the quality of results, but still has some influence (it is easier to converge when variables are properly scaled, so premature stopping is possible when very badly scaled variables are combined with relaxed stopping conditions). INPUT PARAMETERS: State - structure stores algorithm state S - array[N], non-zero scaling coefficients S[i] may be negative, sign doesn't matter. -- ALGLIB -- Copyright 14.01.2011 by Bochkanov Sergey *************************************************************************/
void lsfitsetscale(lsfitstate &state, const real_1d_array &s, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This function sets maximum step length INPUT PARAMETERS: State - structure which stores algorithm state StpMax - maximum step length, >=0. Set StpMax to 0.0, if you don't want to limit step length. Use this subroutine when you optimize target function which contains exp() or other fast growing functions, and optimization algorithm makes too large steps which lead to overflow. This function allows us to reject steps that are too large (and therefore expose us to the possible overflow) without actually calculating function value at x+stp*d. NOTE: non-zero StpMax leads to moderate performance degradation because intermediate step of preconditioned L-BFGS optimization is incompatible with limits on step size. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
void lsfitsetstpmax(lsfitstate &state, const double stpmax, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function turns on/off reporting. INPUT PARAMETERS: State - structure which stores algorithm state NeedXRep- whether iteration reports are needed or not When reports are needed, State.C (current parameters) and State.F (current value of fitting function) are reported. -- ALGLIB -- Copyright 15.08.2010 by Bochkanov Sergey *************************************************************************/
void lsfitsetxrep(lsfitstate &state, const bool needxrep, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine fits piecewise linear curve to points with Ramer-Douglas- Peucker algorithm, which stops after achieving desired precision. IMPORTANT: * it performs non-least-squares fitting; it builds curve, but this curve does not minimize some least squares metric. See description of RDP algorithm (say, in Wikipedia) for more details on WHAT is performed. * this function does NOT work with parametric curves (i.e. curves which can be represented as {X(t),Y(t)}). It works with curves which can be represented as Y(X). Thus, it is impossible to model figures like circles with this function. If you want to work with parametric curves, you should use ParametricRDPFixed() function provided by "Parametric" subpackage of "Interpolation" package. INPUT PARAMETERS: X - array of X-coordinates: * at least N elements * can be unordered (points are automatically sorted) * this function may accept non-distinct X (see below for more information on handling of such inputs) Y - array of Y-coordinates: * at least N elements N - number of elements in X/Y Eps - positive number, desired precision. OUTPUT PARAMETERS: X2 - X-values of corner points for piecewise approximation, has length NSections+1 or zero (for NSections=0). Y2 - Y-values of corner points, has length NSections+1 or zero (for NSections=0). NSections- number of sections found by algorithm, NSections can be zero for degenerate datasets (N<=1 or all X[] are non-distinct). NOTE: X2/Y2 are ordered arrays, i.e. (X2[0],Y2[0]) is the first point of the curve, (X2[NSections],Y2[NSections]) is the last point. -- ALGLIB -- Copyright 02.10.2014 by Bochkanov Sergey *************************************************************************/
void lstfitpiecewiselinearrdp(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, const double eps, real_1d_array &x2, real_1d_array &y2, ae_int_t &nsections, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine fits piecewise linear curve to points with Ramer-Douglas- Peucker algorithm, which stops after generating specified number of linear sections. IMPORTANT: * it does NOT perform least-squares fitting; it builds curve, but this curve does not minimize some least squares metric. See description of RDP algorithm (say, in Wikipedia) for more details on WHAT is performed. * this function does NOT work with parametric curves (i.e. curves which can be represented as {X(t),Y(t)}). It works with curves which can be represented as Y(X). Thus, it is impossible to model figures like circles with this function. If you want to work with parametric curves, you should use ParametricRDPFixed() function provided by "Parametric" subpackage of "Interpolation" package. INPUT PARAMETERS: X - array of X-coordinates: * at least N elements * can be unordered (points are automatically sorted) * this function may accept non-distinct X (see below for more information on handling of such inputs) Y - array of Y-coordinates: * at least N elements N - number of elements in X/Y M - desired number of sections: * at most M sections are generated by this function * less than M sections can be generated if we have N<M (or some X are non-distinct). OUTPUT PARAMETERS: X2 - X-values of corner points for piecewise approximation, has length NSections+1 or zero (for NSections=0). Y2 - Y-values of corner points, has length NSections+1 or zero (for NSections=0). NSections- number of sections found by algorithm, NSections<=M, NSections can be zero for degenerate datasets (N<=1 or all X[] are non-distinct). NOTE: X2/Y2 are ordered arrays, i.e. (X2[0],Y2[0]) is the first point of the curve, (X2[NSections],Y2[NSections]) is the last point. -- ALGLIB -- Copyright 02.10.2014 by Bochkanov Sergey *************************************************************************/
void lstfitpiecewiselinearrdpfixed(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, const ae_int_t m, real_1d_array &x2, real_1d_array &y2, ae_int_t &nsections, const xparams _xparams = alglib::xdefault);
/************************************************************************* Fitting by polynomials in barycentric form. This function provides simple interface for unconstrained unweighted fitting. See PolynomialFitWC() if you need constrained fitting. The task is linear, thus the linear least squares solver is used. The complexity of this computational scheme is O(N*M^2), mostly dominated by the least squares solver SEE ALSO: PolynomialFitWC() NOTES: you can convert P from barycentric form to the power or Chebyshev basis with PolynomialBar2Pow() or PolynomialBar2Cheb() functions from POLINT subpackage. INPUT PARAMETERS: X - points, array[0..N-1]. Y - function values, array[0..N-1]. N - number of points, N>0 * if given, only leading N elements of X/Y are used * if not given, automatically determined from sizes of X/Y M - number of basis functions (= polynomial_degree + 1), M>=1 OUTPUT PARAMETERS: P - interpolant in barycentric form for Rep.TerminationType>0. undefined for Rep.TerminationType<0. Rep - fitting report. The following fields are set: * Rep.TerminationType is a completion code which is always set to 1 (success) * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED ! FREE EDITION OF ALGLIB: ! ! Free Edition of ALGLIB supports following important features for this ! function: ! * C++ version: x64 SIMD support using C++ intrinsics ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics ! ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB ! Reference Manual in order to find out how to activate SIMD support ! in ALGLIB. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) !
* hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. -- ALGLIB PROJECT -- Copyright 10.12.2009 by Bochkanov Sergey *************************************************************************/
void polynomialfit(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, const ae_int_t m, barycentricinterpolant &p, polynomialfitreport &rep, const xparams _xparams = alglib::xdefault);
void polynomialfit(const real_1d_array &x, const real_1d_array &y, const ae_int_t m, barycentricinterpolant &p, polynomialfitreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* Weighted fitting by polynomials in barycentric form, with constraints on function values or first derivatives. Small regularizing term is used when solving constrained tasks (to improve stability). Task is linear, so linear least squares solver is used. Complexity of this computational scheme is O(N*M^2), mostly dominated by least squares solver SEE ALSO: PolynomialFit() NOTES: you can convert P from barycentric form to the power or Chebyshev basis with PolynomialBar2Pow() or PolynomialBar2Cheb() functions from the POLINT subpackage. INPUT PARAMETERS: X - points, array[0..N-1]. Y - function values, array[0..N-1]. W - weights, array[0..N-1] Each summand in square sum of approximation deviations from given values is multiplied by the square of corresponding weight. Fill it by 1's if you don't want to solve weighted task. N - number of points, N>0. * if given, only leading N elements of X/Y/W are used * if not given, automatically determined from sizes of X/Y/W XC - points where polynomial values/derivatives are constrained, array[0..K-1]. YC - values of constraints, array[0..K-1] DC - array[0..K-1], types of constraints: * DC[i]=0 means that P(XC[i])=YC[i] * DC[i]=1 means that P'(XC[i])=YC[i] SEE BELOW FOR IMPORTANT INFORMATION ON CONSTRAINTS K - number of constraints, 0<=K<M. K=0 means no constraints (XC/YC/DC are not used in such cases) M - number of basis functions (= polynomial_degree + 1), M>=1 OUTPUT PARAMETERS: P - interpolant in barycentric form for Rep.TerminationType>0. undefined for Rep.TerminationType<0. Rep - fitting report. The following fields are set: * Rep.TerminationType is a completion code: * set to 1 on success * set to -3 on failure due to problematic constraints: either too many constraints, degenerate constraints or inconsistent constraints were passed * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). 
* AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED IMPORTANT: this subroutine doesn't calculate task's condition number for K<>0. SETTING CONSTRAINTS - DANGERS AND OPPORTUNITIES: Setting constraints can lead to undesired results, like ill-conditioned behavior, or inconsistency being detected. On the other hand, it allows us to improve quality of the fit. Here we summarize our experience with constrained regression splines: * even simple constraints can be inconsistent, see Wikipedia article on this subject: http://en.wikipedia.org/wiki/Birkhoff_interpolation * the greater is M (given fixed constraints), the more chances that constraints will be consistent * in the general case, consistency of constraints is NOT GUARANTEED. * in one special case, however, we can guarantee consistency. This case is: M>1 and constraints on the function values (NOT DERIVATIVES) Our final recommendation is to use constraints WHEN AND ONLY when you can't solve your task without them. Anything beyond special cases given above is not guaranteed and may result in inconsistency. ! FREE EDITION OF ALGLIB: ! ! Free Edition of ALGLIB supports following important features for this ! function: ! * C++ version: x64 SIMD support using C++ intrinsics ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics ! ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB ! Reference Manual in order to find out how to activate SIMD support ! in ALGLIB. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of !
ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. -- ALGLIB PROJECT -- Copyright 10.12.2009 by Bochkanov Sergey *************************************************************************/
void polynomialfitwc(const real_1d_array &x, const real_1d_array &y, const real_1d_array &w, const ae_int_t n, const real_1d_array &xc, const real_1d_array &yc, const integer_1d_array &dc, const ae_int_t k, const ae_int_t m, barycentricinterpolant &p, polynomialfitreport &rep, const xparams _xparams = alglib::xdefault);
void polynomialfitwc(const real_1d_array &x, const real_1d_array &y, const real_1d_array &w, const real_1d_array &xc, const real_1d_array &yc, const integer_1d_array &dc, const ae_int_t m, barycentricinterpolant &p, polynomialfitreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* Weighted fitting by cubic spline, with constraints on function values or derivatives. Equidistant grid with M-2 nodes on [min(x,xc),max(x,xc)] is used to build basis functions. Basis functions are cubic splines with continuous second derivatives and non-fixed first derivatives at interval ends. Small regularizing term is used when solving constrained tasks (to improve stability). Task is linear, so linear least squares solver is used. Complexity of this computational scheme is O(N*M^2), mostly dominated by least squares solver IMPORTANT: ALGLIB has a much faster version of the cubic spline fitting function - spline1dfit(). This function performs least squares fit in O(max(M,N)) time/memory. However, it does not support constraints. INPUT PARAMETERS: X - points, array[0..N-1]. Y - function values, array[0..N-1]. W - weights, array[0..N-1] Each summand in square sum of approximation deviations from given values is multiplied by the square of corresponding weight. Fill it by 1's if you don't want to solve weighted task. N - number of points (optional): * N>0 * if given, only first N elements of X/Y/W are processed * if not given, automatically determined from X/Y/W sizes XC - points where spline values/derivatives are constrained, array[0..K-1]. YC - values of constraints, array[0..K-1] DC - array[0..K-1], types of constraints: * DC[i]=0 means that S(XC[i])=YC[i] * DC[i]=1 means that S'(XC[i])=YC[i] SEE BELOW FOR IMPORTANT INFORMATION ON CONSTRAINTS K - number of constraints (optional): * 0<=K<M. * K=0 means no constraints (XC/YC/DC are not used) * if given, only first K elements of XC/YC/DC are used * if not given, automatically determined from XC/YC/DC M - number of basis functions ( = number_of_nodes+2), M>=4. OUTPUT PARAMETERS: S - spline interpolant. Rep - fitting report. 
The following fields are set: * Rep.TerminationType is a completion code: * set to 1 on success * set to -3 on failure due to problematic constraints: either too many constraints, degenerate constraints or inconsistent constraints were passed * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED IMPORTANT: this subroutine doesn't calculate task's condition number for K<>0. ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. SETTING CONSTRAINTS - DANGERS AND OPPORTUNITIES: Setting constraints can lead to undesired results, like ill-conditioned behavior, or inconsistency being detected. On the other hand, it allows us to improve quality of the fit. Here we summarize our experience with constrained regression splines: * excessive constraints can be inconsistent. Splines are piecewise cubic functions, and it is easy to create an example, where large number of constraints concentrated in small area will result in inconsistency. Just because spline is not flexible enough to satisfy all of them. And same constraints spread across the [min(x),max(x)] will be perfectly consistent. * the more evenly constraints are spread across [min(x),max(x)], the more chances that they will be consistent * the greater is M (given fixed constraints), the more chances that constraints will be consistent * in the general case, consistency of constraints IS NOT GUARANTEED. * in several special cases, however, we CAN guarantee consistency. * one of these cases is constraints on the function values AND/OR its derivatives at the interval boundaries. * another special case is ONE constraint on the function value (OR, but not AND, derivative) anywhere in the interval Our final recommendation is to use constraints WHEN AND ONLY WHEN you can't solve your task without them.
Anything beyond special cases given above is not guaranteed and may result in inconsistency. ! FREE EDITION OF ALGLIB: ! ! Free Edition of ALGLIB supports following important features for this ! function: ! * C++ version: x64 SIMD support using C++ intrinsics ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics ! ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB ! Reference Manual in order to find out how to activate SIMD support ! in ALGLIB. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. -- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/
void spline1dfitcubicwc(const real_1d_array &x, const real_1d_array &y, const real_1d_array &w, const ae_int_t n, const real_1d_array &xc, const real_1d_array &yc, const integer_1d_array &dc, const ae_int_t k, const ae_int_t m, spline1dinterpolant &s, spline1dfitreport &rep, const xparams _xparams = alglib::xdefault);
void spline1dfitcubicwc(const real_1d_array &x, const real_1d_array &y, const real_1d_array &w, const real_1d_array &xc, const real_1d_array &yc, const integer_1d_array &dc, const ae_int_t m, spline1dinterpolant &s, spline1dfitreport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Deprecated fitting function with O(N*M^2+M^3) running time. Superseded by spline1dfit(). -- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/
void spline1dfithermitedeprecated(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, const ae_int_t m, spline1dinterpolant &s, spline1dfitreport &rep, const xparams _xparams = alglib::xdefault);
void spline1dfithermitedeprecated(const real_1d_array &x, const real_1d_array &y, const ae_int_t m, spline1dinterpolant &s, spline1dfitreport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Weighted fitting by Hermite spline, with constraints on function values or first derivatives. Equidistant grid with M nodes on [min(x,xc),max(x,xc)] is used to build basis functions. Basis functions are Hermite splines. Small regularizing term is used when solving constrained tasks (to improve stability). Task is linear, so linear least squares solver is used. Complexity of this computational scheme is O(N*M^2), mostly dominated by least squares solver IMPORTANT: ALGLIB has a much faster version of the cubic spline fitting function - spline1dfit(). This function performs least squares fit in O(max(M,N)) time/memory. However, it does not support constraints. INPUT PARAMETERS: X - points, array[0..N-1]. Y - function values, array[0..N-1]. W - weights, array[0..N-1] Each summand in square sum of approximation deviations from given values is multiplied by the square of corresponding weight. Fill it by 1's if you don't want to solve weighted task. N - number of points (optional): * N>0 * if given, only first N elements of X/Y/W are processed * if not given, automatically determined from X/Y/W sizes XC - points where spline values/derivatives are constrained, array[0..K-1]. YC - values of constraints, array[0..K-1] DC - array[0..K-1], types of constraints: * DC[i]=0 means that S(XC[i])=YC[i] * DC[i]=1 means that S'(XC[i])=YC[i] SEE BELOW FOR IMPORTANT INFORMATION ON CONSTRAINTS K - number of constraints (optional): * 0<=K<M. * K=0 means no constraints (XC/YC/DC are not used) * if given, only first K elements of XC/YC/DC are used * if not given, automatically determined from XC/YC/DC M - number of basis functions (= 2 * number of nodes), M>=4, M IS EVEN! OUTPUT PARAMETERS: S - spline interpolant. Rep - fitting report. 
The following fields are set: * Rep.TerminationType is a completion code: * set to 1 on success * set to -3 on failure due to problematic constraints: either too many constraints, degenerate constraints or inconsistent constraints were passed * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error NON-WEIGHTED ERRORS ARE CALCULATED IMPORTANT: this subroutine doesn't calculate task's condition number for K<>0. IMPORTANT: this subroutine supports only even M's ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. SETTING CONSTRAINTS - DANGERS AND OPPORTUNITIES: Setting constraints can lead to undesired results, like ill-conditioned behavior, or inconsistency being detected. On the other hand, it allows us to improve quality of the fit. Here we summarize our experience with constrained regression splines: * excessive constraints can be inconsistent. Splines are piecewise cubic functions, and it is easy to create an example, where large number of constraints concentrated in small area will result in inconsistency. Just because spline is not flexible enough to satisfy all of them. And same constraints spread across the [min(x),max(x)] will be perfectly consistent. * the more evenly constraints are spread across [min(x),max(x)], the more chances that they will be consistent * the greater is M (given fixed constraints), the more chances that constraints will be consistent * in the general case, consistency of constraints is NOT GUARANTEED. * in several special cases, however, we can guarantee consistency. * one of these cases is M>=4 and constraints on the function value (AND/OR its derivative) at the interval boundaries.
* another special case is M>=4 and ONE constraint on the function value (OR, BUT NOT AND, derivative) anywhere in [min(x),max(x)] Our final recommendation is to use constraints WHEN AND ONLY when you can't solve your task without them. Anything beyond special cases given above is not guaranteed and may result in inconsistency. ! FREE EDITION OF ALGLIB: ! ! Free Edition of ALGLIB supports following important features for this ! function: ! * C++ version: x64 SIMD support using C++ intrinsics ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics ! ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB ! Reference Manual in order to find out how to activate SIMD support ! in ALGLIB. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. -- ALGLIB PROJECT -- Copyright 18.08.2009 by Bochkanov Sergey *************************************************************************/
void spline1dfithermitewc(const real_1d_array &x, const real_1d_array &y, const real_1d_array &w, const ae_int_t n, const real_1d_array &xc, const real_1d_array &yc, const integer_1d_array &dc, const ae_int_t k, const ae_int_t m, spline1dinterpolant &s, spline1dfitreport &rep, const xparams _xparams = alglib::xdefault);
void spline1dfithermitewc(const real_1d_array &x, const real_1d_array &y, const real_1d_array &w, const real_1d_array &xc, const real_1d_array &yc, const integer_1d_array &dc, const ae_int_t m, spline1dinterpolant &s, spline1dfitreport &rep, const xparams _xparams = alglib::xdefault);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // In this example we demonstrate linear fitting by f(x|a) = a*exp(0.5*x).
        //
        // We have:
        // * y - vector of experimental data
        // * fmatrix -  matrix of basis functions calculated at sample points
        //              Actually, we have only one basis function F0 = exp(0.5*x).
        //
        real_2d_array fmatrix = "[[0.606531],[0.670320],[0.740818],[0.818731],[0.904837],[1.000000],[1.105171],[1.221403],[1.349859],[1.491825],[1.648721]]";
        real_1d_array y = "[1.133719, 1.306522, 1.504604, 1.554663, 1.884638, 2.072436, 2.257285, 2.534068, 2.622017, 2.897713, 3.219371]";
        real_1d_array c;
        lsfitreport rep;

        //
        // Linear fitting without weights
        //
        lsfitlinear(y, fmatrix, c, rep);
        printf("%s\n", c.tostring(4).c_str()); // EXPECTED: [1.98650]

        //
        // Linear fitting with individual weights.
        // Slightly different result is returned.
        //
        real_1d_array w = "[1.414213, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]";
        lsfitlinearw(y, w, fmatrix, c, rep);
        printf("%s\n", c.tostring(4).c_str()); // EXPECTED: [1.983354]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // In this example we demonstrate linear fitting by f(x|a,b) = a*x+b
        // with simple constraint f(0)=0.
        //
        // We have:
        // * y - vector of experimental data
        // * fmatrix -  matrix of basis functions sampled at [0,1] with step 0.2:
        //                  [ 1.0   0.0 ]
        //                  [ 1.0   0.2 ]
        //                  [ 1.0   0.4 ]
        //                  [ 1.0   0.6 ]
        //                  [ 1.0   0.8 ]
        //                  [ 1.0   1.0 ]
        //              first column contains value of first basis function (constant term)
        //              second column contains second basis function (linear term)
        // * cmatrix -  matrix of linear constraints:
        //                  [ 1.0  0.0  0.0 ]
        //              first two columns contain coefficients before basis functions,
        //              last column contains desired value of their sum.
        //              So [1,0,0] means "1*constant_term + 0*linear_term = 0" 
        //
        real_1d_array y = "[0.072436,0.246944,0.491263,0.522300,0.714064,0.921929]";
        real_2d_array fmatrix = "[[1,0.0],[1,0.2],[1,0.4],[1,0.6],[1,0.8],[1,1.0]]";
        real_2d_array cmatrix = "[[1,0,0]]";
        real_1d_array c;
        lsfitreport rep;

        //
        // Constrained fitting without weights
        //
        lsfitlinearc(y, fmatrix, cmatrix, c, rep);
        printf("%s\n", c.tostring(3).c_str()); // EXPECTED: [0,0.932933]

        //
        // Constrained fitting with individual weights
        //
        real_1d_array w = "[1, 1.414213, 1, 1, 1, 1]";
        lsfitlinearwc(y, w, fmatrix, cmatrix, c, rep);
        printf("%s\n", c.tostring(3).c_str()); // EXPECTED: [0,0.938322]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;
void function_cx_1_func(const real_1d_array &c, const real_1d_array &x, double &func, void *ptr) 
{
    // this callback calculates f(c,x)=exp(-c0*sqr(x0))
    // where x is a position on X-axis and c is adjustable parameter
    func = exp(-c[0]*pow(x[0],2));
}
int main(int argc, char **argv)
{
    try
    {
        //
        // In this example we demonstrate exponential fitting by
        //
        //     f(x) = exp(-c*x^2)
        //
        // using numerical differentiation.
        //
        // IMPORTANT: the LSFIT optimizer supports parallel model  evaluation  and
        //            parallel numerical differentiation ('callback parallelism').
        //            This feature, which is present in commercial ALGLIB editions
        //            greatly  accelerates  fits  with   large   datasets   and/or
        //            expensive target functions.
        //
        //            Callback parallelism is usually  beneficial  when  a  single
        //            pass over the entire  dataset  requires  more  than  several
        //            milliseconds. This particular example,  of  course,  is  not
        //            suited for callback parallelism.
        //
        //            See ALGLIB Reference Manual, 'Working with commercial version'
        //            section,  and  comments  on  lsfitfit()  function  for  more
        //            information.
        //
        real_2d_array x = "[[-1],[-0.8],[-0.6],[-0.4],[-0.2],[0],[0.2],[0.4],[0.6],[0.8],[1.0]]";
        real_1d_array y = "[0.223130, 0.382893, 0.582748, 0.786628, 0.941765, 1.000000, 0.941765, 0.786628, 0.582748, 0.382893, 0.223130]";
        real_1d_array c = "[0.3]";
        double epsx = 0.000001;
        ae_int_t maxits = 0;
        lsfitstate state;
        lsfitreport rep;
        double diffstep = 0.0001;

        //
        // Fitting without weights
        //
        lsfitcreatef(x, y, c, diffstep, state);
        lsfitsetcond(state, epsx, maxits);
        alglib::lsfitfit(state, function_cx_1_func);
        lsfitresults(state, c, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 2
        printf("%s\n", c.tostring(1).c_str()); // EXPECTED: [1.5]

        //
        // Fitting with weights
        // (you can change weights and see how it changes result)
        //
        real_1d_array w = "[1,1,1,1,1,1,1,1,1,1,1]";
        lsfitcreatewf(x, y, w, c, diffstep, state);
        lsfitsetcond(state, epsx, maxits);
        alglib::lsfitfit(state, function_cx_1_func);
        lsfitresults(state, c, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 2
        printf("%s\n", c.tostring(1).c_str()); // EXPECTED: [1.5]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;
void function_cx_1_func(const real_1d_array &c, const real_1d_array &x, double &func, void *ptr) 
{
    // this callback calculates f(c,x)=exp(-c0*sqr(x0))
    // where x is a position on X-axis and c is adjustable parameter
    func = exp(-c[0]*pow(x[0],2));
}
int main(int argc, char **argv)
{
    try
    {
        //
        // In this example we demonstrate exponential fitting by
        //
        //     f(x) = exp(-c*x^2)
        //
        // subject to box constraints
        //
        //     0.0 <= c <= 1.0
        //
        // using function values only. The unconstrained solution is c=1.5, but
        // because of the constraints we should get c=1.0 (at the boundary).
        //
        // IMPORTANT: the LSFIT optimizer supports parallel model  evaluation  and
        //            parallel numerical differentiation ('callback parallelism').
        //            This feature, which is present in commercial ALGLIB editions
        //            greatly  accelerates  fits  with   large   datasets   and/or
        //            expensive target functions.
        //
        //            Callback parallelism is usually  beneficial  when  a  single
        //            pass over the entire  dataset  requires  more  than  several
        //            milliseconds. This particular example,  of  course,  is  not
        //            suited for callback parallelism.
        //
        //            See ALGLIB Reference Manual, 'Working with commercial version'
        //            section,  and  comments  on  lsfitfit()  function  for  more
        //            information.
        //
        real_2d_array x = "[[-1],[-0.8],[-0.6],[-0.4],[-0.2],[0],[0.2],[0.4],[0.6],[0.8],[1.0]]";
        real_1d_array y = "[0.223130, 0.382893, 0.582748, 0.786628, 0.941765, 1.000000, 0.941765, 0.786628, 0.582748, 0.382893, 0.223130]";
        real_1d_array c = "[0.3]";
        real_1d_array bndl = "[0.0]";
        real_1d_array bndu = "[1.0]";
        double epsx = 0.000001;
        ae_int_t maxits = 0;
        lsfitstate state;
        lsfitreport rep;
        double diffstep = 0.0001;

        lsfitcreatef(x, y, c, diffstep, state);
        lsfitsetbc(state, bndl, bndu);
        lsfitsetcond(state, epsx, maxits);
        alglib::lsfitfit(state, function_cx_1_func);
        lsfitresults(state, c, rep);
        printf("%s\n", c.tostring(1).c_str()); // EXPECTED: [1.0]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;
void function_cx_1_func(const real_1d_array &c, const real_1d_array &x, double &func, void *ptr) 
{
    // this callback calculates f(c,x)=exp(-c0*sqr(x0))
    // where x is a position on X-axis and c is adjustable parameter
    func = exp(-c[0]*pow(x[0],2));
}
void function_cx_1_grad(const real_1d_array &c, const real_1d_array &x, double &func, real_1d_array &grad, void *ptr) 
{
    // this callback calculates f(c,x)=exp(-c0*sqr(x0)) and gradient G={df/dc[i]}
    // where x is a position on X-axis and c is adjustable parameter.
    // IMPORTANT: gradient is calculated with respect to C, not to X
    func = exp(-c[0]*pow(x[0],2));
    grad[0] = -pow(x[0],2)*func;
}
int main(int argc, char **argv)
{
    try
    {
        //
        // In this example we demonstrate exponential fitting by
        //
        //     f(x) = exp(-c*x^2)
        //
        // using function value and gradient (with respect to c).
        //
        // IMPORTANT: the LSFIT optimizer supports parallel model  evaluation  and
        //            parallel numerical differentiation ('callback parallelism').
        //            This feature, which is present in commercial ALGLIB editions
        //            greatly  accelerates  fits  with   large   datasets   and/or
        //            expensive target functions.
        //
        //            Callback parallelism is usually  beneficial  when  a  single
        //            pass over the entire  dataset  requires  more  than  several
        //            milliseconds. This particular example,  of  course,  is  not
        //            suited for callback parallelism.
        //
        //            See ALGLIB Reference Manual, 'Working with commercial version'
        //            section,  and  comments  on  lsfitfit()  function  for  more
        //            information.
        //
        real_2d_array x = "[[-1],[-0.8],[-0.6],[-0.4],[-0.2],[0],[0.2],[0.4],[0.6],[0.8],[1.0]]";
        real_1d_array y = "[0.223130, 0.382893, 0.582748, 0.786628, 0.941765, 1.000000, 0.941765, 0.786628, 0.582748, 0.382893, 0.223130]";
        real_1d_array c = "[0.3]";
        double epsx = 0.000001;
        ae_int_t maxits = 0;
        lsfitstate state;
        lsfitreport rep;

        //
        // Fitting without weights
        //
        lsfitcreatefg(x, y, c, state);
        lsfitsetcond(state, epsx, maxits);
        alglib::lsfitfit(state, function_cx_1_func, function_cx_1_grad);
        lsfitresults(state, c, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 2
        printf("%s\n", c.tostring(1).c_str()); // EXPECTED: [1.5]

        //
        // Fitting with weights
        // (you can change weights and see how it changes result)
        //
        real_1d_array w = "[1,1,1,1,1,1,1,1,1,1,1]";
        lsfitcreatewfg(x, y, w, c, state);
        lsfitsetcond(state, epsx, maxits);
        alglib::lsfitfit(state, function_cx_1_func, function_cx_1_grad);
        lsfitresults(state, c, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 2
        printf("%s\n", c.tostring(1).c_str()); // EXPECTED: [1.5]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;
void function_debt_func(const real_1d_array &c, const real_1d_array &x, double &func, void *ptr) 
{
    //
    // this callback calculates f(c,x)=c[0]*(1+c[1]*(pow(x[0]-1999,c[2])-1))
    //
    func = c[0]*(1+c[1]*(pow(x[0]-1999,c[2])-1));
}
int main(int argc, char **argv)
{
    try
    {
        //
        // In this example we demonstrate fitting by
        //
        //     f(x) = c[0]*(1+c[1]*((x-1999)^c[2]-1))
        //
        // subject to box constraints
        //
        //     -INF  < c[0] < +INF
        //      -10 <= c[1] <= +10
        //      0.1 <= c[2] <= 2.0
        //
        // The data we want to fit are time series of Japan national debt
        // collected from 2000 to 2008 measured in USD (dollars, not
        // millions of dollars).
        //
        // Our variables are:
        //     c[0] - debt value at initial moment (2000),
        //     c[1] - direction coefficient (growth or decrease),
        //     c[2] - curvature coefficient.
        // You may see that our variables are badly scaled - the first one
        // is of order 10^12, while the next two are about 1 in magnitude.
        // Such a problem is difficult to solve without some kind of scaling.
        // That is exactly where the lsfitsetscale() function can be used.
        // We set the scale of our variables to [1.0E12, 1, 1], which allows
        // us to solve this problem easily.
        //
        // You can try commenting out the lsfitsetscale() call - you will
        // see that the algorithm fails to converge.
        //
        // IMPORTANT: the LSFIT optimizer supports parallel model  evaluation  and
        //            parallel numerical differentiation ('callback parallelism').
        //            This feature, which is present in commercial ALGLIB editions
        //            greatly  accelerates  fits  with   large   datasets   and/or
        //            expensive target functions.
        //
        //            Callback parallelism is usually  beneficial  when  a  single
        //            pass over the entire  dataset  requires  more  than  several
        //            milliseconds. This particular example,  of  course,  is  not
        //            suited for callback parallelism.
        //
        //            See ALGLIB Reference Manual, 'Working with commercial version'
        //            section,  and  comments  on  lsfitfit()  function  for  more
        //            information.
        //
        real_2d_array x = "[[2000],[2001],[2002],[2003],[2004],[2005],[2006],[2007],[2008]]";
        real_1d_array y = "[4323239600000.0, 4560913100000.0, 5564091500000.0, 6743189300000.0, 7284064600000.0, 7050129600000.0, 7092221500000.0, 8483907600000.0, 8625804400000.0]";
        real_1d_array c = "[1.0e+13, 1, 1]";
        double epsx = 1.0e-5;
        real_1d_array bndl = "[-inf, -10, 0.1]";
        real_1d_array bndu = "[+inf, +10, 2.0]";
        real_1d_array s = "[1.0e+12, 1, 1]";
        ae_int_t maxits = 0;
        lsfitstate state;
        lsfitreport rep;
        double diffstep = 1.0e-5;

        lsfitcreatef(x, y, c, diffstep, state);
        lsfitsetcond(state, epsx, maxits);
        lsfitsetbc(state, bndl, bndu);
        lsfitsetscale(state, s);
        alglib::lsfitfit(state, function_debt_func);
        lsfitresults(state, c, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 2
        printf("%s\n", c.tostring(-2).c_str()); // EXPECTED: [4.142560E+12, 0.434240, 0.565376]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates polynomial fitting.
        //
        // Fitting is done by two (M=2) functions from polynomial basis:
        //     f0 = 1
        //     f1 = x
        // Basically, it is just a linear fit; more complex polynomials may be used
        // (e.g. parabolas with M=3, cubics with M=4), but even such a simple fit
        // allows us to demonstrate the polynomialfit() function in action.
        //
        // We have:
        // * x      set of abscissas
        // * y      experimental data
        //
        // Additionally we demonstrate weighted fitting, where the second point
        // has more weight than the others.
        //
        real_1d_array x = "[0.0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0]";
        real_1d_array y = "[0.00,0.05,0.26,0.32,0.33,0.43,0.60,0.60,0.77,0.98,1.02]";
        ae_int_t m = 2;
        double t = 2;
        barycentricinterpolant p;
        polynomialfitreport rep;
        double v;

        //
        // Fitting without individual weights
        //
        // NOTE: result is returned as barycentricinterpolant structure.
        //       if you want to get representation in the power basis,
        //       you can use barycentricbar2pow() function to convert
        //       from barycentric to power representation (see docs for 
        //       POLINT subpackage for more info).
        //
        polynomialfit(x, y, m, p, rep);
        v = barycentriccalc(p, t);
        printf("%.2f\n", double(v)); // EXPECTED: 2.011

        //
        // Fitting with individual weights
        //
        // NOTE: slightly different result is returned
        //
        real_1d_array w = "[1,1.414213562,1,1,1,1,1,1,1,1,1]";
        real_1d_array xc = "[]";
        real_1d_array yc = "[]";
        integer_1d_array dc = "[]";
        polynomialfitwc(x, y, w, xc, yc, dc, m, p, rep);
        v = barycentriccalc(p, t);
        printf("%.2f\n", double(v)); // EXPECTED: 2.023
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates polynomial fitting.
        //
        // Fitting is done by two (M=2) functions from polynomial basis:
        //     f0 = 1
        //     f1 = x
        // with simple constraint on function value
        //     f(0) = 0
        // Basically, it is just a linear fit; more complex polynomials may be used
        // (e.g. parabolas with M=3, cubics with M=4), but even such a simple fit
        // allows us to demonstrate the polynomialfitwc() function in action.
        //
        // We have:
        // * x      set of abscissas
        // * y      experimental data
        // * xc     points where constraints are placed
        // * yc     constrained values (function value or derivative,
        //          as indicated by dc)
        // * dc     derivative indices
        //          (0 means function itself, 1 means first derivative)
        //
        real_1d_array x = "[1.0,1.0]";
        real_1d_array y = "[0.9,1.1]";
        real_1d_array w = "[1,1]";
        real_1d_array xc = "[0]";
        real_1d_array yc = "[0]";
        integer_1d_array dc = "[0]";
        double t = 2;
        ae_int_t m = 2;
        barycentricinterpolant p;
        polynomialfitreport rep;
        double v;

        polynomialfitwc(x, y, w, xc, yc, dc, m, p, rep);
        v = barycentriccalc(p, t);
        printf("%.2f\n", double(v)); // EXPECTED: 2.000
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // In this example we demonstrate penalized spline fitting of noisy data
        //
        // We have:
        // * x - abscissas
        // * y - vector of experimental data, straight line with small noise
        //
        real_1d_array x = "[0.00,0.10,0.20,0.30,0.40,0.50,0.60,0.70,0.80,0.90]";
        real_1d_array y = "[0.10,0.00,0.30,0.40,0.30,0.40,0.62,0.68,0.75,0.95]";
        double v;
        spline1dinterpolant s;
        spline1dfitreport rep;

        //
        // Fit with VERY small amount of smoothing (eps = 1.0E-9)
        // and large number of basis functions (M=50).
        //
        // With such small regularization penalized spline almost fully reproduces function values
        //
        spline1dfit(x, y, 50, 0.000000001, s, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        v = spline1dcalc(s, 0.0);
        printf("%.1f\n", double(v)); // EXPECTED: 0.10

        //
        // Fit with VERY large amount of smoothing eps=1000000
        // and large number of basis functions (M=50).
        //
        // With such regularization our spline should become close to the straight-line fit.
        // We will compare its value at x=1.0 with the result obtained from such a fit.
        //
        spline1dfit(x, y, 50, 1000000, s, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        v = spline1dcalc(s, 1.0);
        printf("%.2f\n", double(v)); // EXPECTED: 0.969

        //
        // In real-life applications you may need a moderate degree of smoothing,
        // so we fit once more with eps=0.1.
        //
        spline1dfit(x, y, 50, 0.1, s, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        real_1d_array x = "[1,2,3,4,5,6,7,8]";
        real_1d_array y = "[0.06313223,0.44552624,0.61838364,0.71385108,0.77345838,0.81383140,0.84280033,0.86449822]";
        ae_int_t n = 8;
        double a;
        double b;
        double c;
        double d;
        lsfitreport rep;

        //
        // Test logisticfit4() on carefully designed data with a priori known answer.
        //
        logisticfit4(x, y, n, a, b, c, d, rep);
        printf("%.1f\n", double(a)); // EXPECTED: -1.000
        printf("%.1f\n", double(b)); // EXPECTED: 1.200
        printf("%.1f\n", double(c)); // EXPECTED: 0.900
        printf("%.1f\n", double(d)); // EXPECTED: 1.000

        //
        // Evaluate model at point x=0.5
        //
        double v;
        v = logisticcalc4(0.5, a, b, c, d);
        printf("%.2f\n", double(v)); // EXPECTED: -0.33874308
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        real_1d_array x = "[1,2,3,4,5,6,7,8]";
        real_1d_array y = "[0.1949776139,0.5710060208,0.726002637,0.8060434158,0.8534547965,0.8842071579,0.9054773317,0.9209088299]";
        ae_int_t n = 8;
        double a;
        double b;
        double c;
        double d;
        double g;
        lsfitreport rep;

        //
        // Test logisticfit5() on carefully designed data with a priori known answer.
        //
        logisticfit5(x, y, n, a, b, c, d, g, rep);
        printf("%.1f\n", double(a)); // EXPECTED: -1.000
        printf("%.1f\n", double(b)); // EXPECTED: 1.200
        printf("%.1f\n", double(c)); // EXPECTED: 0.900
        printf("%.1f\n", double(d)); // EXPECTED: 1.000
        printf("%.1f\n", double(g)); // EXPECTED: 1.200

        //
        // Evaluate model at point x=0.5
        //
        double v;
        v = logisticcalc5(0.5, a, b, c, d, g);
        printf("%.2f\n", double(v)); // EXPECTED: -0.2354656824
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

mannwhitneyutest
/*************************************************************************
Mann-Whitney U-test

This test checks hypotheses about whether X and Y are samples of two
continuous distributions of the same shape and same median or whether
their medians are different.

The following tests are performed:
* two-tailed test (null hypothesis - the medians are equal)
* left-tailed test (null hypothesis - the median of the first sample is
  greater than or equal to the median of the second sample)
* right-tailed test (null hypothesis - the median of the first sample is
  less than or equal to the median of the second sample).

Requirements:
* the samples are independent
* X and Y are continuous distributions (or discrete distributions well-
  approximating continuous distributions)
* distributions of X and Y have the same shape. The only possible
  difference is their position (i.e. the value of the median)
* the number of elements in each sample is not less than 5
* the scale of measurement should be ordinal, interval or ratio (i.e. the
  test could not be applied to nominal variables).

The test is non-parametric and doesn't require distributions to be normal.

Input parameters:
    X   -   sample 1. Array whose index goes from 0 to N-1.
    N   -   size of the sample. N>=5
    Y   -   sample 2. Array whose index goes from 0 to M-1.
    M   -   size of the sample. M>=5

Output parameters:
    BothTails   -   p-value for two-tailed test.
                    If BothTails is less than the given significance level
                    the null hypothesis is rejected.
    LeftTail    -   p-value for left-tailed test.
                    If LeftTail is less than the given significance level,
                    the null hypothesis is rejected.
    RightTail   -   p-value for right-tailed test.
                    If RightTail is less than the given significance level
                    the null hypothesis is rejected.

To calculate p-values, special approximation is used. This method lets us
calculate p-values with satisfactory accuracy in interval [0.0001, 1].
There is no approximation outside the [0.0001, 1] interval. Therefore, if
the significance level lies outside this interval, the test returns 0.0001.

Relative precision of approximation of p-value:

    N          M          Max.err.   Rms.err.
    5..10      N..10      1.4e-02    6.0e-04
    5..10      N..100     2.2e-02    5.3e-06
    10..15     N..15      1.0e-02    3.2e-04
    10..15     N..100     1.0e-02    2.2e-05
    15..100    N..100     6.1e-03    2.7e-06

For N,M>100 accuracy checks weren't put into practice, but taking into
account characteristics of asymptotic approximation used, precision should
not be sharply different from the values for interval [5, 100].

NOTE: P-value approximation was optimized for 0.0001<=p<=0.2500. Thus,
      P's outside of this interval are enforced to these bounds. Say, you
      may quite often get P equal to exactly 0.25 or 0.0001.

  -- ALGLIB --
     Copyright 09.04.2007 by Bochkanov Sergey
*************************************************************************/
void mannwhitneyutest(const real_1d_array &x, const ae_int_t n, const real_1d_array &y, const ae_int_t m, double &bothtails, double &lefttail, double &righttail, const xparams _xparams = alglib::xdefault);
cmatrixdet
cmatrixludet
rmatrixdet
rmatrixludet
spdmatrixcholeskydet
spdmatrixdet
matdet_d_1 Determinant calculation, real matrix, short form
matdet_d_2 Determinant calculation, real matrix, full form
matdet_d_3 Determinant calculation, complex matrix, short form
matdet_d_4 Determinant calculation, complex matrix, full form
matdet_d_5 Determinant calculation, complex matrix with zero imaginary part, short form
/*************************************************************************
Calculation of the determinant of a general matrix

Input parameters:
    A   -   matrix, array[0..N-1, 0..N-1]
    N   -   (optional) size of matrix A:
            * if given, only principal NxN submatrix is processed and
              overwritten. Other elements are unchanged.
            * if not given, automatically determined from matrix size
              (A must be square matrix)

Result: determinant of matrix A.

  -- ALGLIB --
     Copyright 2005 by Bochkanov Sergey
*************************************************************************/
alglib::complex cmatrixdet(const complex_2d_array &a, const ae_int_t n, const xparams _xparams = alglib::xdefault);
alglib::complex cmatrixdet(const complex_2d_array &a, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  

/*************************************************************************
Determinant calculation of the matrix given by its LU decomposition.

Input parameters:
    A       -   LU decomposition of the matrix (output of RMatrixLU
                subroutine).
    Pivots  -   table of permutations which were made during the LU
                decomposition. Output of RMatrixLU subroutine.
    N       -   (optional) size of matrix A:
                * if given, only principal NxN submatrix is processed and
                  overwritten. Other elements are unchanged.
                * if not given, automatically determined from matrix size
                  (A must be square matrix)

Result: matrix determinant.

  -- ALGLIB --
     Copyright 2005 by Bochkanov Sergey
*************************************************************************/
alglib::complex cmatrixludet(const complex_2d_array &a, const integer_1d_array &pivots, const ae_int_t n, const xparams _xparams = alglib::xdefault);
alglib::complex cmatrixludet(const complex_2d_array &a, const integer_1d_array &pivots, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Calculation of the determinant of a general matrix

Input parameters:
    A   -   matrix, array[0..N-1, 0..N-1]
    N   -   (optional) size of matrix A:
            * if given, only principal NxN submatrix is processed and
              overwritten. Other elements are unchanged.
            * if not given, automatically determined from matrix size
              (A must be square matrix)

Result: determinant of matrix A.

  -- ALGLIB --
     Copyright 2005 by Bochkanov Sergey
*************************************************************************/
double rmatrixdet(const real_2d_array &a, const ae_int_t n, const xparams _xparams = alglib::xdefault);
double rmatrixdet(const real_2d_array &a, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/*************************************************************************
Determinant calculation of the matrix given by its LU decomposition.

Input parameters:
    A       -   LU decomposition of the matrix (output of RMatrixLU
                subroutine).
    Pivots  -   table of permutations which were made during the LU
                decomposition. Output of RMatrixLU subroutine.
    N       -   (optional) size of matrix A:
                * if given, only principal NxN submatrix is processed and
                  overwritten. Other elements are unchanged.
                * if not given, automatically determined from matrix size
                  (A must be square matrix)

Result: matrix determinant.

  -- ALGLIB --
     Copyright 2005 by Bochkanov Sergey
*************************************************************************/
double rmatrixludet(const real_2d_array &a, const integer_1d_array &pivots, const ae_int_t n, const xparams _xparams = alglib::xdefault);
double rmatrixludet(const real_2d_array &a, const integer_1d_array &pivots, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Determinant calculation of the matrix given by the Cholesky decomposition.

Input parameters:
    A       -   Cholesky decomposition, output of SMatrixCholesky
                subroutine.
    N       -   (optional) size of matrix A:
                * if given, only principal NxN submatrix is processed and
                  overwritten. other elements are unchanged.
                * if not given, automatically determined from matrix size
                  (A must be square matrix)

As the determinant is equal to the product of squares of diagonal
elements, it's not necessary to specify which triangle - lower or upper -
the matrix is stored in.

Result: matrix determinant.

  -- ALGLIB --
     Copyright 2005-2008 by Bochkanov Sergey
*************************************************************************/
double spdmatrixcholeskydet(const real_2d_array &a, const ae_int_t n, const xparams _xparams = alglib::xdefault);
double spdmatrixcholeskydet(const real_2d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Determinant calculation of the symmetric positive definite matrix.

Input parameters:
    A       -   matrix, array[N,N]
    N       -   (optional) size of matrix A:
                * if given, only principal NxN submatrix is processed and
                  overwritten. other elements are unchanged.
                * if not given, automatically determined from matrix size
                  (A must be square matrix)
    IsUpper -   storage type:
                * if True, symmetric matrix A is given by its upper
                  triangle, and the lower triangle isn't used/changed by
                  function
                * if False, symmetric matrix A is given by its lower
                  triangle, and the upper triangle isn't used/changed by
                  function

Result:
    determinant of matrix A.
    If matrix A is not positive definite, an exception is generated.

  -- ALGLIB --
     Copyright 2005-2008 by Bochkanov Sergey
*************************************************************************/
double spdmatrixdet(const real_2d_array &a, const ae_int_t n, const bool isupper, const xparams _xparams = alglib::xdefault);
double spdmatrixdet(const real_2d_array &a, const bool isupper, const xparams _xparams = alglib::xdefault);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "linalg.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        real_2d_array b = "[[1,2],[2,1]]";
        double a;
        a = rmatrixdet(b);
        printf("%.3f\n", double(a)); // EXPECTED: -3
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "linalg.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        real_2d_array b = "[[5,4],[4,5]]";
        double a;
        a = rmatrixdet(b, 2);
        printf("%.3f\n", double(a)); // EXPECTED: 9
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "linalg.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        complex_2d_array b = "[[1+1i,2],[2,1-1i]]";
        alglib::complex a;
        a = cmatrixdet(b);
        printf("%s\n", a.tostring(3).c_str()); // EXPECTED: -2
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "linalg.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        alglib::complex a;
        complex_2d_array b = "[[5i,4],[4i,5]]";
        a = cmatrixdet(b, 2);
        printf("%s\n", a.tostring(3).c_str()); // EXPECTED: 9i
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "linalg.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        alglib::complex a;
        complex_2d_array b = "[[9,1],[2,1]]";
        a = cmatrixdet(b);
        printf("%s\n", a.tostring(3).c_str()); // EXPECTED: 7
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

cmatrixrndcond
cmatrixrndorthogonal
cmatrixrndorthogonalfromtheleft
cmatrixrndorthogonalfromtheright
hmatrixrndcond
hmatrixrndmultiply
hpdmatrixrndcond
rmatrixrndcond
rmatrixrndorthogonal
rmatrixrndorthogonalfromtheleft
rmatrixrndorthogonalfromtheright
smatrixrndcond
smatrixrndmultiply
spdmatrixrndcond
/*************************************************************************
Generation of random NxN complex matrix with given condition number C and
norm2(A)=1

INPUT PARAMETERS:
    N   -   matrix size
    C   -   condition number (in 2-norm)

OUTPUT PARAMETERS:
    A   -   random matrix with norm2(A)=1 and cond(A)=C

  -- ALGLIB routine --
     04.12.2009
     Bochkanov Sergey
*************************************************************************/
void cmatrixrndcond(const ae_int_t n, const double c, complex_2d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Generation of a random Haar distributed orthogonal complex matrix

INPUT PARAMETERS:
    N   -   matrix size, N>=1

OUTPUT PARAMETERS:
    A   -   orthogonal NxN matrix, array[0..N-1,0..N-1]

NOTE: this function uses algorithm described in Stewart, G. W. (1980),
      "The Efficient Generation of Random Orthogonal Matrices with an
      Application to Condition Estimators".

      Speaking short, to generate an (N+1)x(N+1) orthogonal matrix, it:
      * takes an NxN one
      * takes uniformly distributed unit vector of dimension N+1.
      * constructs a Householder reflection from the vector, then applies
        it to the smaller matrix (embedded in the larger size with a 1 at
        the bottom right corner).

  -- ALGLIB routine --
     04.12.2009
     Bochkanov Sergey
*************************************************************************/
void cmatrixrndorthogonal(const ae_int_t n, complex_2d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Multiplication of MxN complex matrix by MxM random Haar distributed
complex orthogonal matrix

INPUT PARAMETERS:
    A   -   matrix, array[0..M-1, 0..N-1]
    M, N-   matrix size

OUTPUT PARAMETERS:
    A   -   Q*A, where Q is random MxM orthogonal matrix

  -- ALGLIB routine --
     04.12.2009
     Bochkanov Sergey
*************************************************************************/
void cmatrixrndorthogonalfromtheleft(complex_2d_array &a, const ae_int_t m, const ae_int_t n, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Multiplication of MxN complex matrix by NxN random Haar distributed
complex orthogonal matrix

INPUT PARAMETERS:
    A   -   matrix, array[0..M-1, 0..N-1]
    M, N-   matrix size

OUTPUT PARAMETERS:
    A   -   A*Q, where Q is random NxN orthogonal matrix

  -- ALGLIB routine --
     04.12.2009
     Bochkanov Sergey
*************************************************************************/
void cmatrixrndorthogonalfromtheright(complex_2d_array &a, const ae_int_t m, const ae_int_t n, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Generation of random NxN Hermitian matrix with given condition number and
norm2(A)=1

INPUT PARAMETERS:
    N   -   matrix size
    C   -   condition number (in 2-norm)

OUTPUT PARAMETERS:
    A   -   random matrix with norm2(A)=1 and cond(A)=C

  -- ALGLIB routine --
     04.12.2009
     Bochkanov Sergey
*************************************************************************/
void hmatrixrndcond(const ae_int_t n, const double c, complex_2d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Hermitian multiplication of NxN matrix by random Haar distributed complex
orthogonal matrix

INPUT PARAMETERS:
    A   -   matrix, array[0..N-1, 0..N-1]
    N   -   matrix size

OUTPUT PARAMETERS:
    A   -   Q^H*A*Q, where Q is random NxN orthogonal matrix

  -- ALGLIB routine --
     04.12.2009
     Bochkanov Sergey
*************************************************************************/
void hmatrixrndmultiply(complex_2d_array &a, const ae_int_t n, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Generation of random NxN Hermitian positive definite matrix with given
condition number and norm2(A)=1

INPUT PARAMETERS:
    N   -   matrix size
    C   -   condition number (in 2-norm)

OUTPUT PARAMETERS:
    A   -   random HPD matrix with norm2(A)=1 and cond(A)=C

  -- ALGLIB routine --
     04.12.2009
     Bochkanov Sergey
*************************************************************************/
void hpdmatrixrndcond(const ae_int_t n, const double c, complex_2d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Generation of random NxN matrix with given condition number and norm2(A)=1

INPUT PARAMETERS:
    N   -   matrix size
    C   -   condition number (in 2-norm)

OUTPUT PARAMETERS:
    A   -   random matrix with norm2(A)=1 and cond(A)=C

  -- ALGLIB routine --
     04.12.2009
     Bochkanov Sergey
*************************************************************************/
void rmatrixrndcond(const ae_int_t n, const double c, real_2d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Generation of a random uniformly distributed (Haar) orthogonal matrix

INPUT PARAMETERS:
    N   -   matrix size, N>=1

OUTPUT PARAMETERS:
    A   -   orthogonal NxN matrix, array[0..N-1,0..N-1]

NOTE: this function uses algorithm described in Stewart, G. W. (1980),
      "The Efficient Generation of Random Orthogonal Matrices with an
      Application to Condition Estimators".

      Speaking short, to generate an (N+1)x(N+1) orthogonal matrix, it:
      * takes an NxN one
      * takes uniformly distributed unit vector of dimension N+1.
      * constructs a Householder reflection from the vector, then applies
        it to the smaller matrix (embedded in the larger size with a 1 at
        the bottom right corner).

  -- ALGLIB routine --
     04.12.2009
     Bochkanov Sergey
*************************************************************************/
void rmatrixrndorthogonal(const ae_int_t n, real_2d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Multiplication of MxN matrix by MxM random Haar distributed orthogonal
matrix

INPUT PARAMETERS:
    A   -   matrix, array[0..M-1, 0..N-1]
    M, N-   matrix size

OUTPUT PARAMETERS:
    A   -   Q*A, where Q is random MxM orthogonal matrix

  -- ALGLIB routine --
     04.12.2009
     Bochkanov Sergey
*************************************************************************/
void rmatrixrndorthogonalfromtheleft(real_2d_array &a, const ae_int_t m, const ae_int_t n, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Multiplication of MxN matrix by NxN random Haar distributed orthogonal
matrix

INPUT PARAMETERS:
    A   -   matrix, array[0..M-1, 0..N-1]
    M, N-   matrix size

OUTPUT PARAMETERS:
    A   -   A*Q, where Q is random NxN orthogonal matrix

  -- ALGLIB routine --
     04.12.2009
     Bochkanov Sergey
*************************************************************************/
void rmatrixrndorthogonalfromtheright(real_2d_array &a, const ae_int_t m, const ae_int_t n, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Generation of random NxN symmetric matrix with given condition number and
norm2(A)=1

INPUT PARAMETERS:
    N   -   matrix size
    C   -   condition number (in 2-norm)

OUTPUT PARAMETERS:
    A   -   random matrix with norm2(A)=1 and cond(A)=C

  -- ALGLIB routine --
     04.12.2009
     Bochkanov Sergey
*************************************************************************/
void smatrixrndcond(const ae_int_t n, const double c, real_2d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Symmetric multiplication of NxN matrix by random Haar distributed
orthogonal matrix

INPUT PARAMETERS:
    A   -   matrix, array[0..N-1, 0..N-1]
    N   -   matrix size

OUTPUT PARAMETERS:
    A   -   Q'*A*Q, where Q is random NxN orthogonal matrix

  -- ALGLIB routine --
     04.12.2009
     Bochkanov Sergey
*************************************************************************/
void smatrixrndmultiply(real_2d_array &a, const ae_int_t n, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Generation of random NxN symmetric positive definite matrix with given
condition number and norm2(A)=1

INPUT PARAMETERS:
    N   -   matrix size
    C   -   condition number (in 2-norm)

OUTPUT PARAMETERS:
    A   -   random SPD matrix with norm2(A)=1 and cond(A)=C

  -- ALGLIB routine --
     04.12.2009
     Bochkanov Sergey
*************************************************************************/
void spdmatrixrndcond(const ae_int_t n, const double c, real_2d_array &a, const xparams _xparams = alglib::xdefault);
matinvreport
cmatrixinverse
cmatrixluinverse
cmatrixtrinverse
hpdmatrixcholeskyinverse
hpdmatrixinverse
rmatrixinverse
rmatrixluinverse
rmatrixtrinverse
spdmatrixcholeskyinverse
spdmatrixinverse
matinv_d_c1 Complex matrix inverse
matinv_d_hpd1 HPD matrix inverse
matinv_d_r1 Real matrix inverse
matinv_d_spd1 SPD matrix inverse
/*************************************************************************
Matrix inverse report:
* terminationtype   completion code:
                    * 1 for success
                    * -3 for failure due to the matrix being singular or
                      nearly-singular
* r1                reciprocal of condition number in 1-norm
* rinf              reciprocal of condition number in inf-norm
*************************************************************************/
class matinvreport
{
public:
    matinvreport();
    matinvreport(const matinvreport &rhs);
    matinvreport& operator=(const matinvreport &rhs);
    virtual ~matinvreport();
    ae_int_t terminationtype;
    double r1;
    double rinf;
};
/*************************************************************************
Inversion of a general matrix.

Input parameters:
    A       -   matrix
    N       -   size of the matrix A (optional):
                * if given, only principal NxN submatrix is processed and
                  overwritten. Trailing elements are unchanged.
                * if not given, the size is automatically determined from
                  the matrix size (A must be a square matrix)

Output parameters:
    A       -   inverse of matrix A, array[N,N]:
                * for rep.terminationtype>0, contains matrix inverse
                * for rep.terminationtype<0, zero-filled
    Rep     -   solver report:
                * rep.terminationtype>0 for success, <0 for failure
                * see below for more info

SOLVER REPORT

Subroutine sets following fields of the Rep structure:
* terminationtype   completion code:
                    * 1 for success
                    * -3 for a singular or extremely ill-conditioned
                      matrix
* r1                reciprocal of condition number: 1/cond(A), 1-norm.
* rinf              reciprocal of condition number: 1/cond(A), inf-norm.

! FREE EDITION OF ALGLIB:
!
! Free Edition of ALGLIB supports following important features for this
! function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual in order to find out how to activate SIMD support
! in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes following important improvements
! of this function:
! * high-performance native backend with same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend you to read 'Working with commercial version' section of
! ALGLIB Reference Manual in order to find out how to use performance-
! related features provided by commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 2005 by Bochkanov Sergey
*************************************************************************/
void cmatrixinverse(complex_2d_array &a, const ae_int_t n, matinvreport &rep, const xparams _xparams = alglib::xdefault);
void cmatrixinverse(complex_2d_array &a, matinvreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Inversion of a matrix given by its LU decomposition.

INPUT PARAMETERS:
    A       -   LU decomposition of the matrix (output of CMatrixLU
                subroutine).
    Pivots  -   table of permutations (the output of CMatrixLU
                subroutine).
    N       -   size of the matrix A (optional):
                * if given, only principal NxN submatrix is processed and
                  overwritten. Trailing elements are unchanged.
                * if not given, the size is automatically determined from
                  the matrix size (A must be a square matrix)

OUTPUT PARAMETERS:
    A       -   inverse of matrix A, array[N,N]:
                * for rep.terminationtype>0, contains matrix inverse
                * for rep.terminationtype<0, zero-filled
    Rep     -   solver report:
                * rep.terminationtype>0 for success, <0 for failure
                * see below for more info

SOLVER REPORT

Subroutine sets following fields of the Rep structure:
* terminationtype   completion code:
                    * 1 for success
                    * -3 for a singular or extremely ill-conditioned
                      matrix
* r1                reciprocal of condition number: 1/cond(A), 1-norm.
* rinf              reciprocal of condition number: 1/cond(A), inf-norm.

! FREE EDITION OF ALGLIB:
!
! Free Edition of ALGLIB supports following important features for this
! function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual in order to find out how to activate SIMD support
! in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes following important improvements
! of this function:
! * high-performance native backend with same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend you to read 'Working with commercial version' section of
! ALGLIB Reference Manual in order to find out how to use performance-
! related features provided by commercial edition of ALGLIB.

  -- ALGLIB routine --
     05.02.2010
     Bochkanov Sergey
*************************************************************************/
void cmatrixluinverse(complex_2d_array &a, const integer_1d_array &pivots, const ae_int_t n, matinvreport &rep, const xparams _xparams = alglib::xdefault);
void cmatrixluinverse(complex_2d_array &a, const integer_1d_array &pivots, matinvreport &rep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Triangular matrix inverse (complex)

The subroutine inverts the following types of matrices:
* upper triangular
* upper triangular with unit diagonal
* lower triangular
* lower triangular with unit diagonal

In case of an upper (lower) triangular matrix, the inverse matrix will
also be upper (lower) triangular, and after the end of the algorithm, the
inverse matrix replaces the source matrix. The elements below (above) the
main diagonal are not changed by the algorithm.

If the matrix has a unit diagonal, the inverse matrix also has a unit
diagonal, and the diagonal elements are not passed to the algorithm.

INPUT PARAMETERS:
    A       -   matrix, array[0..N-1, 0..N-1].
    N       -   size of the matrix A (optional):
                * if given, only principal NxN submatrix is processed and
                  overwritten. Trailing elements are unchanged.
                * if not given, the size is automatically determined from
                  the matrix size (A must be a square matrix)
    IsUpper -   True, if the matrix is upper triangular.
    IsUnit  -   diagonal type (optional):
                * if True, matrix has unit diagonal (a[i,i] are NOT used)
                * if False, matrix diagonal is arbitrary
                * if not given, False is assumed

OUTPUT PARAMETERS:
    A       -   inverse of matrix A, array[N,N]:
                * for rep.terminationtype>0, contains matrix inverse
                * for rep.terminationtype<0, zero-filled
    Rep     -   solver report:
                * rep.terminationtype>0 for success, <0 for failure
                * see below for more info

SOLVER REPORT

Subroutine sets following fields of the Rep structure:
* terminationtype   completion code:
                    * 1 for success
                    * -3 for a singular or extremely ill-conditioned
                      matrix
* r1                reciprocal of condition number: 1/cond(A), 1-norm.
* rinf              reciprocal of condition number: 1/cond(A), inf-norm.

! FREE EDITION OF ALGLIB:
!
! Free Edition of ALGLIB supports following important features for this
! function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual in order to find out how to activate SIMD support
! in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes following important improvements
! of this function:
! * high-performance native backend with same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend you to read 'Working with commercial version' section of
! ALGLIB Reference Manual in order to find out how to use performance-
! related features provided by commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 05.02.2010 by Bochkanov Sergey
*************************************************************************/
void cmatrixtrinverse(complex_2d_array &a, const ae_int_t n, const bool isupper, const bool isunit, matinvreport &rep, const xparams _xparams = alglib::xdefault);
void cmatrixtrinverse(complex_2d_array &a, const bool isupper, matinvreport &rep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Inversion of a Hermitian positive definite matrix which is given by
Cholesky decomposition.

Input parameters:
    A       -   Cholesky decomposition of the matrix to be inverted:
                A=U'*U or A = L*L'.
                Output of HPDMatrixCholesky subroutine.
    N       -   size of the matrix A (optional):
                * if given, only principal NxN submatrix is processed and
                  overwritten. Trailing elements are unchanged.
                * if not given, the size is automatically determined from
                  the matrix size (A must be a square matrix)
    IsUpper -   storage type:
                * if True, symmetric matrix A is given by its upper
                  triangle, and the lower triangle isn't used/changed by
                  function
                * if False, symmetric matrix A is given by its lower
                  triangle, and the upper triangle isn't used/changed by
                  function

OUTPUT PARAMETERS:
    A       -   inverse of matrix A, array[N,N]:
                * for rep.terminationtype>0, contains matrix inverse
                * for rep.terminationtype<0, zero-filled
    Rep     -   solver report:
                * rep.terminationtype>0 for success, <0 for failure
                * see below for more info

SOLVER REPORT

Subroutine sets following fields of the Rep structure:
* terminationtype   completion code:
                    * 1 for success
                    * -3 for a singular or extremely ill-conditioned
                      matrix
* r1                reciprocal of condition number: 1/cond(A), 1-norm.
* rinf              reciprocal of condition number: 1/cond(A), inf-norm.

! FREE EDITION OF ALGLIB:
!
! Free Edition of ALGLIB supports following important features for this
! function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual in order to find out how to activate SIMD support
! in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes following important improvements
! of this function:
! * high-performance native backend with same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend you to read 'Working with commercial version' section of
! ALGLIB Reference Manual in order to find out how to use performance-
! related features provided by commercial edition of ALGLIB.

  -- ALGLIB routine --
     10.02.2010
     Bochkanov Sergey
*************************************************************************/
void hpdmatrixcholeskyinverse(complex_2d_array &a, const ae_int_t n, const bool isupper, matinvreport &rep, const xparams _xparams = alglib::xdefault);
void hpdmatrixcholeskyinverse(complex_2d_array &a, const bool isupper, matinvreport &rep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Inversion of a Hermitian positive definite matrix.

Given an upper or lower triangle of a Hermitian positive definite matrix,
the algorithm generates matrix A^-1 and saves the upper or lower triangle
depending on the input.

INPUT PARAMETERS:
    A       -   matrix to be inverted (upper or lower triangle),
                array[N,N]
    N       -   size of the matrix A (optional):
                * if given, only principal NxN submatrix is processed and
                  overwritten. Trailing elements are unchanged.
                * if not given, the size is automatically determined from
                  the matrix size (A must be a square matrix)
    IsUpper -   storage type:
                * if True, symmetric matrix A is given by its upper
                  triangle, and the lower triangle isn't used/changed by
                  function
                * if False, symmetric matrix A is given by its lower
                  triangle, and the upper triangle isn't used/changed by
                  function

OUTPUT PARAMETERS:
    A       -   inverse of matrix A, array[N,N]:
                * for rep.terminationtype>0, contains matrix inverse
                * for rep.terminationtype<0, zero-filled
    Rep     -   solver report:
                * rep.terminationtype>0 for success, <0 for failure
                * see below for more info

SOLVER REPORT

Subroutine sets following fields of the Rep structure:
* terminationtype   completion code:
                    * 1 for success
                    * -3 for a singular or extremely ill-conditioned
                      matrix
* r1                reciprocal of condition number: 1/cond(A), 1-norm.
* rinf              reciprocal of condition number: 1/cond(A), inf-norm.

! FREE EDITION OF ALGLIB:
!
! Free Edition of ALGLIB supports following important features for this
! function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual in order to find out how to activate SIMD support
! in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes following important improvements
! of this function:
! * high-performance native backend with same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend you to read 'Working with commercial version' section of
! ALGLIB Reference Manual in order to find out how to use performance-
! related features provided by commercial edition of ALGLIB.

  -- ALGLIB routine --
     10.02.2010
     Bochkanov Sergey
*************************************************************************/
void hpdmatrixinverse(complex_2d_array &a, const ae_int_t n, const bool isupper, matinvreport &rep, const xparams _xparams = alglib::xdefault);
void hpdmatrixinverse(complex_2d_array &a, const bool isupper, matinvreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Inversion of a general matrix.

INPUT PARAMETERS:
    A       -   matrix.
    N       -   size of the matrix A (optional):
                * if given, only principal NxN submatrix is processed and
                  overwritten. Trailing elements are unchanged.
                * if not given, the size is automatically determined from
                  the matrix size (A must be a square matrix)

OUTPUT PARAMETERS:
    A       -   inverse of matrix A, array[N,N]:
                * for rep.terminationtype>0, contains matrix inverse
                * for rep.terminationtype<0, zero-filled
    Rep     -   solver report:
                * rep.terminationtype>0 for success, <0 for failure
                * see below for more info

SOLVER REPORT

Subroutine sets following fields of the Rep structure:
* terminationtype   completion code:
                    * 1 for success
                    * -3 for a singular or extremely ill-conditioned
                      matrix
* r1                reciprocal of condition number: 1/cond(A), 1-norm.
* rinf              reciprocal of condition number: 1/cond(A), inf-norm.

! FREE EDITION OF ALGLIB:
!
! Free Edition of ALGLIB supports following important features for this
! function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual in order to find out how to activate SIMD support
! in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes following important improvements
! of this function:
! * high-performance native backend with same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend you to read 'Working with commercial version' section of
! ALGLIB Reference Manual in order to find out how to use performance-
! related features provided by commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 2005-2010 by Bochkanov Sergey
*************************************************************************/
void rmatrixinverse(real_2d_array &a, const ae_int_t n, matinvreport &rep, const xparams _xparams = alglib::xdefault);
void rmatrixinverse(real_2d_array &a, matinvreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Inversion of a matrix given by its LU decomposition.

INPUT PARAMETERS:
    A       -   LU decomposition of the matrix (output of RMatrixLU
                subroutine).
    Pivots  -   table of permutations (the output of RMatrixLU
                subroutine).
    N       -   size of the matrix A (optional):
                * if given, only principal NxN submatrix is processed and
                  overwritten. Trailing elements are unchanged.
                * if not given, the size is automatically determined from
                  the matrix size (A must be a square matrix)

OUTPUT PARAMETERS:
    A       -   inverse of matrix A, array[N,N]:
                * for rep.terminationtype>0, contains matrix inverse
                * for rep.terminationtype<0, zero-filled
    Rep     -   solver report:
                * rep.terminationtype>0 for success, <0 for failure
                * see below for more info

SOLVER REPORT

Subroutine sets following fields of the Rep structure:
* terminationtype   completion code:
                    * 1 for success
                    * -3 for a singular or extremely ill-conditioned
                      matrix
* r1                reciprocal of condition number: 1/cond(A), 1-norm.
* rinf              reciprocal of condition number: 1/cond(A), inf-norm.

! FREE EDITION OF ALGLIB:
!
! Free Edition of ALGLIB supports following important features for this
! function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual in order to find out how to activate SIMD support
! in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes following important improvements
! of this function:
! * high-performance native backend with same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend you to read 'Working with commercial version' section of
! ALGLIB Reference Manual in order to find out how to use performance-
! related features provided by commercial edition of ALGLIB.

  -- ALGLIB routine --
     05.02.2010
     Bochkanov Sergey
*************************************************************************/
void rmatrixluinverse(real_2d_array &a, const integer_1d_array &pivots, const ae_int_t n, matinvreport &rep, const xparams _xparams = alglib::xdefault); void rmatrixluinverse(real_2d_array &a, const integer_1d_array &pivots, matinvreport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Triangular matrix inverse (real) The subroutine inverts the following types of matrices: * upper triangular * upper triangular with unit diagonal * lower triangular * lower triangular with unit diagonal In the case of an upper (lower) triangular matrix, the inverse matrix is also upper (lower) triangular; after the algorithm finishes, the inverse matrix replaces the source matrix. The elements below (above) the main diagonal are not changed by the algorithm. If the matrix has a unit diagonal, the inverse matrix also has a unit diagonal, and the diagonal elements are not passed to the algorithm. INPUT PARAMETERS: A - matrix, array[0..N-1, 0..N-1]. N - size of the matrix A (optional): * if given, only the principal NxN submatrix is processed and overwritten. Trailing elements are unchanged. * if not given, the size is automatically determined from the matrix size (A must be a square matrix) IsUpper - True, if the matrix is upper triangular. IsUnit - diagonal type (optional): * if True, the matrix has a unit diagonal (a[i,i] are NOT used) * if False, the matrix diagonal is arbitrary * if not given, False is assumed OUTPUT PARAMETERS: A - inverse of matrix A, array[N,N]: * for rep.terminationtype>0, contains the matrix inverse * for rep.terminationtype<0, zero-filled Rep - solver report: * rep.terminationtype>0 for success, <0 for failure * see below for more info SOLVER REPORT The subroutine sets the following fields of the Rep structure: * terminationtype completion code: * 1 for success * -3 for a singular or extremely ill-conditioned matrix * r1 reciprocal of the condition number: 1/cond(A), 1-norm. * rinf reciprocal of the condition number: 1/cond(A), inf-norm. ! FREE EDITION OF ALGLIB: ! ! The Free Edition of ALGLIB supports the following important features of this ! function: ! * C++ version: x64 SIMD support using C++ intrinsics ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics ! ! We recommend that you read the 'Compiling ALGLIB' section of the ALGLIB ! Reference Manual to find out how to activate SIMD support in ALGLIB. ! COMMERCIAL EDITION OF ALGLIB: ! ! The Commercial Edition of ALGLIB includes the following important improvements ! to this function: ! * high-performance native backend with the same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend that you read the 'Working with commercial version' section of ! the ALGLIB Reference Manual to find out how to use the performance-related features provided by the commercial edition of ALGLIB. -- ALGLIB -- Copyright 05.02.2010 by Bochkanov Sergey *************************************************************************/
void rmatrixtrinverse(real_2d_array &a, const ae_int_t n, const bool isupper, const bool isunit, matinvreport &rep, const xparams _xparams = alglib::xdefault); void rmatrixtrinverse(real_2d_array &a, const bool isupper, matinvreport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Inversion of a symmetric positive definite matrix which is given by its Cholesky decomposition. INPUT PARAMETERS: A - Cholesky decomposition of the matrix to be inverted: A=U'*U or A=L*L'. Output of the SPDMatrixCholesky subroutine. N - size of the matrix A (optional): * if given, only the principal NxN submatrix is processed and overwritten. Trailing elements are unchanged. * if not given, the size is automatically determined from the matrix size (A must be a square matrix) IsUpper - storage type: * if True, the symmetric matrix A is given by its upper triangle, and the lower triangle isn't used/changed by the function * if False, the symmetric matrix A is given by its lower triangle, and the upper triangle isn't used/changed by the function OUTPUT PARAMETERS: A - inverse of matrix A, array[N,N]: * for rep.terminationtype>0, the corresponding triangle contains the inverse matrix, the other triangle is not modified. * for rep.terminationtype<0, the corresponding triangle is zero-filled; the other triangle is not modified. Rep - solver report: * rep.terminationtype>0 for success, <0 for failure * see below for more info SOLVER REPORT The subroutine sets the following fields of the Rep structure: * terminationtype completion code: * 1 for success * -3 for a singular or extremely ill-conditioned matrix * r1 reciprocal of the condition number: 1/cond(A), 1-norm. * rinf reciprocal of the condition number: 1/cond(A), inf-norm. ! FREE EDITION OF ALGLIB: ! ! The Free Edition of ALGLIB supports the following important features of this ! function: ! * C++ version: x64 SIMD support using C++ intrinsics ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics ! ! We recommend that you read the 'Compiling ALGLIB' section of the ALGLIB ! Reference Manual to find out how to activate SIMD support in ALGLIB. ! COMMERCIAL EDITION OF ALGLIB: ! ! The Commercial Edition of ALGLIB includes the following important improvements ! to this function: ! * high-performance native backend with the same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend that you read the 'Working with commercial version' section of ! the ALGLIB Reference Manual to find out how to use the performance-related features provided by the commercial edition of ALGLIB. -- ALGLIB routine -- 10.02.2010 Bochkanov Sergey *************************************************************************/
void spdmatrixcholeskyinverse(real_2d_array &a, const ae_int_t n, const bool isupper, matinvreport &rep, const xparams _xparams = alglib::xdefault); void spdmatrixcholeskyinverse(real_2d_array &a, const bool isupper, matinvreport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Inversion of a symmetric positive definite matrix. Given an upper or lower triangle of a symmetric positive definite matrix, the algorithm generates the matrix A^-1 and saves the upper or lower triangle depending on the input. INPUT PARAMETERS: A - matrix to be inverted (upper or lower triangle), array[N,N] N - size of the matrix A (optional): * if given, only the principal NxN submatrix is processed and overwritten. Trailing elements are unchanged. * if not given, the size is automatically determined from the matrix size (A must be a square matrix) IsUpper - storage type: * if True, the symmetric matrix A is given by its upper triangle, and the lower triangle isn't used/changed by the function * if False, the symmetric matrix A is given by its lower triangle, and the upper triangle isn't used/changed by the function OUTPUT PARAMETERS: A - inverse of matrix A, array[N,N]: * for rep.terminationtype>0, contains the matrix inverse * for rep.terminationtype<0, zero-filled Rep - solver report: * rep.terminationtype>0 for success, <0 for failure * see below for more info SOLVER REPORT The subroutine sets the following fields of the Rep structure: * terminationtype completion code: * 1 for success * -3 for a singular or extremely ill-conditioned matrix * r1 reciprocal of the condition number: 1/cond(A), 1-norm. * rinf reciprocal of the condition number: 1/cond(A), inf-norm. ! FREE EDITION OF ALGLIB: ! ! The Free Edition of ALGLIB supports the following important features of this ! function: ! * C++ version: x64 SIMD support using C++ intrinsics ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics ! ! We recommend that you read the 'Compiling ALGLIB' section of the ALGLIB ! Reference Manual to find out how to activate SIMD support in ALGLIB. ! COMMERCIAL EDITION OF ALGLIB: ! ! The Commercial Edition of ALGLIB includes the following important improvements ! to this function: ! * high-performance native backend with the same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend that you read the 'Working with commercial version' section of ! the ALGLIB Reference Manual to find out how to use the performance-related features provided by the commercial edition of ALGLIB. -- ALGLIB routine -- 10.02.2010 Bochkanov Sergey *************************************************************************/
void spdmatrixinverse(real_2d_array &a, const ae_int_t n, const bool isupper, matinvreport &rep, const xparams _xparams = alglib::xdefault); void spdmatrixinverse(real_2d_array &a, const bool isupper, matinvreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "linalg.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        complex_2d_array a = "[[1i,-1],[1i,1]]";
        matinvreport rep;
        cmatrixinverse(a, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        printf("%s\n", a.tostring(4).c_str()); // EXPECTED: [[-0.5i,-0.5i],[-0.5,0.5]]
        printf("%.4f\n", double(rep.r1)); // EXPECTED: 0.5
        printf("%.4f\n", double(rep.rinf)); // EXPECTED: 0.5
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "linalg.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        complex_2d_array a = "[[2,1],[1,2]]";
        matinvreport rep;

        //
        // The matrix is given by its upper and lower triangles
        //
        //     [ 2 1 ]
        //     [ 1 2 ]
        //
        // However, hpdmatrixinverse() accepts and modifies only one triangle - either
        // the upper or the lower one. The other triangle is left untouched. In this example
        // we modify the lower triangle. Thus, the inverse matrix is
        //
        //     [  2/3 -1/3 ]
        //     [ -1/3  2/3 ]
        //
        // but only lower triangle is returned, and the upper triangle is not modified:
        //
        //     [  2/3   1  ]
        //     [ -1/3  2/3 ]
        //
        //
        bool isupper = false;
        hpdmatrixinverse(a, isupper, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        printf("%s\n", a.tostring(4).c_str()); // EXPECTED: [[0.666666,1],[-0.333333,0.666666]]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "linalg.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        real_2d_array a = "[[1,-1],[1,1]]";
        matinvreport rep;
        rmatrixinverse(a, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        printf("%s\n", a.tostring(4).c_str()); // EXPECTED: [[0.5,0.5],[-0.5,0.5]]
        printf("%.4f\n", double(rep.r1)); // EXPECTED: 0.5
        printf("%.4f\n", double(rep.rinf)); // EXPECTED: 0.5
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "linalg.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        real_2d_array a = "[[2,1],[1,2]]";
        matinvreport rep;

        //
        // The matrix is given by its upper and lower triangles
        //
        //     [ 2 1 ]
        //     [ 1 2 ]
        //
        // However, spdmatrixinverse() accepts and modifies only one triangle - either
        // the upper or the lower one. The other triangle is left untouched. In this example
        // we modify the lower triangle. Thus, the inverse matrix is
        //
        //     [  2/3 -1/3 ]
        //     [ -1/3  2/3 ]
        //
        // but only lower triangle is returned, and the upper triangle is not modified:
        //
        //     [  2/3   1  ]
        //     [ -1/3  2/3 ]
        //
        //
        bool isupper = false;
        spdmatrixinverse(a, isupper, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        printf("%s\n", a.tostring(4).c_str()); // EXPECTED: [[0.666666,1],[-0.333333,0.666666]]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

mcpdreport
mcpdstate
mcpdaddbc
mcpdaddec
mcpdaddtrack
mcpdcreate
mcpdcreateentry
mcpdcreateentryexit
mcpdcreateexit
mcpdresults
mcpdsetbc
mcpdsetec
mcpdsetlc
mcpdsetpredictionweights
mcpdsetprior
mcpdsettikhonovregularizer
mcpdsolve
mcpd_simple1 Simple unconstrained MCPD model (no entry/exit states)
mcpd_simple2 Simple MCPD model (no entry/exit states) with equality constraints
/************************************************************************* This structure is an MCPD training report: InnerIterationsCount - number of inner iterations of the underlying optimization algorithm OuterIterationsCount - number of outer iterations of the underlying optimization algorithm NFEV - number of merit function evaluations TerminationType - termination type (same as for the MinBLEIC optimizer; positive values denote success, negative ones - failure) -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
class mcpdreport { public: mcpdreport(); mcpdreport(const mcpdreport &rhs); mcpdreport& operator=(const mcpdreport &rhs); virtual ~mcpdreport(); ae_int_t inneriterationscount; ae_int_t outeriterationscount; ae_int_t nfev; ae_int_t terminationtype; };
/************************************************************************* This structure is an MCPD (Markov Chains for Population Data) solver. You should use ALGLIB functions to work with this object. -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
class mcpdstate { public: mcpdstate(); mcpdstate(const mcpdstate &rhs); mcpdstate& operator=(const mcpdstate &rhs); virtual ~mcpdstate(); };
/************************************************************************* This function is used to add bound constraints on the elements of the transition matrix P. The MCPD solver has four types of constraints which can be placed on P: * user-specified equality constraints (optional) * user-specified bound constraints (optional) * user-specified general linear constraints (optional) * basic constraints (always present): * non-negativity: P[i,j]>=0 * consistency: every column of P sums to 1.0 The final constraints which are passed to the underlying optimizer are calculated as the intersection of all present constraints. For example, you may specify a bound constraint on P[0,0] and an equality one: 0.1<=P[0,0]<=0.9 P[0,0]=0.5 Such a combination of constraints will be silently reduced to their intersection, which is P[0,0]=0.5. This function can be used to ADD a bound constraint for one element of P without changing the constraints for other elements. You can also use the MCPDSetBC() function, which allows you to place bound constraints on an arbitrary subset of elements of P. The set of constraints is specified by the BndL/BndU matrices, which may contain an arbitrary combination of finite numbers or infinities (like -INF<x<=0.5 or 0.1<=x<+INF). These functions (MCPDSetBC and MCPDAddBC) interact as follows: * there is an internal matrix of bound constraints which is stored in the MCPD solver * MCPDSetBC() replaces this matrix by another one (SET) * MCPDAddBC() modifies one element of this matrix and leaves other ones unchanged (ADD) * thus an MCPDAddBC() call preserves all modifications done by previous calls, while MCPDSetBC() completely discards all changes done to the bound constraints. INPUT PARAMETERS: S - solver I - row index of the element being constrained J - column index of the element being constrained BndL - lower bound BndU - upper bound -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
void mcpdaddbc(mcpdstate &s, const ae_int_t i, const ae_int_t j, const double bndl, const double bndu, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function is used to add equality constraints on the elements of the transition matrix P. The MCPD solver has four types of constraints which can be placed on P: * user-specified equality constraints (optional) * user-specified bound constraints (optional) * user-specified general linear constraints (optional) * basic constraints (always present): * non-negativity: P[i,j]>=0 * consistency: every column of P sums to 1.0 The final constraints which are passed to the underlying optimizer are calculated as the intersection of all present constraints. For example, you may specify a bound constraint on P[0,0] and an equality one: 0.1<=P[0,0]<=0.9 P[0,0]=0.5 Such a combination of constraints will be silently reduced to their intersection, which is P[0,0]=0.5. This function can be used to ADD an equality constraint for one element of P without changing the constraints for other elements. You can also use the MCPDSetEC() function, which allows you to specify an arbitrary set of equality constraints in one call. These functions (MCPDSetEC and MCPDAddEC) interact as follows: * there is an internal matrix of equality constraints which is stored in the MCPD solver * MCPDSetEC() replaces this matrix by another one (SET) * MCPDAddEC() modifies one element of this matrix and leaves other ones unchanged (ADD) * thus an MCPDAddEC() call preserves all modifications done by previous calls, while MCPDSetEC() completely discards all changes done to the equality constraints. INPUT PARAMETERS: S - solver I - row index of the element being constrained J - column index of the element being constrained C - value (constraint for P[I,J]). Can be either NAN (no constraint) or a finite value from [0,1]. NOTES: 1. Infinite values of C will lead to an exception being thrown. Values less than 0.0 or greater than 1.0 will lead to an error code being returned after a call to MCPDSolve(). -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
void mcpdaddec(mcpdstate &s, const ae_int_t i, const ae_int_t j, const double c, const xparams _xparams = alglib::xdefault);


/************************************************************************* This function is used to add a track - a sequence of system states at different moments of its evolution. You may add one or several tracks to the MCPD solver. In case you have several tracks, they won't overwrite each other. For example, if you pass two tracks, A1-A2-A3 (system at t=A+1, t=A+2 and t=A+3) and B1-B2-B3, then the solver will try to model transitions from t=A+1 to t=A+2, t=A+2 to t=A+3, t=B+1 to t=B+2, t=B+2 to t=B+3. But it WON'T mix these two tracks - i.e. it won't try to model the transition from t=A+3 to t=B+1. INPUT PARAMETERS: S - solver XY - track, array[K,N]: * the I-th row is a state at t=I * elements of XY must be non-negative (an exception will be thrown on negative elements) K - number of points in the track * if given, only the leading K rows of XY are used * if not given, automatically determined from the size of XY NOTES: 1. A track may contain either proportional or population data: * with proportional data all rows of XY must sum to 1.0, i.e. we have proportions instead of absolute population values * with population data rows of XY contain population counts and generally do not sum to 1.0 (although they still must be non-negative) -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
void mcpdaddtrack(mcpdstate &s, const real_2d_array &xy, const ae_int_t k, const xparams _xparams = alglib::xdefault); void mcpdaddtrack(mcpdstate &s, const real_2d_array &xy, const xparams _xparams = alglib::xdefault);


/************************************************************************* DESCRIPTION: This function creates an MCPD (Markov Chains for Population Data) solver. This solver can be used to find the transition matrix P for an N-dimensional prediction problem where the transition from X[i] to X[i+1] is modelled as X[i+1] = P*X[i] where X[i] and X[i+1] are N-dimensional population vectors (components of each X are non-negative), and P is an N*N transition matrix (elements of P are non-negative, each column sums to 1.0). Such models arise when: * there is some population of individuals * individuals can have different states * individuals can transit from one state to another * population size is constant, i.e. there are no new individuals and no one leaves the population * you want to model transitions of individuals from one state into another USAGE: Here we give a very brief outline of the MCPD. We strongly recommend that you read the examples in the ALGLIB Reference Manual and the ALGLIB User Guide on data analysis, which is available at http://www.alglib.net/dataanalysis/ 1. User initializes the algorithm state with an MCPDCreate() call 2. User adds one or more tracks - sequences of states which describe the evolution of the system being modelled from different starting conditions 3. User may add optional boundary, equality and/or linear constraints on the coefficients of P by calling one of the following functions: * MCPDSetEC() to set equality constraints * MCPDSetBC() to set bound constraints * MCPDSetLC() to set linear constraints 4. Optionally, user may set custom weights for prediction errors (by default, the algorithm assigns non-equal, automatically chosen weights for errors in the prediction of different components of X). It can be done with a call of the MCPDSetPredictionWeights() function. 5. User calls the MCPDSolve() function which takes the algorithm state and a pointer (delegate, etc.) to a callback function which calculates F/G. 6. User calls MCPDResults() to get the solution INPUT PARAMETERS: N - problem dimension, N>=1 OUTPUT PARAMETERS: State - structure which stores the algorithm state -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
void mcpdcreate(const ae_int_t n, mcpdstate &s, const xparams _xparams = alglib::xdefault);


/************************************************************************* DESCRIPTION: This function is a specialized version of the MCPDCreate() function, and we recommend that you read the comments for that function for general information about the MCPD solver. This function creates an MCPD (Markov Chains for Population Data) solver for the "Entry-state" model, i.e. a model where the transition from X[i] to X[i+1] is modelled as X[i+1] = P*X[i] where X[i] and X[i+1] are N-dimensional state vectors, P is an N*N transition matrix, and one selected component of X[] is called the "entry" state and is treated in a special way: the system state always transits from the "entry" state to some other state; the system state cannot transit from any state into the "entry" state. Such conditions basically mean that the row of P which corresponds to the "entry" state is zero. Such models arise when: * there is some population of individuals * individuals can have different states * individuals can transit from one state to another * population size is NOT constant - at every moment of time there is some (unpredictable) amount of "new" individuals, which can transit into one of the states at the next turn, but still no one leaves the population * you want to model transitions of individuals from one state into another * but you do NOT want to predict the amount of "new" individuals because it does not depend on individuals already present (hence the system cannot transit INTO the entry state - it can only transit FROM it). This model is discussed in more detail in the ALGLIB User Guide (see http://www.alglib.net/dataanalysis/ for more data). INPUT PARAMETERS: N - problem dimension, N>=2 EntryState- index of the entry state, in 0..N-1 OUTPUT PARAMETERS: State - structure which stores the algorithm state -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
void mcpdcreateentry(const ae_int_t n, const ae_int_t entrystate, mcpdstate &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* DESCRIPTION: This function is a specialized version of the MCPDCreate() function, and we recommend that you read the comments for that function for general information about the MCPD solver. This function creates an MCPD (Markov Chains for Population Data) solver for the "Entry-Exit-states" model, i.e. a model where the transition from X[i] to X[i+1] is modelled as X[i+1] = P*X[i] where X[i] and X[i+1] are N-dimensional state vectors, P is an N*N transition matrix, one selected component of X[] is called the "entry" state and is treated in a special way: the system state always transits from the "entry" state to some other state; the system state cannot transit from any state into the "entry" state; and another component of X[] is called the "exit" state and is treated in a special way too: the system state can transit from any state into the "exit" state; the system state cannot transit from the "exit" state into any other state; the transition operator discards the "exit" state (makes it zero at each turn). Such conditions basically mean that: the row of P which corresponds to the "entry" state is zero; the column of P which corresponds to the "exit" state is zero. Multiplication by such a P may decrease the sum of vector components. Such models arise when: * there is some population of individuals * individuals can have different states * individuals can transit from one state to another * population size is NOT constant * at every moment of time there is some (unpredictable) amount of "new" individuals, which can transit into one of the states at the next turn * some individuals can move (predictably) into the "exit" state and leave the population at the next turn * you want to model transitions of individuals from one state into another, including transitions from the "entry" state and into the "exit" state * but you do NOT want to predict the amount of "new" individuals because it does not depend on individuals already present (hence the system cannot transit INTO the entry state - it can only transit FROM it). This model is discussed in more detail in the ALGLIB User Guide (see http://www.alglib.net/dataanalysis/ for more data). INPUT PARAMETERS: N - problem dimension, N>=2 EntryState- index of the entry state, in 0..N-1 ExitState- index of the exit state, in 0..N-1 OUTPUT PARAMETERS: State - structure which stores the algorithm state -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
void mcpdcreateentryexit(const ae_int_t n, const ae_int_t entrystate, const ae_int_t exitstate, mcpdstate &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* DESCRIPTION: This function is a specialized version of the MCPDCreate() function, and we recommend that you read the comments for that function for general information about the MCPD solver. This function creates an MCPD (Markov Chains for Population Data) solver for the "Exit-state" model, i.e. a model where the transition from X[i] to X[i+1] is modelled as X[i+1] = P*X[i] where X[i] and X[i+1] are N-dimensional state vectors, P is an N*N transition matrix, and one selected component of X[] is called the "exit" state and is treated in a special way: the system state can transit from any state into the "exit" state; the system state cannot transit from the "exit" state into any other state; the transition operator discards the "exit" state (makes it zero at each turn). Such conditions basically mean that the column of P which corresponds to the "exit" state is zero. Multiplication by such a P may decrease the sum of vector components. Such models arise when: * there is some population of individuals * individuals can have different states * individuals can transit from one state to another * population size is NOT constant - individuals can move into the "exit" state and leave the population at the next turn, but there are no new individuals * the amount of individuals which leave the population can be predicted * you want to model transitions of individuals from one state into another (including transitions into the "exit" state) This model is discussed in more detail in the ALGLIB User Guide (see http://www.alglib.net/dataanalysis/ for more data). INPUT PARAMETERS: N - problem dimension, N>=2 ExitState- index of the exit state, in 0..N-1 OUTPUT PARAMETERS: State - structure which stores the algorithm state -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
void mcpdcreateexit(const ae_int_t n, const ae_int_t exitstate, mcpdstate &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* MCPD results INPUT PARAMETERS: State - algorithm state OUTPUT PARAMETERS: P - array[N,N], transition matrix Rep - optimization report. You should check Rep.TerminationType in order to distinguish successful termination from an unsuccessful one. In short, positive values denote success, negative ones denote failure. More information about the fields of this structure can be found in the comments on the MCPDReport datatype. -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
void mcpdresults(const mcpdstate &s, real_2d_array &p, mcpdreport &rep, const xparams _xparams = alglib::xdefault);


/************************************************************************* This function is used to set bound constraints on the elements of the transition matrix P. The MCPD solver has four types of constraints which can be placed on P: * user-specified equality constraints (optional) * user-specified bound constraints (optional) * user-specified general linear constraints (optional) * basic constraints (always present): * non-negativity: P[i,j]>=0 * consistency: every column of P sums to 1.0 The final constraints which are passed to the underlying optimizer are calculated as the intersection of all present constraints. For example, you may specify a bound constraint on P[0,0] and an equality one: 0.1<=P[0,0]<=0.9 P[0,0]=0.5 Such a combination of constraints will be silently reduced to their intersection, which is P[0,0]=0.5. This function can be used to place bound constraints on an arbitrary subset of elements of P. The set of constraints is specified by the BndL/BndU matrices, which may contain an arbitrary combination of finite numbers or infinities (like -INF<x<=0.5 or 0.1<=x<+INF). You can also use the MCPDAddBC() function, which allows you to ADD a bound constraint for one element of P without changing the constraints for other elements. These functions (MCPDSetBC and MCPDAddBC) interact as follows: * there is an internal matrix of bound constraints which is stored in the MCPD solver * MCPDSetBC() replaces this matrix by another one (SET) * MCPDAddBC() modifies one element of this matrix and leaves other ones unchanged (ADD) * thus an MCPDAddBC() call preserves all modifications done by previous calls, while MCPDSetBC() completely discards all changes done to the bound constraints. INPUT PARAMETERS: S - solver BndL - lower bound constraints, array[N,N]. Elements of BndL can be finite numbers or -INF. BndU - upper bound constraints, array[N,N]. Elements of BndU can be finite numbers or +INF. -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
void mcpdsetbc(mcpdstate &s, const real_2d_array &bndl, const real_2d_array &bndu, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function is used to add equality constraints on the elements of the transition matrix P. MCPD solver has four types of constraints which can be placed on P: * user-specified equality constraints (optional) * user-specified bound constraints (optional) * user-specified general linear constraints (optional) * basic constraints (always present): * non-negativity: P[i,j]>=0 * consistency: every column of P sums to 1.0 Final constraints which are passed to the underlying optimizer are calculated as intersection of all present constraints. For example, you may specify boundary constraint on P[0,0] and equality one: 0.1<=P[0,0]<=0.9 P[0,0]=0.5 Such combination of constraints will be silently reduced to their intersection, which is P[0,0]=0.5. This function can be used to place equality constraints on arbitrary subset of elements of P. Set of constraints is specified by EC, which may contain either NAN's or finite numbers from [0,1]. NAN denotes absence of constraint, finite number denotes equality constraint on specific element of P. You can also use MCPDAddEC() function which allows to ADD equality constraint for one element of P without changing constraints for other elements. These functions (MCPDSetEC and MCPDAddEC) interact as follows: * there is internal matrix of equality constraints which is stored in the MCPD solver * MCPDSetEC() replaces this matrix by another one (SET) * MCPDAddEC() modifies one element of this matrix and leaves other ones unchanged (ADD) * thus MCPDAddEC() call preserves all modifications done by previous calls, while MCPDSetEC() completely discards all changes done to the equality constraints. INPUT PARAMETERS: S - solver EC - equality constraints, array[N,N]. Elements of EC can be either NAN's or finite numbers from [0,1]. NAN denotes absence of constraints, while finite value denotes equality constraint on the corresponding element of P. NOTES: 1. 
infinite values of EC will lead to exception being thrown. Values less than 0.0 or greater than 1.0 will lead to error code being returned after call to MCPDSolve(). -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
void mcpdsetec(mcpdstate &s, const real_2d_array &ec, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function is used to set linear equality/inequality constraints on the
elements of the transition matrix P.

This function can be used to set one or several general linear constraints
on the elements of P. Two types of constraints are supported:
* equality constraints
* inequality constraints (both less-or-equal and greater-or-equal)

Coefficients of constraints are specified by matrix C (one of the
parameters). One row of C corresponds to one constraint. Because
transition matrix P has N*N elements, we need N*N columns to store all
coefficients (they are stored row by row), and one more column to store
right part - hence C has N*N+1 columns. Constraint kind is stored in the
CT array.

Thus, I-th linear constraint is
    P[0,0]*C[I,0] + P[0,1]*C[I,1] + ... + P[0,N-1]*C[I,N-1] +
    + P[1,0]*C[I,N] + P[1,1]*C[I,N+1] + ... +
    + P[N-1,N-1]*C[I,N*N-1]   ?=?   C[I,N*N]
where ?=? can be either "=" (CT[i]=0), "<=" (CT[i]<0) or ">=" (CT[i]>0).

Your constraint may involve only some subset of P (less than N*N
elements). For example it can be something like
    P[0,0] + P[0,1] = 0.5
In this case you still should pass matrix with N*N+1 columns, but all its
elements (except for C[0,0], C[0,1] and the right part C[0,N*N]) will be
zero.

INPUT PARAMETERS:
    S       -   solver
    C       -   array[K,N*N+1] - coefficients of constraints (see above
                for complete description)
    CT      -   array[K] - constraint types (see above for complete
                description)
    K       -   number of equality/inequality constraints, K>=0:
                * if given, only leading K elements of C/CT are used
                * if not given, automatically determined from sizes of
                  C/CT

  -- ALGLIB --
     Copyright 23.05.2010 by Bochkanov Sergey
*************************************************************************/
void mcpdsetlc(mcpdstate &s, const real_2d_array &c, const integer_1d_array &ct, const ae_int_t k, const xparams _xparams = alglib::xdefault); void mcpdsetlc(mcpdstate &s, const real_2d_array &c, const integer_1d_array &ct, const xparams _xparams = alglib::xdefault);
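The row-by-row layout described above (coefficient of P[i,j] stored in column i*N+j, right part in column N*N) can be sketched in plain C++, independently of ALGLIB. Note that `coeff_col` and `example_row` are hypothetical helper names used for illustration, not library functions:

```cpp
#include <cassert>
#include <vector>

// Column index of the coefficient of P[i,j] within one constraint row,
// following the row-by-row layout described above (N*N+1 columns total,
// last column holds the right part).
int coeff_col(int i, int j, int n) { return i * n + j; }

// Build one constraint row encoding "P[0,0] + P[0,1] = 0.5" for given N:
// only C[0,0], C[0,1] and the right part C[0,N*N] are nonzero.
std::vector<double> example_row(int n) {
    std::vector<double> row(n * n + 1, 0.0);
    row[coeff_col(0, 0, n)] = 1.0;
    row[coeff_col(0, 1, n)] = 1.0;
    row[n * n] = 0.5;  // right part
    return row;
}
```

For N=3 the row has 10 columns; the coefficient of P[1,2] would go into column 1*3+2=5.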
/************************************************************************* This function is used to change prediction weights MCPD solver scales prediction errors as follows Error(P) = ||W*(y-P*x)||^2 where x is a system state at time t y is a system state at time t+1 P is a transition matrix W is a diagonal scaling matrix By default, weights are chosen in order to minimize relative prediction error instead of absolute one. For example, if one component of state is about 0.5 in magnitude and another one is about 0.05, then algorithm will make corresponding weights equal to 2.0 and 20.0. INPUT PARAMETERS: S - solver PW - array[N], weights: * must be non-negative values (exception will be thrown otherwise) * zero values will be replaced by automatically chosen values -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
void mcpdsetpredictionweights(mcpdstate &s, const real_1d_array &pw, const xparams _xparams = alglib::xdefault);
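The error formula above can be spelled out in a few lines of plain C++; `prediction_error` is a hypothetical stand-alone helper, not an ALGLIB function, but it shows exactly how the diagonal weight matrix W enters the residual:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Weighted squared prediction error ||W*(y - P*x)||^2 for an N-state
// model: P is stored row-major, W is a diagonal matrix given by the
// vector of weights w.
double prediction_error(const std::vector<double>& P,
                        const std::vector<double>& w,
                        const std::vector<double>& x,   // state at time t
                        const std::vector<double>& y) { // state at time t+1
    const std::size_t n = w.size();
    double err = 0.0;
    for (std::size_t i = 0; i < n; i++) {
        double pred = 0.0;
        for (std::size_t j = 0; j < n; j++)
            pred += P[i * n + j] * x[j];            // i-th component of P*x
        const double r = w[i] * (y[i] - pred);      // weighted residual
        err += r * r;
    }
    return err;
}
```

With all weights equal to 1 this reduces to the plain (unweighted) squared prediction error.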
/************************************************************************* This function allows to set prior values used for regularization of your problem. By default, regularizing term is equal to r*||P-prior_P||^2, where r is a small non-zero value, P is transition matrix, prior_P is identity matrix, ||X||^2 is a sum of squared elements of X. This function allows you to change prior values prior_P. You can also change r with MCPDSetTikhonovRegularizer() function. INPUT PARAMETERS: S - solver PP - array[N,N], matrix of prior values: 1. elements must be real numbers from [0,1] 2. columns must sum to 1.0. First property is checked (exception is thrown otherwise), while second one is not checked/enforced. -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
void mcpdsetprior(mcpdstate &s, const real_2d_array &pp, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function allows to tune amount of Tikhonov regularization being applied to your problem. By default, regularizing term is equal to r*||P-prior_P||^2, where r is a small non-zero value, P is transition matrix, prior_P is identity matrix, ||X||^2 is a sum of squared elements of X. This function allows you to change coefficient r. You can also change prior values with MCPDSetPrior() function. INPUT PARAMETERS: S - solver V - regularization coefficient, finite non-negative value. It is not recommended to specify zero value unless you are pretty sure that you want it. -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
void mcpdsettikhonovregularizer(mcpdstate &s, const double v, const xparams _xparams = alglib::xdefault);
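The regularizing term itself is simple enough to write out directly; `tikhonov_term` below is a hypothetical stand-alone helper that computes r*||P-prior_P||^2 for row-major matrices of equal size:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Tikhonov regularization term r*||P - prior_P||^2, where ||X||^2 is the
// sum of squared elements; both matrices are stored row-major and have
// the same size.
double tikhonov_term(double r,
                     const std::vector<double>& P,
                     const std::vector<double>& priorP) {
    double s = 0.0;
    for (std::size_t k = 0; k < P.size(); k++) {
        const double d = P[k] - priorP[k];
        s += d * d;
    }
    return r * s;
}
```

For example, with the default identity prior, a 2x2 matrix P=[[0.9,0.1],[0.1,0.9]] and r=0.5 gives 0.5*(4*0.01)=0.02.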
/************************************************************************* This function is used to start solution of the MCPD problem. After return from this function, you can use MCPDResults() to get solution and completion code. -- ALGLIB -- Copyright 23.05.2010 by Bochkanov Sergey *************************************************************************/
void mcpdsolve(mcpdstate &s, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // The very simple MCPD example
        //
        // We have a loan portfolio. Our loans can be in one of two states:
        // * normal loans ("good" ones)
        // * past due loans ("bad" ones)
        //
        // We assume that:
        // * loans can transition from any state to any other state. In 
        //   particular, past due loan can become "good" one at any moment 
        //   with same (fixed) probability. Not realistic, but it is toy example :)
        // * portfolio size does not change over time
        //
        // Thus, we have following model
        //     state_new = P*state_old
        // where
        //         ( p00  p01 )
        //     P = (          )
        //         ( p10  p11 )
        //
        // We want to model transitions between these two states using MCPD
        // approach (Markov Chains for Proportional/Population Data), i.e.
        // to restore hidden transition matrix P using actual portfolio data.
        // We have:
        // * proportional data, i.e. proportion of loans in the normal and past 
        //   due states (not portfolio size measured in some currency, although 
        //   it is possible to work with population data too)
        // * two tracks, i.e. two sequences which describe portfolio
        //   evolution from two different starting states: [1,0] (all loans 
        //   are "good") and [0.8,0.2] (only 80% of portfolio is in the "good"
        //   state)
        //
        mcpdstate s;
        mcpdreport rep;
        real_2d_array p;
        real_2d_array track0 = "[[1.00000,0.00000],[0.95000,0.05000],[0.92750,0.07250],[0.91738,0.08263],[0.91282,0.08718]]";
        real_2d_array track1 = "[[0.80000,0.20000],[0.86000,0.14000],[0.88700,0.11300],[0.89915,0.10085]]";

        mcpdcreate(2, s);
        mcpdaddtrack(s, track0);
        mcpdaddtrack(s, track1);
        mcpdsolve(s);
        mcpdresults(s, p, rep);

        //
        // Hidden matrix P is equal to
        //         ( 0.95  0.50 )
        //         (            )
        //         ( 0.05  0.50 )
        // which means that "good" loans can become "bad" with 5% probability, 
        // while "bad" loans will return to good state with 50% probability.
        //
        printf("%s\n", p.tostring(2).c_str()); // EXPECTED: [[0.95,0.50],[0.05,0.50]]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}
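A quick way to sanity-check the recovered matrix is to apply the model equation state_new = P*state_old by hand. The helper below (`apply_transition`, a hypothetical name, independent of ALGLIB) does this for the 2-state case with P stored row-major:

```cpp
#include <cassert>
#include <vector>

// Apply one step of the model state_new = P*state_old for a 2-state
// chain, with P stored row-major as {p00, p01, p10, p11}.
std::vector<double> apply_transition(const std::vector<double>& P,
                                     const std::vector<double>& s) {
    return { P[0] * s[0] + P[1] * s[1],
             P[2] * s[0] + P[3] * s[1] };
}
```

Starting from the first point of track0, [1,0], the recovered P=[[0.95,0.50],[0.05,0.50]] reproduces the second point of the track, [0.95,0.05].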

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // Simple MCPD example
        //
        // We have a loan portfolio. Our loans can be in one of three states:
        // * normal loans
        // * past due loans
        // * charged off loans
        //
        // We assume that:
        // * normal loan can stay normal or become past due (but not charged off)
        // * past due loan can stay past due, become normal or charged off
        // * charged off loan will stay charged off for the rest of eternity
        // * portfolio size does not change over time
        // Not realistic, but it is toy example :)
        //
        // Thus, we have following model
        //     state_new = P*state_old
        // where
        //         ( p00  p01    )
        //     P = ( p10  p11    )
        //         (      p21  1 )
        // i.e. four elements of P are known a priori.
        //
        // In order to enforce this property we set equality constraints
        // on these elements.
        //
        // We want to model transitions between these three states using MCPD
        // approach (Markov Chains for Proportional/Population Data), i.e.
        // to restore hidden transition matrix P using actual portfolio data.
        // We have:
        // * proportional data, i.e. proportion of loans in the normal and past 
        //   due states (not portfolio size measured in some currency, although 
        //   it is possible to work with population data too)
        // * two tracks, i.e. two sequences which describe portfolio
        //   evolution from two different starting states: [1,0,0] (all loans 
        //   are "good") and [0.8,0.2,0.0] (only 80% of portfolio is in the "good"
        //   state)
        //
        mcpdstate s;
        mcpdreport rep;
        real_2d_array p;
        real_2d_array track0 = "[[1.000000,0.000000,0.000000],[0.950000,0.050000,0.000000],[0.927500,0.060000,0.012500],[0.911125,0.061375,0.027500],[0.896256,0.060900,0.042844]]";
        real_2d_array track1 = "[[0.800000,0.200000,0.000000],[0.860000,0.090000,0.050000],[0.862000,0.065500,0.072500],[0.851650,0.059475,0.088875],[0.838805,0.057451,0.103744]]";

        mcpdcreate(3, s);
        mcpdaddtrack(s, track0);
        mcpdaddtrack(s, track1);
        mcpdaddec(s, 0, 2, 0.0);
        mcpdaddec(s, 1, 2, 0.0);
        mcpdaddec(s, 2, 2, 1.0);
        mcpdaddec(s, 2, 0, 0.0);
        mcpdsolve(s);
        mcpdresults(s, p, rep);

        //
        // Hidden matrix P is equal to
        //         ( 0.95 0.50      )
        //         ( 0.05 0.25      )
        //         (      0.25 1.00 ) 
        // which means that "good" loans can become past due with 5% probability, 
        // while past due loans will become charged off with 25% probability or
        // return back to normal state with 50% probability.
        //
        printf("%s\n", p.tostring(2).c_str()); // EXPECTED: [[0.95,0.50,0.00],[0.05,0.25,0.00],[0.00,0.25,1.00]]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

minbcreport
minbcstate
minbccreate
minbccreatef
minbciteration
minbcoptguardgradient
minbcoptguardnonc1test0results
minbcoptguardnonc1test1results
minbcoptguardresults
minbcoptguardsmoothness
minbcoptimize
minbcrequesttermination
minbcrestartfrom
minbcresults
minbcresultsbuf
minbcsetbc
minbcsetcond
minbcsetprecdefault
minbcsetprecdiag
minbcsetprecscale
minbcsetscale
minbcsetstpmax
minbcsetxrep
minbc_d_1 Nonlinear optimization with box constraints
minbc_numdiff Nonlinear optimization with bound constraints and numerical differentiation
/************************************************************************* This structure stores optimization report: * iterationscount number of iterations * nfev number of gradient evaluations * terminationtype termination type (see below) TERMINATION CODES terminationtype field contains completion code, which can be: -8 internal integrity control detected infinite or NAN values in function/gradient. Abnormal termination signalled. -3 inconsistent constraints. 1 relative function improvement is no more than EpsF. 2 relative step is no more than EpsX. 4 gradient norm is no more than EpsG 5 MaxIts steps was taken 7 stopping conditions are too stringent, further improvement is impossible, X contains best point found so far. 8 terminated by user who called minbcrequesttermination(). X contains point which was "current accepted" when termination request was submitted. *************************************************************************/
class minbcreport { public: minbcreport(); minbcreport(const minbcreport &rhs); minbcreport& operator=(const minbcreport &rhs); virtual ~minbcreport(); ae_int_t iterationscount; ae_int_t nfev; ae_int_t varidx; ae_int_t terminationtype; };
/************************************************************************* This object stores nonlinear optimizer state. You should use functions provided by MinBC subpackage to work with this object *************************************************************************/
class minbcstate { public: minbcstate(); minbcstate(const minbcstate &rhs); minbcstate& operator=(const minbcstate &rhs); virtual ~minbcstate(); };
/*************************************************************************
                     BOX CONSTRAINED OPTIMIZATION
          WITH FAST ACTIVATION OF MULTIPLE BOX CONSTRAINTS

DESCRIPTION:
The subroutine minimizes function F(x) of N arguments subject to box
constraints (with some of box constraints actually being equality ones).

This optimizer uses algorithm similar to that of MinBLEIC (optimizer with
general linear constraints), but presence of box-only constraints allows
us to use faster constraint activation strategies. On large-scale
problems, with multiple constraints active at the solution, this
optimizer can be several times faster than BLEIC.

REQUIREMENTS:
* user must provide function value and gradient
* starting point X0 must be feasible or not too far away from the
  feasible set
* grad(f) must be Lipschitz continuous on a level set:
  L = { x : f(x)<=f(x0) }
* function must be defined everywhere on the feasible set F

USAGE:

Constrained optimization is far more complex than the unconstrained one.
Here we give very brief outline of the BC optimizer. We strongly
recommend you to read examples in the ALGLIB Reference Manual and to read
ALGLIB User Guide on optimization, which is available at
http://www.alglib.net/optimization/

1. User initializes algorithm state with MinBCCreate() call
2. User adds box constraints by calling MinBCSetBC() function.
3. User sets stopping conditions with MinBCSetCond().
4. User calls MinBCOptimize() function which takes algorithm state and
   pointer (delegate, etc.) to callback function which calculates F/G.
5. User calls MinBCResults() to get solution
6. Optionally user may call MinBCRestartFrom() to solve another problem
   with same N but another starting point. MinBCRestartFrom() allows to
   reuse already initialized structure.

INPUT PARAMETERS:
    N       -   problem dimension, N>0:
                * if given, only leading N elements of X are used
                * if not given, automatically determined from size of X
    X       -   starting point, array[N]:
                * it is better to set X to a feasible point
                * but X can be infeasible, in which case algorithm will
                  try to find feasible point first, using X as initial
                  approximation.

OUTPUT PARAMETERS:
    State   -   structure stores algorithm state

  -- ALGLIB --
     Copyright 28.11.2010 by Bochkanov Sergey
*************************************************************************/
void minbccreate(const ae_int_t n, const real_1d_array &x, minbcstate &state, const xparams _xparams = alglib::xdefault); void minbccreate(const real_1d_array &x, minbcstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* The subroutine is finite difference variant of MinBCCreate(). It uses finite differences in order to differentiate target function. Description below contains information which is specific to this function only. We recommend to read comments on MinBCCreate() in order to get more information about creation of BC optimizer. INPUT PARAMETERS: N - problem dimension, N>0: * if given, only leading N elements of X are used * if not given, automatically determined from size of X X - starting point, array[0..N-1]. DiffStep- differentiation step, >0 OUTPUT PARAMETERS: State - structure which stores algorithm state NOTES: 1. algorithm uses 4-point central formula for differentiation. 2. differentiation step along I-th axis is equal to DiffStep*S[I] where S[] is scaling vector which can be set by MinBCSetScale() call. 3. we recommend you to use moderate values of differentiation step. Too large step will result in too large truncation errors, while too small step will result in too large numerical errors. 1.0E-6 can be good value to start with. 4. Numerical differentiation is very inefficient - one gradient calculation needs 4*N function evaluations. This function will work for any N - either small (1...10), moderate (10...100) or large (100...). However, performance penalty will be too severe for any N's except for small ones. We should also say that code which relies on numerical differentiation is less robust and precise. CG needs exact gradient values. Imprecise gradient may slow down convergence, especially on highly nonlinear problems. Thus we recommend to use this function for fast prototyping on small- dimensional problems only, and to implement analytical gradient as soon as possible. -- ALGLIB -- Copyright 16.05.2011 by Bochkanov Sergey *************************************************************************/
void minbccreatef(const ae_int_t n, const real_1d_array &x, const double diffstep, minbcstate &state, const xparams _xparams = alglib::xdefault); void minbccreatef(const real_1d_array &x, const double diffstep, minbcstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  
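Note 1 above mentions a 4-point central formula. The sketch below shows one textbook 4-point stencil of this kind (an illustration only, not ALGLIB's internal implementation):

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// 4-point central difference approximation of f'(x) with step h;
// truncation error is O(h^4), which is why it is preferred over the
// simple 2-point formula despite needing 4 evaluations per axis.
double numdiff4(const std::function<double(double)>& f, double x, double h) {
    return (f(x - 2*h) - 8*f(x - h) + 8*f(x + h) - f(x + 2*h)) / (12*h);
}
```

For f(t)=t^3 at x=2 the formula recovers the exact derivative 12 up to rounding, since its truncation error involves the fifth derivative, which vanishes for cubics.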

/************************************************************************* This function provides reverse communication interface Reverse communication interface is not documented or recommended to use. See below for functions which provide better documented API *************************************************************************/
bool minbciteration(minbcstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function activates/deactivates verification of the user-supplied
analytic gradient.

Upon activation of this option OptGuard integrity checker performs
numerical differentiation of your target function at the initial point
(note: future versions may also perform check at the final point) and
compares numerical gradient with analytic one provided by you. If
difference is too large, an error flag is set and optimization session
continues. After optimization session is over, you can retrieve the
report which stores both gradients and specific components highlighted as
suspicious by the OptGuard.

The primary OptGuard report can be retrieved with minbcoptguardresults().

IMPORTANT: gradient check is a high-overhead option which will cost you
           about 3*N additional function evaluations. In many cases it
           may cost as much as the rest of the optimization session.

           YOU SHOULD NOT USE IT IN THE PRODUCTION CODE UNLESS YOU WANT
           TO CHECK DERIVATIVES PROVIDED BY SOME THIRD PARTY.

NOTE: unlike previous incarnation of the gradient checking code, OptGuard
      does NOT interrupt optimization even if it discovers bad gradient.

INPUT PARAMETERS:
    State   -   structure used to store algorithm state
    TestStep-   verification step used for numerical differentiation:
                * TestStep=0 turns verification off
                * TestStep>0 activates verification
                You should carefully choose TestStep. Value which is too
                large (so large that function behavior is non-cubic at
                this scale) will lead to false alarms. Too short step
                will result in rounding errors dominating numerical
                derivative.
                You may use different step for different parameters by
                means of setting scale with minbcsetscale().
=== EXPLANATION ==========================================================

In order to verify gradient algorithm performs following steps:
* two trial steps are made to X[i]-TestStep*S[i] and X[i]+TestStep*S[i],
  where X[i] is i-th component of the initial point and S[i] is a scale
  of i-th parameter
* F(X) is evaluated at these trial points
* we perform one more evaluation in the middle point of the interval
* we build cubic model using function values and derivatives at trial
  points and we compare its prediction with actual value in the middle
  point

  -- ALGLIB --
     Copyright 15.06.2014 by Bochkanov Sergey
*************************************************************************/
void minbcoptguardgradient(minbcstate &state, const double teststep, const xparams _xparams = alglib::xdefault);
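The kind of check OptGuard performs can be imitated with a simple per-component comparison of an analytic gradient against central differences. `max_grad_error` below is a hypothetical helper written for illustration; ALGLIB's internal checker builds a cubic model rather than a plain central difference, so this is only a sketch of the idea:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

// Compare an analytic gradient against central differences, component by
// component, and return the largest absolute discrepancy. Here teststep
// plays the role of TestStep above, and scale[] plays the role of the
// S[] vector set with minbcsetscale().
double max_grad_error(
        const std::function<double(const std::vector<double>&)>& f,
        const std::function<std::vector<double>(const std::vector<double>&)>& grad,
        std::vector<double> x,
        const std::vector<double>& scale,
        double teststep) {
    const std::vector<double> g = grad(x);
    double worst = 0.0;
    for (std::size_t i = 0; i < x.size(); i++) {
        const double h = teststep * scale[i];
        const double xi = x[i];
        x[i] = xi + h; const double fp = f(x);   // trial point to the right
        x[i] = xi - h; const double fm = f(x);   // trial point to the left
        x[i] = xi;                               // restore component
        worst = std::max(worst, std::fabs((fp - fm) / (2*h) - g[i]));
    }
    return worst;
}
```

A correct gradient yields a discrepancy close to rounding level; a coding error in one component shows up as a large discrepancy in that component.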
/************************************************************************* Detailed results of the OptGuard integrity check for nonsmoothness test #0 Nonsmoothness (non-C1) test #0 studies function values (not gradient!) obtained during line searches and monitors behavior of the directional derivative estimate. This test is less powerful than test #1, but it does not depend on the gradient values and thus it is more robust against artifacts introduced by numerical differentiation. Two reports are returned: * a "strongest" one, corresponding to line search which had highest value of the nonsmoothness indicator * a "longest" one, corresponding to line search which had more function evaluations, and thus is more detailed In both cases following fields are returned: * positive - is TRUE when test flagged suspicious point; FALSE if test did not notice anything (in the latter cases fields below are empty). * x0[], d[] - arrays of length N which store initial point and direction for line search (d[] can be normalized, but does not have to) * stp[], f[] - arrays of length CNT which store step lengths and function values at these points; f[i] is evaluated in x0+stp[i]*d. * stpidxa, stpidxb - we suspect that function violates C1 continuity between steps #stpidxa and #stpidxb (usually we have stpidxb=stpidxa+3, with most likely position of the violation between stpidxa+1 and stpidxa+2. ========================================================================== = SHORTLY SPEAKING: build a 2D plot of (stp,f) and look at it - you will = see where C1 continuity is violated. ========================================================================== INPUT PARAMETERS: state - algorithm state OUTPUT PARAMETERS: strrep - C1 test #0 "strong" report lngrep - C1 test #0 "long" report -- ALGLIB -- Copyright 21.11.2018 by Bochkanov Sergey *************************************************************************/
void minbcoptguardnonc1test0results(const minbcstate &state, optguardnonc1test0report &strrep, optguardnonc1test0report &lngrep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Detailed results of the OptGuard integrity check for nonsmoothness test #1 Nonsmoothness (non-C1) test #1 studies individual components of the gradient computed during line search. When precise analytic gradient is provided this test is more powerful than test #0 which works with function values and ignores user-provided gradient. However, test #0 becomes more powerful when numerical differentiation is employed (in such cases test #1 detects higher levels of numerical noise and becomes too conservative). This test also tells specific components of the gradient which violate C1 continuity, which makes it more informative than #0, which just tells that continuity is violated. Two reports are returned: * a "strongest" one, corresponding to line search which had highest value of the nonsmoothness indicator * a "longest" one, corresponding to line search which had more function evaluations, and thus is more detailed In both cases following fields are returned: * positive - is TRUE when test flagged suspicious point; FALSE if test did not notice anything (in the latter cases fields below are empty). * vidx - is an index of the variable in [0,N) with nonsmooth derivative * x0[], d[] - arrays of length N which store initial point and direction for line search (d[] can be normalized, but does not have to) * stp[], g[] - arrays of length CNT which store step lengths and gradient values at these points; g[i] is evaluated in x0+stp[i]*d and contains vidx-th component of the gradient. * stpidxa, stpidxb - we suspect that function violates C1 continuity between steps #stpidxa and #stpidxb (usually we have stpidxb=stpidxa+3, with most likely position of the violation between stpidxa+1 and stpidxa+2. ========================================================================== = SHORTLY SPEAKING: build a 2D plot of (stp,f) and look at it - you will = see where C1 continuity is violated. 
========================================================================== INPUT PARAMETERS: state - algorithm state OUTPUT PARAMETERS: strrep - C1 test #1 "strong" report lngrep - C1 test #1 "long" report -- ALGLIB -- Copyright 21.11.2018 by Bochkanov Sergey *************************************************************************/
void minbcoptguardnonc1test1results(minbcstate &state, optguardnonc1test1report &strrep, optguardnonc1test1report &lngrep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Results of OptGuard integrity check, should be called after optimization session is over. === PRIMARY REPORT ======================================================= OptGuard performs several checks which are intended to catch common errors in the implementation of nonlinear function/gradient: * incorrect analytic gradient * discontinuous (non-C0) target functions (constraints) * nonsmooth (non-C1) target functions (constraints) Each of these checks is activated with appropriate function: * minbcoptguardgradient() for gradient verification * minbcoptguardsmoothness() for C0/C1 checks Following flags are set when these errors are suspected: * rep.badgradsuspected, and additionally: * rep.badgradvidx for specific variable (gradient element) suspected * rep.badgradxbase, a point where gradient is tested * rep.badgraduser, user-provided gradient (stored as 2D matrix with single row in order to make report structure compatible with more complex optimizers like MinNLC or MinLM) * rep.badgradnum, reference gradient obtained via numerical differentiation (stored as 2D matrix with single row in order to make report structure compatible with more complex optimizers like MinNLC or MinLM) * rep.nonc0suspected * rep.nonc1suspected === ADDITIONAL REPORTS/LOGS ============================================== Several different tests are performed to catch C0/C1 errors, you can find out specific test signaled error by looking to: * rep.nonc0test0positive, for non-C0 test #0 * rep.nonc1test0positive, for non-C1 test #0 * rep.nonc1test1positive, for non-C1 test #1 Additional information (including line search logs) can be obtained by means of: * minbcoptguardnonc1test0results() * minbcoptguardnonc1test1results() which return detailed error reports, specific points where discontinuities were found, and so on. 
========================================================================== INPUT PARAMETERS: state - algorithm state OUTPUT PARAMETERS: rep - generic OptGuard report; more detailed reports can be retrieved with other functions. NOTE: false negatives (nonsmooth problems are not identified as nonsmooth ones) are possible although unlikely. The reason is that you need to make several evaluations around nonsmoothness in order to accumulate enough information about function curvature. Say, if you start right from the nonsmooth point, optimizer simply won't get enough data to understand what is going wrong before it terminates due to abrupt changes in the derivative. It is also possible that "unlucky" step will move us to the termination too quickly. Our current approach is to have less than 0.1% false negatives in our test examples (measured with multiple restarts from random points), and to have exactly 0% false positives. -- ALGLIB -- Copyright 21.11.2018 by Bochkanov Sergey *************************************************************************/
void minbcoptguardresults(minbcstate &state, optguardreport &rep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function activates/deactivates nonsmoothness monitoring option of
the OptGuard integrity checker. Smoothness monitor silently observes
solution process and tries to detect ill-posed problems, i.e. ones with:
a) discontinuous target function (non-C0)
b) nonsmooth target function (non-C1)

Smoothness monitoring does NOT interrupt optimization even if it suspects
that your problem is nonsmooth. It just sets corresponding flags in the
OptGuard report which can be retrieved after optimization is over.

Smoothness monitoring is a moderate overhead option which often adds less
than 1% to the optimizer running time. Thus, you can use it even for
large scale problems.

NOTE: OptGuard does NOT guarantee that it will always detect C0/C1
      continuity violations.

      First, minor errors are hard to catch - say, a 0.0001 difference in
      the model values at two sides of the gap may be due to
      discontinuity of the model - or simply because the model has
      changed.

      Second, C1-violations are especially difficult to detect in a
      noninvasive way. The optimizer usually performs very short steps
      near the nonsmoothness, and differentiation usually introduces a
      lot of numerical noise. It is hard to tell whether some tiny
      discontinuity in the slope is due to real nonsmoothness or just
      due to numerical noise alone.

      Our top priority was to avoid false positives, so in some rare
      cases minor errors may go unnoticed (however, in most cases they
      can be spotted with restart from different initial point).

INPUT PARAMETERS:
    state   -   algorithm state
    level   -   monitoring level:
                * 0 - monitoring is disabled
                * 1 - noninvasive low-overhead monitoring; function
                  values and/or gradients are recorded, but OptGuard does
                  not try to perform additional evaluations in order to
                  get more information about suspicious locations.
=== EXPLANATION ========================================================== One major source of headache during optimization is the possibility of coding errors in the target function/constraints (or their gradients). Such errors most often manifest themselves as discontinuity or nonsmoothness of the target/constraints. Another frequent situation is when you try to optimize something involving lots of min() and max() operations, i.e. nonsmooth target. Although not a coding error, it is nonsmoothness anyway - and smooth optimizers usually stop right after encountering nonsmoothness, well before reaching solution. OptGuard integrity checker helps you to catch such situations: it monitors function values/gradients being passed to the optimizer and tries to detect errors. Upon discovering a suspicious pair of points it raises the appropriate flag (and allows you to continue optimization). When optimization is done, you can study the OptGuard result. -- ALGLIB -- Copyright 21.11.2018 by Bochkanov Sergey *************************************************************************/
void minbcoptguardsmoothness(minbcstate &state, const ae_int_t level, const xparams _xparams = alglib::xdefault); void minbcoptguardsmoothness(minbcstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* This family of functions is used to launch iterations of the nonlinear optimizer. These functions accept the following parameters: state - algorithm state func - callback which calculates function (or merit function) value func at given point x grad - callback which calculates function (or merit function) value func and gradient grad at given point x rep - optional callback which is called after each iteration, can be NULL ptr - optional pointer which is passed to func/grad/hess/jac/rep, can be NULL NOTES: 1. This function has two different implementations: one which uses exact (analytical) user-supplied gradient, and one which uses function value only and numerically differentiates function in order to obtain gradient. Depending on the specific function used to create optimizer object (either MinBCCreate() for analytical gradient or MinBCCreateF() for numerical differentiation) you should choose appropriate variant of MinBCOptimize() - one which accepts function AND gradient or one which accepts function ONLY. Be careful to choose variant of MinBCOptimize() which corresponds to your optimization scheme! The table below lists different combinations of callback (function/gradient) passed to MinBCOptimize() and specific function used to create optimizer.

                     |        USER PASSED TO MinBCOptimize()
    CREATED WITH     |  function only   |  function and gradient
    ------------------------------------------------------------
    MinBCCreateF()   |      works       |         FAILS
    MinBCCreate()    |      FAILS       |         works

Here "FAILS" denotes inappropriate combinations of optimizer creation function and MinBCOptimize() version. Attempts to use such a combination (for example, to create optimizer with MinBCCreateF() and to pass gradient information to MinBCOptimize()) will lead to an exception being thrown. Either you did not pass gradient when it WAS needed or you passed gradient when it was NOT needed.
-- ALGLIB -- Copyright 28.11.2010 by Bochkanov Sergey *************************************************************************/
void minbcoptimize(minbcstate &state, void (*func)(const real_1d_array &x, double &func, void *ptr), void (*rep)(const real_1d_array &x, double func, void *ptr) = NULL, void *ptr = NULL, const xparams _xparams = alglib::xdefault); void minbcoptimize(minbcstate &state, void (*grad)(const real_1d_array &x, double &func, real_1d_array &grad, void *ptr), void (*rep)(const real_1d_array &x, double func, void *ptr) = NULL, void *ptr = NULL, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* This subroutine submits a request for termination of the running optimizer. It should be called from a user-supplied callback when the user decides that it is time to "smoothly" terminate the optimization process. As a result, the optimizer stops at the point which was "current accepted" when the termination request was submitted and returns error code 8 (successful termination). INPUT PARAMETERS: State - optimizer structure NOTE: after the request for termination the optimizer may perform several additional calls to user-supplied callbacks. It does NOT guarantee to stop immediately - it just guarantees that these additional calls will be discarded later. NOTE: calling this function on an optimizer which is NOT running will have no effect. NOTE: multiple calls to this function are possible. The first call is counted, subsequent calls are silently ignored. -- ALGLIB -- Copyright 08.10.2014 by Bochkanov Sergey *************************************************************************/
void minbcrequesttermination(minbcstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine restarts the algorithm from a new point. All optimization parameters (including constraints) are left unchanged. This function allows solving multiple optimization problems (which must have the same number of dimensions) without the object reallocation penalty. INPUT PARAMETERS: State - structure previously allocated with MinBCCreate call. X - new starting point. -- ALGLIB -- Copyright 28.11.2010 by Bochkanov Sergey *************************************************************************/
void minbcrestartfrom(minbcstate &state, const real_1d_array &x, const xparams _xparams = alglib::xdefault);
/************************************************************************* BC results INPUT PARAMETERS: State - algorithm state OUTPUT PARAMETERS: X - array[0..N-1], solution Rep - optimization report. You should check Rep.TerminationType in order to distinguish successful termination from unsuccessful one: * -8 internal integrity control detected infinite or NAN values in function/gradient. Abnormal termination signalled. * -3 inconsistent constraints. * 1 relative function improvement is no more than EpsF. * 2 scaled step is no more than EpsX. * 4 scaled gradient norm is no more than EpsG. * 5 MaxIts steps were taken * 8 terminated by user who called minbcrequesttermination(). X contains the point which was "current accepted" when the termination request was submitted. More information about the fields of this structure can be found in the comments on the MinBCReport datatype. -- ALGLIB -- Copyright 28.11.2010 by Bochkanov Sergey *************************************************************************/
void minbcresults(const minbcstate &state, real_1d_array &x, minbcreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* BC results Buffered implementation of MinBCResults() which uses pre-allocated buffer to store X[]. If buffer size is too small, it resizes buffer. It is intended to be used in the inner cycles of performance critical algorithms where array reallocation penalty is too large to be ignored. -- ALGLIB -- Copyright 28.11.2010 by Bochkanov Sergey *************************************************************************/
void minbcresultsbuf(const minbcstate &state, real_1d_array &x, minbcreport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets boundary constraints for BC optimizer. Boundary constraints are inactive by default (after initial creation). They are preserved after algorithm restart with MinBCRestartFrom(). INPUT PARAMETERS: State - structure stores algorithm state BndL - lower bounds, array[N]. If some (all) variables are unbounded, you may specify very small number or -INF. BndU - upper bounds, array[N]. If some (all) variables are unbounded, you may specify very large number or +INF. NOTE 1: it is possible to specify BndL[i]=BndU[i]. In this case I-th variable will be "frozen" at X[i]=BndL[i]=BndU[i]. NOTE 2: this solver has following useful properties: * bound constraints are always satisfied exactly * function is evaluated only INSIDE area specified by bound constraints, even when numerical differentiation is used (algorithm adjusts nodes according to boundary constraints) -- ALGLIB -- Copyright 28.11.2010 by Bochkanov Sergey *************************************************************************/
void minbcsetbc(minbcstate &state, const real_1d_array &bndl, const real_1d_array &bndu, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* This function sets stopping conditions for the optimizer. INPUT PARAMETERS: State - structure which stores algorithm state EpsG - >=0 The subroutine finishes its work if the condition |v|<EpsG is satisfied, where: * |.| means Euclidean norm * v - scaled gradient vector, v[i]=g[i]*s[i] * g - gradient * s - scaling coefficients set by MinBCSetScale() EpsF - >=0 The subroutine finishes its work if on k+1-th iteration the condition |F(k+1)-F(k)|<=EpsF*max{|F(k)|,|F(k+1)|,1} is satisfied. EpsX - >=0 The subroutine finishes its work if on k+1-th iteration the condition |v|<=EpsX is fulfilled, where: * |.| means Euclidean norm * v - scaled step vector, v[i]=dx[i]/s[i] * dx - step vector, dx=X(k+1)-X(k) * s - scaling coefficients set by MinBCSetScale() MaxIts - maximum number of iterations. If MaxIts=0, the number of iterations is unlimited. Passing EpsG=0, EpsF=0 and EpsX=0 and MaxIts=0 (simultaneously) will lead to automatic stopping criterion selection. NOTE: when SetCond() is called with non-zero MaxIts, the BC solver may perform slightly more than MaxIts iterations. I.e., MaxIts sets a non-strict limit on the iteration count. -- ALGLIB -- Copyright 28.11.2010 by Bochkanov Sergey *************************************************************************/
void minbcsetcond(minbcstate &state, const double epsg, const double epsf, const double epsx, const ae_int_t maxits, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* Modification of the preconditioner: preconditioning is turned off. INPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 13.10.2010 by Bochkanov Sergey *************************************************************************/
void minbcsetprecdefault(minbcstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* Modification of the preconditioner: diagonal of approximate Hessian is used. INPUT PARAMETERS: State - structure which stores algorithm state D - diagonal of the approximate Hessian, array[0..N-1], (if larger, only leading N elements are used). NOTE 1: D[i] should be positive. Exception will be thrown otherwise. NOTE 2: you should pass diagonal of approximate Hessian - NOT ITS INVERSE. -- ALGLIB -- Copyright 13.10.2010 by Bochkanov Sergey *************************************************************************/
void minbcsetprecdiag(minbcstate &state, const real_1d_array &d, const xparams _xparams = alglib::xdefault);
/************************************************************************* Modification of the preconditioner: scale-based diagonal preconditioning. This preconditioning mode can be useful when you don't have approximate diagonal of Hessian, but you know that your variables are badly scaled (for example, one variable is in [1,10], and another in [1000,100000]), and most part of the ill-conditioning comes from different scales of vars. In this case simple scale-based preconditioner, with H[i] = 1/(s[i]^2), can greatly improve convergence. IMPORTANT: you should set scale of your variables with MinBCSetScale() call (before or after MinBCSetPrecScale() call). Without knowledge of the scale of your variables scale-based preconditioner will be just unit matrix. INPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 13.10.2010 by Bochkanov Sergey *************************************************************************/
void minbcsetprecscale(minbcstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets scaling coefficients for BC optimizer. ALGLIB optimizers use scaling matrices to test stopping conditions (step size and gradient are scaled before comparison with tolerances). Scale of the I-th variable is a translation invariant measure of: a) "how large" the variable is b) how large the step should be to make significant changes in the function Scaling is also used by finite difference variant of the optimizer - step along I-th axis is equal to DiffStep*S[I]. In most optimizers (and in the BC too) scaling is NOT a form of preconditioning. It just affects stopping conditions. You should set preconditioner by separate call to one of the MinBCSetPrec...() functions. There is a special preconditioning mode, however, which uses scaling coefficients to form diagonal preconditioning matrix. You can turn this mode on, if you want. But you should understand that scaling is not the same thing as preconditioning - these are two different, although related forms of tuning solver. INPUT PARAMETERS: State - structure stores algorithm state S - array[N], non-zero scaling coefficients S[i] may be negative, sign doesn't matter. -- ALGLIB -- Copyright 14.01.2011 by Bochkanov Sergey *************************************************************************/
void minbcsetscale(minbcstate &state, const real_1d_array &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets maximum step length INPUT PARAMETERS: State - structure which stores algorithm state StpMax - maximum step length, >=0. Set StpMax to 0.0, if you don't want to limit step length. Use this subroutine when you optimize target function which contains exp() or other fast growing functions, and optimization algorithm makes too large steps which lead to overflow. This function allows us to reject steps that are too large (and therefore expose us to the possible overflow) without actually calculating function value at the x+stp*d. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
void minbcsetstpmax(minbcstate &state, const double stpmax, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function turns on/off reporting. INPUT PARAMETERS: State - structure which stores algorithm state NeedXRep- whether iteration reports are needed or not If NeedXRep is True, algorithm will call rep() callback function if it is provided to MinBCOptimize(). -- ALGLIB -- Copyright 28.11.2010 by Bochkanov Sergey *************************************************************************/
void minbcsetxrep(minbcstate &state, const bool needxrep, const xparams _xparams = alglib::xdefault);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void function1_grad(const real_1d_array &x, double &func, real_1d_array &grad, void *ptr) 
{
    //
    // this callback calculates f(x0,x1) = 100*(x0+3)^4 + (x1-3)^4
    // and its derivatives df/dx0 and df/dx1
    //
    func = 100*pow(x[0]+3,4) + pow(x[1]-3,4);
    grad[0] = 400*pow(x[0]+3,3);
    grad[1] = 4*pow(x[1]-3,3);
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of
        //
        //     f(x,y) = 100*(x+3)^4+(y-3)^4
        //
        // subject to box constraints
        //
        //     -1<=x<=+1, -1<=y<=+1
        //
        // using MinBC optimizer with:
        // * initial point x=[0,0]
        // * unit scale being set for all variables (see minbcsetscale for more info)
        // * stopping criteria set to "terminate after short enough step"
        // * OptGuard integrity check being used to check problem statement
        //   for some common errors like nonsmoothness or bad analytic gradient
        //
        // First, we create optimizer object and tune its properties:
        // * set box constraints
        // * set variable scales
        // * set stopping criteria
        //
        real_1d_array x = "[0,0]";
        real_1d_array s = "[1,1]";
        real_1d_array bndl = "[-1,-1]";
        real_1d_array bndu = "[+1,+1]";
        minbcstate state;
        double epsg = 0;
        double epsf = 0;
        double epsx = 0.000001;
        ae_int_t maxits = 0;
        minbccreate(x, state);
        minbcsetbc(state, bndl, bndu);
        minbcsetscale(state, s);
        minbcsetcond(state, epsg, epsf, epsx, maxits);

        //
        // Then we activate OptGuard integrity checking.
        //
        // OptGuard monitor helps to catch common coding and problem statement
        // issues, like:
        // * discontinuity of the target function (C0 continuity violation)
        // * nonsmoothness of the target function (C1 continuity violation)
        // * erroneous analytic gradient, i.e. one inconsistent with actual
        //   change in the target/constraints
        //
        // OptGuard is essential for early prototyping stages because such
        // problems often result in premature termination of the optimizer
        // which is really hard to distinguish from the correct termination.
        //
        // IMPORTANT: GRADIENT VERIFICATION IS PERFORMED BY MEANS OF NUMERICAL
        //            DIFFERENTIATION. DO NOT USE IT IN PRODUCTION CODE!!!!!!!
        //
        //            Other OptGuard checks add moderate overhead, but anyway
        //            it is better to turn them off when they are not needed.
        //
        minbcoptguardsmoothness(state);
        minbcoptguardgradient(state, 0.001);

        //
        // Optimize and evaluate results
        //
        minbcreport rep;
        alglib::minbcoptimize(state, function1_grad);
        minbcresults(state, x, rep);
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [-1,1]

        //
        // Check that OptGuard did not report errors
        //
        // NOTE: want to test OptGuard? Try breaking the gradient - say, add
        //       1.0 to some of its components.
        //
        optguardreport ogrep;
        minbcoptguardresults(state, ogrep);
        printf("%s\n", ogrep.badgradsuspected ? "true" : "false"); // EXPECTED: false
        printf("%s\n", ogrep.nonc0suspected ? "true" : "false"); // EXPECTED: false
        printf("%s\n", ogrep.nonc1suspected ? "true" : "false"); // EXPECTED: false
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void function1_func(const real_1d_array &x, double &func, void *ptr)
{
    //
    // this callback calculates f(x0,x1) = 100*(x0+3)^4 + (x1-3)^4
    //
    func = 100*pow(x[0]+3,4) + pow(x[1]-3,4);
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of
        //
        //     f(x,y) = 100*(x+3)^4+(y-3)^4
        //
        // subject to box constraints
        //
        //    -1<=x<=+1, -1<=y<=+1
        //
        // using MinBC optimizer with:
        // * numerical differentiation being used
        // * initial point x=[0,0]
        // * unit scale being set for all variables (see minbcsetscale for more info)
        // * stopping criteria set to "terminate after short enough step"
        // * OptGuard integrity check being used to check problem statement
        //   for some common errors like nonsmoothness or bad analytic gradient
        //
        real_1d_array x = "[0,0]";
        real_1d_array s = "[1,1]";
        real_1d_array bndl = "[-1,-1]";
        real_1d_array bndu = "[+1,+1]";
        minbcstate state;
        double epsg = 0;
        double epsf = 0;
        double epsx = 0.000001;
        ae_int_t maxits = 0;
        double diffstep = 1.0e-6;

        //
        // Now we are ready to actually optimize something:
        // * first we create optimizer
        // * we add boundary constraints
        // * we tune stopping conditions
        // * and, finally, optimize and obtain results...
        //
        minbccreatef(x, diffstep, state);
        minbcsetbc(state, bndl, bndu);
        minbcsetscale(state, s);
        minbcsetcond(state, epsg, epsf, epsx, maxits);

        //
        // Then we activate OptGuard integrity checking.
        //
        // Numerical differentiation always produces "correct" gradient
        // (with some truncation error, but unbiased). Thus, we just have
        // to check smoothness properties of the target: C0 and C1 continuity.
        //
        // Sometimes user accidentally tries to solve nonsmooth problems
        // with smooth optimizer. OptGuard helps to detect such situations
        // early, at the prototyping stage.
        //
        minbcoptguardsmoothness(state);

        //
        // Optimize and evaluate results
        //
        minbcreport rep;
        alglib::minbcoptimize(state, function1_func);
        minbcresults(state, x, rep);
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [-1,1]

        //
        // Check that OptGuard did not report errors
        //
        // Want to challenge OptGuard? Try to make your problem
        // nonsmooth by replacing 100*(x+3)^4 by 100*|x+3| and
        // re-run optimizer.
        //
        optguardreport ogrep;
        minbcoptguardresults(state, ogrep);
        printf("%s\n", ogrep.nonc0suspected ? "true" : "false"); // EXPECTED: false
        printf("%s\n", ogrep.nonc1suspected ? "true" : "false"); // EXPECTED: false
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

minbleicreport
minbleicstate
minbleiccreate
minbleiccreatef
minbleiciteration
minbleicoptguardgradient
minbleicoptguardnonc1test0results
minbleicoptguardnonc1test1results
minbleicoptguardresults
minbleicoptguardsmoothness
minbleicoptimize
minbleicrequesttermination
minbleicrestartfrom
minbleicresults
minbleicresultsbuf
minbleicsetbc
minbleicsetcond
minbleicsetlc
minbleicsetprecdefault
minbleicsetprecdiag
minbleicsetprecscale
minbleicsetscale
minbleicsetstpmax
minbleicsetxrep
minbleic_d_1 Nonlinear optimization with bound constraints
minbleic_d_2 Nonlinear optimization with linear inequality constraints
minbleic_numdiff Nonlinear optimization with bound constraints and numerical differentiation
/************************************************************************* This structure stores optimization report: * IterationsCount number of iterations * NFEV number of gradient evaluations * TerminationType termination type (see below) TERMINATION CODES TerminationType field contains completion code, which can be: -8 internal integrity control detected infinite or NAN values in function/gradient. Abnormal termination signalled. -3 inconsistent constraints. Feasible point is either nonexistent or too hard to find. Try to restart optimizer with better initial approximation 1 relative function improvement is no more than EpsF. 2 relative step is no more than EpsX. 4 gradient norm is no more than EpsG 5 MaxIts steps were taken 7 stopping conditions are too stringent, further improvement is impossible, X contains best point found so far. 8 terminated by user who called minbleicrequesttermination(). X contains point which was "current accepted" when termination request was submitted. ADDITIONAL FIELDS There are additional fields which can be used for debugging: * DebugEqErr error in the equality constraints (2-norm) * DebugFS f, calculated at projection of initial point to the feasible set * DebugFF f, calculated at the final point * DebugDX |X_start-X_final| *************************************************************************/
class minbleicreport { public: minbleicreport(); minbleicreport(const minbleicreport &rhs); minbleicreport& operator=(const minbleicreport &rhs); virtual ~minbleicreport(); ae_int_t iterationscount; ae_int_t nfev; ae_int_t varidx; ae_int_t terminationtype; double debugeqerr; double debugfs; double debugff; double debugdx; ae_int_t debugfeasqpits; ae_int_t debugfeasgpaits; ae_int_t inneriterationscount; ae_int_t outeriterationscount; };
/************************************************************************* This object stores nonlinear optimizer state. You should use functions provided by MinBLEIC subpackage to work with this object *************************************************************************/
class minbleicstate { public: minbleicstate(); minbleicstate(const minbleicstate &rhs); minbleicstate& operator=(const minbleicstate &rhs); virtual ~minbleicstate(); };
/************************************************************************* BOUND CONSTRAINED OPTIMIZATION WITH ADDITIONAL LINEAR EQUALITY AND INEQUALITY CONSTRAINTS DESCRIPTION: The subroutine minimizes function F(x) of N arguments subject to any combination of: * bound constraints * linear inequality constraints * linear equality constraints REQUIREMENTS: * user must provide function value and gradient * starting point X0 must be feasible or not too far away from the feasible set * grad(f) must be Lipschitz continuous on a level set: L = { x : f(x)<=f(x0) } * function must be defined everywhere on the feasible set F USAGE: Constrained optimization is far more complex than the unconstrained one. Here we give a very brief outline of the BLEIC optimizer. We strongly recommend reading the examples in the ALGLIB Reference Manual and the ALGLIB User Guide on optimization, which is available at http://www.alglib.net/optimization/ 1. User initializes algorithm state with MinBLEICCreate() call 2. User adds boundary and/or linear constraints by calling MinBLEICSetBC() and MinBLEICSetLC() functions. 3. User sets stopping conditions with MinBLEICSetCond(). 4. User calls MinBLEICOptimize() function which takes algorithm state and pointer (delegate, etc.) to callback function which calculates F/G. 5. User calls MinBLEICResults() to get solution 6. Optionally user may call MinBLEICRestartFrom() to solve another problem with same N but another starting point. MinBLEICRestartFrom() allows reusing an already initialized structure. NOTE: if you have box-only constraints (no general linear constraints), then the MinBC optimizer can be a better option. It uses a special, faster constraint activation method, which performs better on problems with multiple constraints active at the solution. On small-scale problems performance of MinBC is similar to that of MinBLEIC, but on large-scale ones (hundreds and thousands of active constraints) it can be several times faster than MinBLEIC.
INPUT PARAMETERS: N - problem dimension, N>0: * if given, only leading N elements of X are used * if not given, automatically determined from the size of X X - starting point, array[N]: * it is better to set X to a feasible point * but X can be infeasible, in which case algorithm will try to find feasible point first, using X as initial approximation. OUTPUT PARAMETERS: State - structure stores algorithm state -- ALGLIB -- Copyright 28.11.2010 by Bochkanov Sergey *************************************************************************/
void minbleiccreate(const ae_int_t n, const real_1d_array &x, minbleicstate &state, const xparams _xparams = alglib::xdefault); void minbleiccreate(const real_1d_array &x, minbleicstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* The subroutine is finite difference variant of MinBLEICCreate(). It uses finite differences in order to differentiate target function. Description below contains information which is specific to this function only. We recommend reading the comments on MinBLEICCreate() in order to get more information about creation of BLEIC optimizer. INPUT PARAMETERS: N - problem dimension, N>0: * if given, only leading N elements of X are used * if not given, automatically determined from size of X X - starting point, array[0..N-1]. DiffStep- differentiation step, >0 OUTPUT PARAMETERS: State - structure which stores algorithm state NOTES: 1. algorithm uses 4-point central formula for differentiation. 2. differentiation step along I-th axis is equal to DiffStep*S[I] where S[] is scaling vector which can be set by MinBLEICSetScale() call. 3. we recommend using moderate values of differentiation step. Too large step will result in too large truncation errors, while too small step will result in too large numerical errors. 1.0E-6 can be a good value to start with. 4. Numerical differentiation is very inefficient - one gradient calculation needs 4*N function evaluations. This function will work for any N - either small (1...10), moderate (10...100) or large (100...). However, performance penalty will be too severe for any N's except for small ones. We should also say that code which relies on numerical differentiation is less robust and precise. CG needs exact gradient values. Imprecise gradient may slow down convergence, especially on highly nonlinear problems. Thus we recommend using this function for fast prototyping on small- dimensional problems only, and implementing analytical gradient as soon as possible. -- ALGLIB -- Copyright 16.05.2011 by Bochkanov Sergey *************************************************************************/
void minbleiccreatef(const ae_int_t n, const real_1d_array &x, const double diffstep, minbleicstate &state, const xparams _xparams = alglib::xdefault); void minbleiccreatef(const real_1d_array &x, const double diffstep, minbleicstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This function provides reverse communication interface Reverse communication interface is not documented or recommended to use. See below for functions which provide better documented API *************************************************************************/
bool minbleiciteration(minbleicstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function activates/deactivates verification of the user-supplied
analytic gradient.

Upon activation of this option OptGuard integrity checker performs
numerical differentiation of your target function at the initial point
(note: future versions may also perform check at the final point) and
compares numerical gradient with analytic one provided by you. If
difference is too large, an error flag is set and optimization session
continues. After optimization session is over, you can retrieve the
report which stores both gradients and specific components highlighted
as suspicious by the OptGuard.

The primary OptGuard report can be retrieved with
minbleicoptguardresults().

IMPORTANT: gradient check is a high-overhead option which will cost you
           about 3*N additional function evaluations. In many cases it
           may cost as much as the rest of the optimization session.

           YOU SHOULD NOT USE IT IN THE PRODUCTION CODE UNLESS YOU WANT
           TO CHECK DERIVATIVES PROVIDED BY SOME THIRD PARTY.

NOTE: unlike previous incarnation of the gradient checking code, OptGuard
      does NOT interrupt optimization even if it discovers bad gradient.

INPUT PARAMETERS:
    State    -  structure used to store algorithm state
    TestStep -  verification step used for numerical differentiation:
                * TestStep=0 turns verification off
                * TestStep>0 activates verification
                You should carefully choose TestStep. Value which is too
                large (so large that function behavior is non-cubic at
                this scale) will lead to false alarms. Too short step
                will result in rounding errors dominating numerical
                derivative.
                You may use different step for different parameters by
                means of setting scale with minbleicsetscale().

=== EXPLANATION ==========================================================

In order to verify the gradient, the algorithm performs the following
steps:
  * two trial steps are made to X[i]-TestStep*S[i] and X[i]+TestStep*S[i],
    where X[i] is the i-th component of the initial point and S[i] is the
    scale of the i-th parameter
  * F(X) is evaluated at these trial points
  * we perform one more evaluation in the middle point of the interval
  * we build a cubic model using function values and derivatives at trial
    points and we compare its prediction with the actual value in the
    middle point

  -- ALGLIB --
     Copyright 15.06.2014 by Bochkanov Sergey
*************************************************************************/
void minbleicoptguardgradient(minbleicstate &state, const double teststep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Detailed results of the OptGuard integrity check for nonsmoothness test #0

Nonsmoothness (non-C1) test #0 studies function values (not gradient!)
obtained during line searches and monitors behavior of the directional
derivative estimate. This test is less powerful than test #1, but it does
not depend on the gradient values and thus it is more robust against
artifacts introduced by numerical differentiation.

Two reports are returned:
* a "strongest" one, corresponding to line search which had highest value
  of the nonsmoothness indicator
* a "longest" one, corresponding to line search which had more function
  evaluations, and thus is more detailed

In both cases following fields are returned:
* positive - is TRUE when test flagged suspicious point; FALSE if test
  did not notice anything (in the latter case fields below are empty).
* x0[], d[] - arrays of length N which store initial point and direction
  for line search (d[] can be normalized, but does not have to)
* stp[], f[] - arrays of length CNT which store step lengths and function
  values at these points; f[i] is evaluated in x0+stp[i]*d.
* stpidxa, stpidxb - we suspect that function violates C1 continuity
  between steps #stpidxa and #stpidxb (usually we have stpidxb=stpidxa+3,
  with the most likely position of the violation between stpidxa+1 and
  stpidxa+2).

==========================================================================
= SHORTLY SPEAKING: build a 2D plot of (stp,f) and look at it - you will
=                   see where C1 continuity is violated.
==========================================================================

INPUT PARAMETERS:
    state   -   algorithm state

OUTPUT PARAMETERS:
    strrep  -   C1 test #0 "strong" report
    lngrep  -   C1 test #0 "long" report

  -- ALGLIB --
     Copyright 21.11.2018 by Bochkanov Sergey
*************************************************************************/
void minbleicoptguardnonc1test0results(const minbleicstate &state, optguardnonc1test0report &strrep, optguardnonc1test0report &lngrep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Detailed results of the OptGuard integrity check for nonsmoothness test #1

Nonsmoothness (non-C1) test #1 studies individual components of the
gradient computed during line search.

When precise analytic gradient is provided this test is more powerful
than test #0 which works with function values and ignores user-provided
gradient. However, test #0 becomes more powerful when numerical
differentiation is employed (in such cases test #1 detects higher levels
of numerical noise and becomes too conservative).

This test also tells specific components of the gradient which violate C1
continuity, which makes it more informative than #0, which just tells
that continuity is violated.

Two reports are returned:
* a "strongest" one, corresponding to line search which had highest value
  of the nonsmoothness indicator
* a "longest" one, corresponding to line search which had more function
  evaluations, and thus is more detailed

In both cases following fields are returned:
* positive - is TRUE when test flagged suspicious point; FALSE if test
  did not notice anything (in the latter case fields below are empty).
* vidx - is an index of the variable in [0,N) with nonsmooth derivative
* x0[], d[] - arrays of length N which store initial point and direction
  for line search (d[] can be normalized, but does not have to)
* stp[], g[] - arrays of length CNT which store step lengths and gradient
  values at these points; g[i] is evaluated in x0+stp[i]*d and contains
  vidx-th component of the gradient.
* stpidxa, stpidxb - we suspect that function violates C1 continuity
  between steps #stpidxa and #stpidxb (usually we have stpidxb=stpidxa+3,
  with the most likely position of the violation between stpidxa+1 and
  stpidxa+2).

==========================================================================
= SHORTLY SPEAKING: build a 2D plot of (stp,g) and look at it - you will
=                   see where C1 continuity is violated.
==========================================================================

INPUT PARAMETERS:
    state   -   algorithm state

OUTPUT PARAMETERS:
    strrep  -   C1 test #1 "strong" report
    lngrep  -   C1 test #1 "long" report

  -- ALGLIB --
     Copyright 21.11.2018 by Bochkanov Sergey
*************************************************************************/
void minbleicoptguardnonc1test1results(minbleicstate &state, optguardnonc1test1report &strrep, optguardnonc1test1report &lngrep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Results of OptGuard integrity check, should be called after optimization
session is over.

=== PRIMARY REPORT =======================================================

OptGuard performs several checks which are intended to catch common errors
in the implementation of nonlinear function/gradient:
* incorrect analytic gradient
* discontinuous (non-C0) target functions (constraints)
* nonsmooth (non-C1) target functions (constraints)

Each of these checks is activated with appropriate function:
* minbleicoptguardgradient() for gradient verification
* minbleicoptguardsmoothness() for C0/C1 checks

Following flags are set when these errors are suspected:
* rep.badgradsuspected, and additionally:
  * rep.badgradvidx for specific variable (gradient element) suspected
  * rep.badgradxbase, a point where gradient is tested
  * rep.badgraduser, user-provided gradient (stored as 2D matrix with
    single row in order to make report structure compatible with more
    complex optimizers like MinNLC or MinLM)
  * rep.badgradnum, reference gradient obtained via numerical
    differentiation (stored as 2D matrix with single row in order to
    make report structure compatible with more complex optimizers like
    MinNLC or MinLM)
* rep.nonc0suspected
* rep.nonc1suspected

=== ADDITIONAL REPORTS/LOGS ==============================================

Several different tests are performed to catch C0/C1 errors, you can find
out which specific test signaled the error by looking at:
* rep.nonc0test0positive, for non-C0 test #0
* rep.nonc1test0positive, for non-C1 test #0
* rep.nonc1test1positive, for non-C1 test #1

Additional information (including line search logs) can be obtained by
means of:
* minbleicoptguardnonc1test0results()
* minbleicoptguardnonc1test1results()
which return detailed error reports, specific points where
discontinuities were found, and so on.

==========================================================================

INPUT PARAMETERS:
    state   -   algorithm state

OUTPUT PARAMETERS:
    rep     -   generic OptGuard report; more detailed reports can be
                retrieved with other functions.

NOTE: false negatives (nonsmooth problems are not identified as nonsmooth
      ones) are possible although unlikely. The reason is that you need
      to make several evaluations around nonsmoothness in order to
      accumulate enough information about function curvature. Say, if you
      start right from the nonsmooth point, the optimizer simply won't
      get enough data to understand what is going wrong before it
      terminates due to abrupt changes in the derivative. It is also
      possible that an "unlucky" step will move us to the termination too
      quickly.

      Our current approach is to have less than 0.1% false negatives in
      our test examples (measured with multiple restarts from random
      points), and to have exactly 0% false positives.

  -- ALGLIB --
     Copyright 21.11.2018 by Bochkanov Sergey
*************************************************************************/
void minbleicoptguardresults(minbleicstate &state, optguardreport &rep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function activates/deactivates nonsmoothness monitoring option of
the OptGuard integrity checker. Smoothness monitor silently observes
solution process and tries to detect ill-posed problems, i.e. ones with:
a) discontinuous target function (non-C0)
b) nonsmooth target function (non-C1)

Smoothness monitoring does NOT interrupt optimization even if it suspects
that your problem is nonsmooth. It just sets corresponding flags in the
OptGuard report which can be retrieved after optimization is over.

Smoothness monitoring is a moderate overhead option which often adds less
than 1% to the optimizer running time. Thus, you can use it even for
large scale problems.

NOTE: OptGuard does NOT guarantee that it will always detect C0/C1
      continuity violations.

      First, minor errors are hard to catch - say, a 0.0001 difference in
      the model values at two sides of the gap may be due to
      discontinuity of the model - or simply because the model has
      changed.

      Second, C1-violations are especially difficult to detect in a
      noninvasive way. The optimizer usually performs very short steps
      near the nonsmoothness, and differentiation usually introduces a
      lot of numerical noise. It is hard to tell whether some tiny
      discontinuity in the slope is due to real nonsmoothness or just due
      to numerical noise alone.

      Our top priority was to avoid false positives, so in some rare
      cases minor errors may go unnoticed (however, in most cases they
      can be spotted with a restart from a different initial point).

INPUT PARAMETERS:
    state   -   algorithm state
    level   -   monitoring level:
                * 0 - monitoring is disabled
                * 1 - noninvasive low-overhead monitoring; function
                      values and/or gradients are recorded, but OptGuard
                      does not try to perform additional evaluations in
                      order to get more information about suspicious
                      locations.

=== EXPLANATION ==========================================================

One major source of headache during optimization is the possibility of
coding errors in the target function/constraints (or their gradients).
Such errors most often manifest themselves as discontinuity or
nonsmoothness of the target/constraints.

Another frequent situation is when you try to optimize something
involving lots of min() and max() operations, i.e. a nonsmooth target.
Although not a coding error, it is nonsmoothness anyway - and smooth
optimizers usually stop right after encountering nonsmoothness, well
before reaching the solution.

OptGuard integrity checker helps you to catch such situations: it
monitors function values/gradients being passed to the optimizer and
tries to detect errors. Upon discovering a suspicious pair of points it
raises the appropriate flag (and allows you to continue optimization).
When optimization is done, you can study the OptGuard result.

  -- ALGLIB --
     Copyright 21.11.2018 by Bochkanov Sergey
*************************************************************************/
void minbleicoptguardsmoothness(minbleicstate &state, const ae_int_t level, const xparams _xparams = alglib::xdefault);
void minbleicoptguardsmoothness(minbleicstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This family of functions is used to launch iterations of nonlinear
optimizer

These functions accept following parameters:
    state   -   algorithm state
    func    -   callback which calculates function (or merit function)
                value func at given point x
    grad    -   callback which calculates function (or merit function)
                value func and gradient grad at given point x
    rep     -   optional callback which is called after each iteration
                can be NULL
    ptr     -   optional pointer which is passed to func/grad/hess/jac/rep
                can be NULL

NOTES:

1. This function has two different implementations: one which uses exact
   (analytical) user-supplied gradient, and one which uses function value
   only and numerically differentiates function in order to obtain
   gradient.

   Depending on the specific function used to create optimizer object
   (either MinBLEICCreate() for analytical gradient or MinBLEICCreateF()
   for numerical differentiation) you should choose appropriate variant
   of MinBLEICOptimize() - one which accepts function AND gradient or one
   which accepts function ONLY.

   Be careful to choose variant of MinBLEICOptimize() which corresponds
   to your optimization scheme! Table below lists different combinations
   of callback (function/gradient) passed to MinBLEICOptimize() and
   specific function used to create optimizer.

                     |         USER PASSED TO MinBLEICOptimize()
   CREATED WITH      |  function only   |  function and gradient
   ------------------------------------------------------------
   MinBLEICCreateF() |     work                FAIL
   MinBLEICCreate()  |     FAIL                work

   Here "FAIL" denotes inappropriate combinations of optimizer creation
   function and MinBLEICOptimize() version. Attempts to use such a
   combination (for example, to create optimizer with MinBLEICCreateF()
   and to pass gradient information to MinBLEICOptimize()) will lead to
   an exception being thrown. Either you did not pass gradient when it
   WAS needed or you passed gradient when it was NOT needed.

  -- ALGLIB --
     Copyright 28.11.2010 by Bochkanov Sergey
*************************************************************************/
void minbleicoptimize(minbleicstate &state, void (*func)(const real_1d_array &x, double &func, void *ptr), void (*rep)(const real_1d_array &x, double func, void *ptr) = NULL, void *ptr = NULL, const xparams _xparams = alglib::xdefault);
void minbleicoptimize(minbleicstate &state, void (*grad)(const real_1d_array &x, double &func, real_1d_array &grad, void *ptr), void (*rep)(const real_1d_array &x, double func, void *ptr) = NULL, void *ptr = NULL, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  

/*************************************************************************
This subroutine submits request for termination of running optimizer. It
should be called from user-supplied callback when user decides that it is
time to "smoothly" terminate optimization process. As result, optimizer
stops at point which was "current accepted" when termination request was
submitted and returns error code 8 (successful termination).

INPUT PARAMETERS:
    State   -   optimizer structure

NOTE: after request for termination optimizer may perform several
      additional calls to user-supplied callbacks. It does NOT guarantee
      to stop immediately - it just guarantees that these additional
      calls will be discarded later.

NOTE: calling this function on optimizer which is NOT running will have
      no effect.

NOTE: multiple calls to this function are possible. First call is
      counted, subsequent calls are silently ignored.

  -- ALGLIB --
     Copyright 08.10.2014 by Bochkanov Sergey
*************************************************************************/
void minbleicrequesttermination(minbleicstate &state, const xparams _xparams = alglib::xdefault);
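A short sketch of how termination is usually requested from a user-supplied rep() callback. It reuses the state and function1_grad objects from the examples in this manual; should_stop() is a hypothetical user-defined predicate, not part of ALGLIB:

```cpp
// Hypothetical external stop condition (e.g. a time limit or a UI flag).
static bool should_stop() { return false; }

// rep() callback: we pass the optimizer state via the ptr argument so
// that the callback can submit a termination request on it.
void my_rep(const real_1d_array &x, double func, void *ptr)
{
    minbleicstate *state = (minbleicstate*)ptr;
    if( should_stop() )
        minbleicrequesttermination(*state); // optimizer will stop "smoothly"
}

// ... inside main, after the optimizer has been created and configured:
// minbleicsetxrep(state, true);                              // enable rep() calls
// alglib::minbleicoptimize(state, function1_grad, my_rep, &state);
// minbleicresults(state, x, rep);  // rep.terminationtype==8 if we stopped it
```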
/*************************************************************************
This subroutine restarts algorithm from new point. All optimization
parameters (including constraints) are left unchanged.

This function allows solving multiple optimization problems (which must
have the same number of dimensions) without object reallocation penalty.

INPUT PARAMETERS:
    State   -   structure previously allocated with MinBLEICCreate call.
    X       -   new starting point.

  -- ALGLIB --
     Copyright 28.11.2010 by Bochkanov Sergey
*************************************************************************/
void minbleicrestartfrom(minbleicstate &state, const real_1d_array &x, const xparams _xparams = alglib::xdefault);
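A minimal sketch of reusing one optimizer object for a second solve, assuming state, function1_grad and rep have already been set up as in the examples in this manual (the new starting point x2 is illustrative):

```cpp
// Solve a second problem of the same dimensionality from a new starting
// point; constraints, scales and stopping conditions are kept as-is, and
// no internal buffers are reallocated.
real_1d_array x2 = "[0.5,-0.5]";
minbleicrestartfrom(state, x2);
alglib::minbleicoptimize(state, function1_grad);
minbleicresults(state, x2, rep);
```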
/*************************************************************************
BLEIC results

INPUT PARAMETERS:
    State   -   algorithm state

OUTPUT PARAMETERS:
    X       -   array[0..N-1], solution
    Rep     -   optimization report. You should check Rep.TerminationType
                in order to distinguish successful termination from
                unsuccessful one:
                * -8    internal integrity control detected infinite or
                        NAN values in function/gradient. Abnormal
                        termination signalled.
                * -3    inconsistent constraints. Feasible point is
                        either nonexistent or too hard to find. Try to
                        restart optimizer with better initial
                        approximation
                *  1    relative function improvement is no more than
                        EpsF.
                *  2    scaled step is no more than EpsX.
                *  4    scaled gradient norm is no more than EpsG.
                *  5    MaxIts steps were taken
                *  8    terminated by user who called
                        minbleicrequesttermination(). X contains point
                        which was "current accepted" when termination
                        request was submitted.
                More information about fields of this structure can be
                found in the comments on MinBLEICReport datatype.

  -- ALGLIB --
     Copyright 28.11.2010 by Bochkanov Sergey
*************************************************************************/
void minbleicresults(const minbleicstate &state, real_1d_array &x, minbleicreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  

/*************************************************************************
BLEIC results

Buffered implementation of MinBLEICResults() which uses pre-allocated
buffer to store X[]. If buffer size is too small, it resizes buffer. It
is intended to be used in the inner cycles of performance critical
algorithms where array reallocation penalty is too large to be ignored.

  -- ALGLIB --
     Copyright 28.11.2010 by Bochkanov Sergey
*************************************************************************/
void minbleicresultsbuf(const minbleicstate &state, real_1d_array &x, minbleicreport &rep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets boundary constraints for BLEIC optimizer.

Boundary constraints are inactive by default (after initial creation).
They are preserved after algorithm restart with MinBLEICRestartFrom().

NOTE: if you have box-only constraints (no general linear constraints),
      then MinBC optimizer can be better option. It uses special, faster
      constraint activation method, which performs better on problems
      with multiple constraints active at the solution.

      On small-scale problems performance of MinBC is similar to that of
      MinBLEIC, but on large-scale ones (hundreds and thousands of active
      constraints) it can be several times faster than MinBLEIC.

INPUT PARAMETERS:
    State   -   structure stores algorithm state
    BndL    -   lower bounds, array[N]. If some (all) variables are
                unbounded, you may specify very small number or -INF.
    BndU    -   upper bounds, array[N]. If some (all) variables are
                unbounded, you may specify very large number or +INF.

NOTE 1: it is possible to specify BndL[i]=BndU[i]. In this case I-th
        variable will be "frozen" at X[i]=BndL[i]=BndU[i].

NOTE 2: this solver has following useful properties:
        * bound constraints are always satisfied exactly
        * function is evaluated only INSIDE area specified by bound
          constraints, even when numerical differentiation is used
          (algorithm adjusts nodes according to boundary constraints)

  -- ALGLIB --
     Copyright 28.11.2010 by Bochkanov Sergey
*************************************************************************/
void minbleicsetbc(minbleicstate &state, const real_1d_array &bndl, const real_1d_array &bndu, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/*************************************************************************
This function sets stopping conditions for the optimizer.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    EpsG    -   >=0
                The subroutine finishes its work if the condition
                |v|<EpsG is satisfied, where:
                * |.| means Euclidean norm
                * v - scaled gradient vector, v[i]=g[i]*s[i]
                * g - gradient
                * s - scaling coefficients set by MinBLEICSetScale()
    EpsF    -   >=0
                The subroutine finishes its work if on k+1-th iteration
                the condition |F(k+1)-F(k)|<=EpsF*max{|F(k)|,|F(k+1)|,1}
                is satisfied.
    EpsX    -   >=0
                The subroutine finishes its work if on k+1-th iteration
                the condition |v|<=EpsX is fulfilled, where:
                * |.| means Euclidean norm
                * v - scaled step vector, v[i]=dx[i]/s[i]
                * dx - step vector, dx=X(k+1)-X(k)
                * s - scaling coefficients set by MinBLEICSetScale()
    MaxIts  -   maximum number of iterations. If MaxIts=0, the number of
                iterations is unlimited.

Passing EpsG=0, EpsF=0 and EpsX=0 and MaxIts=0 (simultaneously) will lead
to automatic stopping criterion selection.

NOTE: when SetCond() is called with non-zero MaxIts, the BLEIC solver may
      perform slightly more than MaxIts iterations. I.e., MaxIts sets a
      non-strict limit on iterations count.

  -- ALGLIB --
     Copyright 28.11.2010 by Bochkanov Sergey
*************************************************************************/
void minbleicsetcond(minbleicstate &state, const double epsg, const double epsf, const double epsx, const ae_int_t maxits, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  

/*************************************************************************
This function sets linear constraints for BLEIC optimizer.

Linear constraints are inactive by default (after initial creation). They
are preserved after algorithm restart with MinBLEICRestartFrom().

INPUT PARAMETERS:
    State   -   structure previously allocated with MinBLEICCreate call.
    C       -   linear constraints, array[K,N+1]. Each row of C
                represents one constraint, either equality or inequality
                (see below):
                * first N elements correspond to coefficients,
                * last element corresponds to the right part.
                All elements of C (including right part) must be finite.
    CT      -   type of constraints, array[K]:
                * if CT[i]>0, then I-th constraint is C[i,*]*x >= C[i,n]
                * if CT[i]=0, then I-th constraint is C[i,*]*x  = C[i,n]
                * if CT[i]<0, then I-th constraint is C[i,*]*x <= C[i,n]
    K       -   number of equality/inequality constraints, K>=0:
                * if given, only leading K elements of C/CT are used
                * if not given, automatically determined from sizes of
                  C/CT

NOTE 1: linear (non-bound) constraints are satisfied only approximately:
        * there always exists some minor violation (about Epsilon in
          magnitude) due to rounding errors
        * numerical differentiation, if used, may lead to function
          evaluations outside of the feasible area, because algorithm
          does NOT change numerical differentiation formula according to
          linear constraints.

If you want constraints to be satisfied exactly, try to reformulate your
problem in such manner that all constraints will become boundary ones
(this kind of constraints is always satisfied exactly, both in the final
solution and in all intermediate points).

  -- ALGLIB --
     Copyright 28.11.2010 by Bochkanov Sergey
*************************************************************************/
void minbleicsetlc(minbleicstate &state, const real_2d_array &c, const integer_1d_array &ct, const ae_int_t k, const xparams _xparams = alglib::xdefault);
void minbleicsetlc(minbleicstate &state, const real_2d_array &c, const integer_1d_array &ct, const xparams _xparams = alglib::xdefault);
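A short sketch of the C/CT encoding described above, assuming a 2-variable problem and a state object created as in the examples in this manual. The two constraints (x>=2 and x+y>=6) mirror the second example program below:

```cpp
// Each row of c is [coeff_0, coeff_1, right_part]:
//   row 0:  1*x + 0*y >= 2
//   row 1:  1*x + 1*y >= 6
// ct[i]>0 selects the ">=" form for row i (ct[i]=0 would mean "=",
// ct[i]<0 would mean "<=").
real_2d_array c = "[[1,0,2],[1,1,6]]";
integer_1d_array ct = "[1,1]";
minbleicsetlc(state, c, ct); // K is inferred from the sizes of c/ct
```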

Examples:   [1]  

/*************************************************************************
Modification of the preconditioner: preconditioning is turned off.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state

  -- ALGLIB --
     Copyright 13.10.2010 by Bochkanov Sergey
*************************************************************************/
void minbleicsetprecdefault(minbleicstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Modification of the preconditioner: diagonal of approximate Hessian is
used.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    D       -   diagonal of the approximate Hessian, array[0..N-1],
                (if larger, only leading N elements are used).

NOTE 1: D[i] should be positive. Exception will be thrown otherwise.

NOTE 2: you should pass diagonal of approximate Hessian - NOT ITS
        INVERSE.

  -- ALGLIB --
     Copyright 13.10.2010 by Bochkanov Sergey
*************************************************************************/
void minbleicsetprecdiag(minbleicstate &state, const real_1d_array &d, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Modification of the preconditioner: scale-based diagonal preconditioning.

This preconditioning mode can be useful when you don't have approximate
diagonal of Hessian, but you know that your variables are badly scaled
(for example, one variable is in [1,10], and another in [1000,100000]),
and most part of the ill-conditioning comes from different scales of
vars.

In this case simple scale-based preconditioner, with H[i] = 1/(s[i]^2),
can greatly improve convergence.

IMPORTANT: you should set scale of your variables with
           MinBLEICSetScale() call (before or after
           MinBLEICSetPrecScale() call). Without knowledge of the scale
           of your variables scale-based preconditioner will be just
           unit matrix.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state

  -- ALGLIB --
     Copyright 13.10.2010 by Bochkanov Sergey
*************************************************************************/
void minbleicsetprecscale(minbleicstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets scaling coefficients for BLEIC optimizer.

ALGLIB optimizers use scaling matrices to test stopping conditions (step
size and gradient are scaled before comparison with tolerances). Scale of
the I-th variable is a translation invariant measure of:
a) "how large" the variable is
b) how large the step should be to make significant changes in the
   function

Scaling is also used by finite difference variant of the optimizer - step
along I-th axis is equal to DiffStep*S[I].

In most optimizers (and in the BLEIC too) scaling is NOT a form of
preconditioning. It just affects stopping conditions. You should set
preconditioner by separate call to one of the MinBLEICSetPrec...()
functions.

There is a special preconditioning mode, however, which uses scaling
coefficients to form diagonal preconditioning matrix. You can turn this
mode on, if you want. But you should understand that scaling is not the
same thing as preconditioning - these are two different, although
related, forms of tuning solver.

INPUT PARAMETERS:
    State   -   structure stores algorithm state
    S       -   array[N], non-zero scaling coefficients
                S[i] may be negative, sign doesn't matter.

  -- ALGLIB --
     Copyright 14.01.2011 by Bochkanov Sergey
*************************************************************************/
void minbleicsetscale(minbleicstate &state, const real_1d_array &s, const xparams _xparams = alglib::xdefault);
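A minimal sketch of setting scales for badly scaled variables (the magnitudes are illustrative; state is an optimizer object created as in the examples in this manual):

```cpp
// Suppose x0 varies on the order of 1 and x1 on the order of 10000.
// Scales feed the stopping conditions and finite-difference steps; they
// do NOT act as a preconditioner by themselves.
real_1d_array s = "[1,10000]";
minbleicsetscale(state, s);

// Optionally reuse the same scales as a diagonal preconditioner,
// H[i] = 1/(s[i]^2):
minbleicsetprecscale(state);
```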
/*************************************************************************
This function sets maximum step length

IMPORTANT: this feature is hard to combine with preconditioning. You
           can't set upper limit on step length, when you solve
           optimization problem with linear (non-boundary) constraints
           AND preconditioner turned on.

           When non-boundary constraints are present, you have to either
           a) use preconditioner, or b) use upper limit on step length.
           YOU CAN'T USE BOTH! In this case algorithm will terminate with
           appropriate error code.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    StpMax  -   maximum step length, >=0. Set StpMax to 0.0, if you don't
                want to limit step length.

Use this subroutine when you optimize target function which contains
exp() or other fast growing functions, and optimization algorithm makes
too large steps which lead to overflow. This function allows us to reject
steps that are too large (and therefore expose us to the possible
overflow) without actually calculating function value at the x+stp*d.

  -- ALGLIB --
     Copyright 02.04.2010 by Bochkanov Sergey
*************************************************************************/
void minbleicsetstpmax(minbleicstate &state, const double stpmax, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function turns on/off reporting.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    NeedXRep-   whether iteration reports are needed or not

If NeedXRep is True, algorithm will call rep() callback function if it is
provided to MinBLEICOptimize().

  -- ALGLIB --
     Copyright 28.11.2010 by Bochkanov Sergey
*************************************************************************/
void minbleicsetxrep(minbleicstate &state, const bool needxrep, const xparams _xparams = alglib::xdefault);
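A short sketch of wiring up iteration reports, reusing the state and function1_grad objects from the examples that follow (the print_progress callback is illustrative):

```cpp
// rep() callback: invoked after each iteration once reporting is enabled;
// receives the current point and the current function value.
void print_progress(const real_1d_array &x, double func, void *ptr)
{
    printf("f = %.6f at %s\n", func, x.tostring(4).c_str());
}

// ... inside main, after the optimizer has been created and configured:
// minbleicsetxrep(state, true);
// alglib::minbleicoptimize(state, function1_grad, print_progress);
```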
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void function1_grad(const real_1d_array &x, double &func, real_1d_array &grad, void *ptr) 
{
    //
    // this callback calculates f(x0,x1) = 100*(x0+3)^4 + (x1-3)^4
    // and its derivatives df/dx0 and df/dx1
    //
    func = 100*pow(x[0]+3,4) + pow(x[1]-3,4);
    grad[0] = 400*pow(x[0]+3,3);
    grad[1] = 4*pow(x[1]-3,3);
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of
        //
        //     f(x,y) = 100*(x+3)^4+(y-3)^4
        //
        // subject to box constraints
        //
        //     -1<=x<=+1, -1<=y<=+1
        //
        // using BLEIC optimizer with:
        // * initial point x=[0,0]
        // * unit scale being set for all variables (see minbleicsetscale for more info)
        // * stopping criteria set to "terminate after short enough step"
        // * OptGuard integrity check being used to check problem statement
        //   for some common errors like nonsmoothness or bad analytic gradient
        //
        // First, we create optimizer object and tune its properties:
        // * set box constraints
        // * set variable scales
        // * set stopping criteria
        //
        real_1d_array x = "[0,0]";
        real_1d_array s = "[1,1]";
        real_1d_array bndl = "[-1,-1]";
        real_1d_array bndu = "[+1,+1]";
        double epsg = 0;
        double epsf = 0;
        double epsx = 0.000001;
        ae_int_t maxits = 0;
        minbleicstate state;
        minbleiccreate(x, state);
        minbleicsetbc(state, bndl, bndu);
        minbleicsetscale(state, s);
        minbleicsetcond(state, epsg, epsf, epsx, maxits);

        //
        // Then we activate OptGuard integrity checking.
        //
        // OptGuard monitor helps to catch common coding and problem statement
        // issues, like:
        // * discontinuity of the target function (C0 continuity violation)
        // * nonsmoothness of the target function (C1 continuity violation)
        // * erroneous analytic gradient, i.e. one inconsistent with actual
        //   change in the target/constraints
        //
        // OptGuard is essential for early prototyping stages because such
        // problems often result in premature termination of the optimizer
        // which is really hard to distinguish from the correct termination.
        //
        // IMPORTANT: GRADIENT VERIFICATION IS PERFORMED BY MEANS OF NUMERICAL
        //            DIFFERENTIATION. DO NOT USE IT IN PRODUCTION CODE!!!!!!!
        //
        //            Other OptGuard checks add moderate overhead, but anyway
        //            it is better to turn them off when they are not needed.
        //
        minbleicoptguardsmoothness(state);
        minbleicoptguardgradient(state, 0.001);

        //
        // Optimize and evaluate results
        //
        minbleicreport rep;
        alglib::minbleicoptimize(state, function1_grad);
        minbleicresults(state, x, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 4
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [-1,1]

        //
        // Check that OptGuard did not report errors
        //
        // NOTE: want to test OptGuard? Try breaking the gradient - say, add
        //       1.0 to some of its components.
        //
        optguardreport ogrep;
        minbleicoptguardresults(state, ogrep);
        printf("%s\n", ogrep.badgradsuspected ? "true" : "false"); // EXPECTED: false
        printf("%s\n", ogrep.nonc0suspected ? "true" : "false"); // EXPECTED: false
        printf("%s\n", ogrep.nonc1suspected ? "true" : "false"); // EXPECTED: false
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void function1_grad(const real_1d_array &x, double &func, real_1d_array &grad, void *ptr) 
{
    //
    // this callback calculates f(x0,x1) = 100*(x0+3)^4 + (x1-3)^4
    // and its derivatives df/dx0 and df/dx1
    //
    func = 100*pow(x[0]+3,4) + pow(x[1]-3,4);
    grad[0] = 400*pow(x[0]+3,3);
    grad[1] = 4*pow(x[1]-3,3);
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of
        //
        //     f(x,y) = 100*(x+3)^4+(y-3)^4
        //
        // subject to inequality constraints
        //
        // * x>=2 (posed as general linear constraint),
        // * x+y>=6
        //
        // using BLEIC optimizer with
        // * initial point x=[0,0]
        // * unit scale being set for all variables (see minbleicsetscale for more info)
        // * stopping criteria set to "terminate after short enough step"
        // * OptGuard integrity check being used to check problem statement
        //   for some common errors like nonsmoothness or bad analytic gradient
        //
        // First, we create optimizer object and tune its properties:
        // * set linear constraints
        // * set variable scales
        // * set stopping criteria
        //
        real_1d_array x = "[5,5]";
        real_1d_array s = "[1,1]";
        real_2d_array c = "[[1,0,2],[1,1,6]]";
        integer_1d_array ct = "[1,1]";
        minbleicstate state;
        double epsg = 0;
        double epsf = 0;
        double epsx = 0.000001;
        ae_int_t maxits = 0;

        minbleiccreate(x, state);
        minbleicsetlc(state, c, ct);
        minbleicsetscale(state, s);
        minbleicsetcond(state, epsg, epsf, epsx, maxits);

        //
        // Then we activate OptGuard integrity checking.
        //
        // OptGuard monitor helps to catch common coding and problem statement
        // issues, like:
        // * discontinuity of the target function (C0 continuity violation)
        // * nonsmoothness of the target function (C1 continuity violation)
        // * erroneous analytic gradient, i.e. one inconsistent with actual
        //   change in the target/constraints
        //
        // OptGuard is essential for early prototyping stages because such
        // problems often result in premature termination of the optimizer
        // which is really hard to distinguish from the correct termination.
        //
        // IMPORTANT: GRADIENT VERIFICATION IS PERFORMED BY MEANS OF NUMERICAL
        //            DIFFERENTIATION. DO NOT USE IT IN PRODUCTION CODE!!!!!!!
        //
        //            Other OptGuard checks add moderate overhead, but anyway
        //            it is better to turn them off when they are not needed.
        //
        minbleicoptguardsmoothness(state);
        minbleicoptguardgradient(state, 0.001);

        //
        // Optimize and evaluate results
        //
        minbleicreport rep;
        alglib::minbleicoptimize(state, function1_grad);
        minbleicresults(state, x, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 4
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [2,4]

        //
        // Check that OptGuard did not report errors
        //
        // NOTE: want to test OptGuard? Try breaking the gradient - say, add
        //       1.0 to some of its components.
        //
        optguardreport ogrep;
        minbleicoptguardresults(state, ogrep);
        printf("%s\n", ogrep.badgradsuspected ? "true" : "false"); // EXPECTED: false
        printf("%s\n", ogrep.nonc0suspected ? "true" : "false"); // EXPECTED: false
        printf("%s\n", ogrep.nonc1suspected ? "true" : "false"); // EXPECTED: false
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void function1_func(const real_1d_array &x, double &func, void *ptr)
{
    //
    // this callback calculates f(x0,x1) = 100*(x0+3)^4 + (x1-3)^4
    //
    func = 100*pow(x[0]+3,4) + pow(x[1]-3,4);
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of
        //
        //     f(x,y) = 100*(x+3)^4+(y-3)^4
        //
        // subject to box constraints
        //
        //     -1<=x<=+1, -1<=y<=+1
        //
        // using BLEIC optimizer with:
        // * numerical differentiation being used
        // * initial point x=[0,0]
        // * unit scale being set for all variables (see minbleicsetscale for more info)
        // * stopping criteria set to "terminate after short enough step"
        // * OptGuard integrity check being used to check problem statement
        //   for some common errors like nonsmoothness or bad analytic gradient
        //
        // First, we create optimizer object and tune its properties:
        // * set box constraints
        // * set variable scales
        // * set stopping criteria
        //
        real_1d_array x = "[0,0]";
        real_1d_array s = "[1,1]";
        real_1d_array bndl = "[-1,-1]";
        real_1d_array bndu = "[+1,+1]";
        minbleicstate state;
        double epsg = 0;
        double epsf = 0;
        double epsx = 0.000001;
        ae_int_t maxits = 0;
        double diffstep = 1.0e-6;

        minbleiccreatef(x, diffstep, state);
        minbleicsetbc(state, bndl, bndu);
        minbleicsetscale(state, s);
        minbleicsetcond(state, epsg, epsf, epsx, maxits);

        //
        // Then we activate OptGuard integrity checking.
        //
        // Numerical differentiation always produces "correct" gradient
        // (with some truncation error, but unbiased). Thus, we just have
        // to check smoothness properties of the target: C0 and C1 continuity.
        //
        // Sometimes user accidentally tries to solve nonsmooth problems
        // with smooth optimizer. OptGuard helps to detect such situations
        // early, at the prototyping stage.
        //
        minbleicoptguardsmoothness(state);

        //
        // Optimize and evaluate results
        //
        minbleicreport rep;
        alglib::minbleicoptimize(state, function1_func);
        minbleicresults(state, x, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 4
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [-1,1]

        //
        // Check that OptGuard did not report errors
        //
        // Want to challenge OptGuard? Try to make your problem
        // nonsmooth by replacing 100*(x+3)^4 by 100*|x+3| and
        // re-run optimizer.
        //
        optguardreport ogrep;
        minbleicoptguardresults(state, ogrep);
        printf("%s\n", ogrep.nonc0suspected ? "true" : "false"); // EXPECTED: false
        printf("%s\n", ogrep.nonc1suspected ? "true" : "false"); // EXPECTED: false
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

mincgreport
mincgstate
mincgcreate
mincgcreatef
mincgiteration
mincgoptguardgradient
mincgoptguardnonc1test0results
mincgoptguardnonc1test1results
mincgoptguardresults
mincgoptguardsmoothness
mincgoptimize
mincgrequesttermination
mincgrestartfrom
mincgresults
mincgresultsbuf
mincgsetcgtype
mincgsetcond
mincgsetprecdefault
mincgsetprecdiag
mincgsetprecscale
mincgsetscale
mincgsetstpmax
mincgsetxrep
mincgsuggeststep
mincg_d_1 Nonlinear optimization by CG
mincg_d_2 Nonlinear optimization with additional settings and restarts
mincg_numdiff Nonlinear optimization by CG with numerical differentiation
/*************************************************************************
This structure stores optimization report:
* IterationsCount   total number of inner iterations
* NFEV              number of gradient evaluations
* TerminationType   termination type (see below)

TERMINATION CODES

TerminationType field contains completion code, which can be:
  -8    internal integrity control detected infinite or NAN values in
        function/gradient. Abnormal termination signalled.
   1    relative function improvement is no more than EpsF.
   2    relative step is no more than EpsX.
   4    gradient norm is no more than EpsG
   5    MaxIts steps were taken
   7    stopping conditions are too stringent, further improvement is
        impossible, X contains best point found so far.
   8    terminated by user who called mincgrequesttermination(). X
        contains point which was "current accepted" when termination
        request was submitted.

Other fields of this structure are not documented and should not be used!
*************************************************************************/
class mincgreport { public: mincgreport(); mincgreport(const mincgreport &rhs); mincgreport& operator=(const mincgreport &rhs); virtual ~mincgreport(); ae_int_t iterationscount; ae_int_t nfev; ae_int_t terminationtype; };
/*************************************************************************
This object stores state of the nonlinear CG optimizer.

You should use ALGLIB functions to work with this object.
*************************************************************************/
class mincgstate { public: mincgstate(); mincgstate(const mincgstate &rhs); mincgstate& operator=(const mincgstate &rhs); virtual ~mincgstate(); };
/*************************************************************************
        NONLINEAR CONJUGATE GRADIENT METHOD

DESCRIPTION:
The subroutine minimizes function F(x) of N arguments by using one of the
nonlinear conjugate gradient methods. These CG methods are globally
convergent (even on non-convex functions) as long as grad(f) is Lipschitz
continuous in some neighborhood of the level set L = { x : f(x)<=f(x0) }.

REQUIREMENTS:
Algorithm will request following information during its operation:
* function value F and its gradient G (simultaneously) at given point X

USAGE:
1. User initializes algorithm state with MinCGCreate() call
2. User tunes solver parameters with MinCGSetCond(), MinCGSetStpMax() and
   other functions
3. User calls MinCGOptimize() function which takes algorithm state and
   pointer (delegate, etc.) to callback function which calculates F/G.
4. User calls MinCGResults() to get solution
5. Optionally, user may call MinCGRestartFrom() to solve another problem
   with same N but another starting point and/or another function.
   MinCGRestartFrom() allows to reuse already initialized structure.

INPUT PARAMETERS:
    N       -   problem dimension, N>0:
                * if given, only leading N elements of X are used
                * if not given, automatically determined from size of X
    X       -   starting point, array[0..N-1].

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

  -- ALGLIB --
     Copyright 25.03.2010 by Bochkanov Sergey
*************************************************************************/
void mincgcreate(const ae_int_t n, const real_1d_array &x, mincgstate &state, const xparams _xparams = alglib::xdefault); void mincgcreate(const real_1d_array &x, mincgstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/*************************************************************************
The subroutine is finite difference variant of MinCGCreate(). It uses
finite differences in order to differentiate target function.

Description below contains information which is specific to this function
only. We recommend to read comments on MinCGCreate() in order to get more
information about creation of CG optimizer.

INPUT PARAMETERS:
    N       -   problem dimension, N>0:
                * if given, only leading N elements of X are used
                * if not given, automatically determined from size of X
    X       -   starting point, array[0..N-1].
    DiffStep-   differentiation step, >0

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

NOTES:
1. algorithm uses 4-point central formula for differentiation.
2. differentiation step along I-th axis is equal to DiffStep*S[I] where
   S[] is scaling vector which can be set by MinCGSetScale() call.
3. we recommend you to use moderate values of differentiation step. Too
   large step will result in too large truncation errors, while too small
   step will result in too large numerical errors. 1.0E-6 can be good
   value to start with.
4. Numerical differentiation is very inefficient - one gradient
   calculation needs 4*N function evaluations. This function will work
   for any N - either small (1...10), moderate (10...100) or large
   (100...). However, performance penalty will be too severe for any N's
   except for small ones.
   We should also say that code which relies on numerical differentiation
   is less robust and precise. CG needs exact gradient values. Imprecise
   gradient may slow down convergence, especially on highly nonlinear
   problems. Thus we recommend to use this function for fast prototyping
   on small-dimensional problems only, and to implement analytical
   gradient as soon as possible.

  -- ALGLIB --
     Copyright 16.05.2011 by Bochkanov Sergey
*************************************************************************/
void mincgcreatef(const ae_int_t n, const real_1d_array &x, const double diffstep, mincgstate &state, const xparams _xparams = alglib::xdefault); void mincgcreatef(const real_1d_array &x, const double diffstep, mincgstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This function provides reverse communication interface
Reverse communication interface is not documented or recommended to use.
See below for functions which provide better documented API
*************************************************************************/
bool mincgiteration(mincgstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function activates/deactivates verification of the user-supplied
analytic gradient.

Upon activation of this option OptGuard integrity checker performs
numerical differentiation of your target function at the initial point
(note: future versions may also perform check at the final point) and
compares numerical gradient with analytic one provided by you. If
difference is too large, an error flag is set and optimization session
continues. After optimization session is over, you can retrieve the
report which stores both gradients and specific components highlighted as
suspicious by the OptGuard.

The primary OptGuard report can be retrieved with mincgoptguardresults().

IMPORTANT: gradient check is a high-overhead option which will cost you
           about 3*N additional function evaluations. In many cases it
           may cost as much as the rest of the optimization session.

           YOU SHOULD NOT USE IT IN THE PRODUCTION CODE UNLESS YOU WANT
           TO CHECK DERIVATIVES PROVIDED BY SOME THIRD PARTY.

NOTE: unlike previous incarnation of the gradient checking code, OptGuard
      does NOT interrupt optimization even if it discovers bad gradient.

INPUT PARAMETERS:
    State       -   structure used to store algorithm state
    TestStep    -   verification step used for numerical differentiation:
                    * TestStep=0 turns verification off
                    * TestStep>0 activates verification
                    You should carefully choose TestStep. Value which is
                    too large (so large that function behavior is
                    non-cubic at this scale) will lead to false alarms.
                    Too short step will result in rounding errors
                    dominating numerical derivative.
                    You may use different step for different parameters
                    by means of setting scale with mincgsetscale().

=== EXPLANATION ==========================================================

In order to verify gradient algorithm performs following steps:
  * two trial steps are made to X[i]-TestStep*S[i] and X[i]+TestStep*S[i],
    where X[i] is i-th component of the initial point and S[i] is a scale
    of i-th parameter
  * F(X) is evaluated at these trial points
  * we perform one more evaluation in the middle point of the interval
  * we build cubic model using function values and derivatives at trial
    points and we compare its prediction with actual value in the middle
    point

  -- ALGLIB --
     Copyright 15.06.2014 by Bochkanov Sergey
*************************************************************************/
void mincgoptguardgradient(mincgstate &state, const double teststep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Detailed results of the OptGuard integrity check for nonsmoothness test #0

Nonsmoothness (non-C1) test #0 studies function values (not gradient!)
obtained during line searches and monitors behavior of the directional
derivative estimate.

This test is less powerful than test #1, but it does not depend on the
gradient values and thus it is more robust against artifacts introduced
by numerical differentiation.

Two reports are returned:
* a "strongest" one, corresponding to line search which had highest value
  of the nonsmoothness indicator
* a "longest" one, corresponding to line search which had more function
  evaluations, and thus is more detailed

In both cases following fields are returned:
* positive - is TRUE when test flagged suspicious point; FALSE if test
  did not notice anything (in the latter cases fields below are empty).
* x0[], d[] - arrays of length N which store initial point and direction
  for line search (d[] can be normalized, but does not have to)
* stp[], f[] - arrays of length CNT which store step lengths and function
  values at these points; f[i] is evaluated in x0+stp[i]*d.
* stpidxa, stpidxb - we suspect that function violates C1 continuity
  between steps #stpidxa and #stpidxb (usually we have stpidxb=stpidxa+3,
  with most likely position of the violation between stpidxa+1 and
  stpidxa+2).

==========================================================================
= SHORTLY SPEAKING: build a 2D plot of (stp,f) and look at it - you will
=                   see where C1 continuity is violated.
==========================================================================

INPUT PARAMETERS:
    state   -   algorithm state

OUTPUT PARAMETERS:
    strrep  -   C1 test #0 "strong" report
    lngrep  -   C1 test #0 "long" report

  -- ALGLIB --
     Copyright 21.11.2018 by Bochkanov Sergey
*************************************************************************/
void mincgoptguardnonc1test0results(const mincgstate &state, optguardnonc1test0report &strrep, optguardnonc1test0report &lngrep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Detailed results of the OptGuard integrity check for nonsmoothness test #1

Nonsmoothness (non-C1) test #1 studies individual components of the
gradient computed during line search.

When precise analytic gradient is provided this test is more powerful
than test #0 which works with function values and ignores user-provided
gradient. However, test #0 becomes more powerful when numerical
differentiation is employed (in such cases test #1 detects higher levels
of numerical noise and becomes too conservative).

This test also tells specific components of the gradient which violate C1
continuity, which makes it more informative than #0, which just tells
that continuity is violated.

Two reports are returned:
* a "strongest" one, corresponding to line search which had highest value
  of the nonsmoothness indicator
* a "longest" one, corresponding to line search which had more function
  evaluations, and thus is more detailed

In both cases following fields are returned:
* positive - is TRUE when test flagged suspicious point; FALSE if test
  did not notice anything (in the latter cases fields below are empty).
* vidx - is an index of the variable in [0,N) with nonsmooth derivative
* x0[], d[] - arrays of length N which store initial point and direction
  for line search (d[] can be normalized, but does not have to)
* stp[], g[] - arrays of length CNT which store step lengths and gradient
  values at these points; g[i] is evaluated in x0+stp[i]*d and contains
  vidx-th component of the gradient.
* stpidxa, stpidxb - we suspect that function violates C1 continuity
  between steps #stpidxa and #stpidxb (usually we have stpidxb=stpidxa+3,
  with most likely position of the violation between stpidxa+1 and
  stpidxa+2).

==========================================================================
= SHORTLY SPEAKING: build a 2D plot of (stp,g) and look at it - you will
=                   see where C1 continuity is violated.
==========================================================================

INPUT PARAMETERS:
    state   -   algorithm state

OUTPUT PARAMETERS:
    strrep  -   C1 test #1 "strong" report
    lngrep  -   C1 test #1 "long" report

  -- ALGLIB --
     Copyright 21.11.2018 by Bochkanov Sergey
*************************************************************************/
void mincgoptguardnonc1test1results(mincgstate &state, optguardnonc1test1report &strrep, optguardnonc1test1report &lngrep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Results of OptGuard integrity check, should be called after optimization
session is over.

=== PRIMARY REPORT =======================================================

OptGuard performs several checks which are intended to catch common
errors in the implementation of nonlinear function/gradient:
* incorrect analytic gradient
* discontinuous (non-C0) target functions (constraints)
* nonsmooth (non-C1) target functions (constraints)

Each of these checks is activated with appropriate function:
* mincgoptguardgradient() for gradient verification
* mincgoptguardsmoothness() for C0/C1 checks

Following flags are set when these errors are suspected:
* rep.badgradsuspected, and additionally:
  * rep.badgradvidx for specific variable (gradient element) suspected
  * rep.badgradxbase, a point where gradient is tested
  * rep.badgraduser, user-provided gradient (stored as 2D matrix with
    single row in order to make report structure compatible with more
    complex optimizers like MinNLC or MinLM)
  * rep.badgradnum, reference gradient obtained via numerical
    differentiation (stored as 2D matrix with single row in order to make
    report structure compatible with more complex optimizers like MinNLC
    or MinLM)
* rep.nonc0suspected
* rep.nonc1suspected

=== ADDITIONAL REPORTS/LOGS ==============================================

Several different tests are performed to catch C0/C1 errors; you can find
out which specific test signaled the error by looking at:
* rep.nonc0test0positive, for non-C0 test #0
* rep.nonc1test0positive, for non-C1 test #0
* rep.nonc1test1positive, for non-C1 test #1

Additional information (including line search logs) can be obtained by
means of:
* mincgoptguardnonc1test0results()
* mincgoptguardnonc1test1results()
which return detailed error reports, specific points where
discontinuities were found, and so on.

==========================================================================

INPUT PARAMETERS:
    state   -   algorithm state

OUTPUT PARAMETERS:
    rep     -   generic OptGuard report; more detailed reports can be
                retrieved with other functions.

NOTE: false negatives (nonsmooth problems are not identified as nonsmooth
      ones) are possible although unlikely.

      The reason is that you need to make several evaluations around
      nonsmoothness in order to accumulate enough information about
      function curvature. Say, if you start right from the nonsmooth
      point, optimizer simply won't get enough data to understand what is
      going wrong before it terminates due to abrupt changes in the
      derivative. It is also possible that "unlucky" step will move us to
      the termination too quickly.

      Our current approach is to have less than 0.1% false negatives in
      our test examples (measured with multiple restarts from random
      points), and to have exactly 0% false positives.

  -- ALGLIB --
     Copyright 21.11.2018 by Bochkanov Sergey
*************************************************************************/
void mincgoptguardresults(mincgstate &state, optguardreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This function activates/deactivates nonsmoothness monitoring option of
the OptGuard integrity checker. Smoothness monitor silently observes
solution process and tries to detect ill-posed problems, i.e. ones with:
a) discontinuous target function (non-C0)
b) nonsmooth target function (non-C1)

Smoothness monitoring does NOT interrupt optimization even if it suspects
that your problem is nonsmooth. It just sets corresponding flags in the
OptGuard report which can be retrieved after optimization is over.

Smoothness monitoring is a moderate overhead option which often adds less
than 1% to the optimizer running time. Thus, you can use it even for
large scale problems.

NOTE: OptGuard does NOT guarantee that it will always detect C0/C1
      continuity violations.

      First, minor errors are hard to catch - say, a 0.0001 difference in
      the model values at two sides of the gap may be due to
      discontinuity of the model - or simply because the model has
      changed.

      Second, C1-violations are especially difficult to detect in a
      noninvasive way. The optimizer usually performs very short steps
      near the nonsmoothness, and differentiation usually introduces a
      lot of numerical noise. It is hard to tell whether some tiny
      discontinuity in the slope is due to real nonsmoothness or just due
      to numerical noise alone.

      Our top priority was to avoid false positives, so in some rare
      cases minor errors may go unnoticed (however, in most cases they
      can be spotted with restart from different initial point).

INPUT PARAMETERS:
    state   -   algorithm state
    level   -   monitoring level:
                * 0 - monitoring is disabled
                * 1 - noninvasive low-overhead monitoring; function
                      values and/or gradients are recorded, but OptGuard
                      does not try to perform additional evaluations in
                      order to get more information about suspicious
                      locations.

=== EXPLANATION ==========================================================

One major source of headache during optimization is the possibility of
coding errors in the target function/constraints (or their gradients).
Such errors most often manifest themselves as discontinuity or
nonsmoothness of the target/constraints.

Another frequent situation is when you try to optimize something
involving lots of min() and max() operations, i.e. nonsmooth target.
Although not a coding error, it is nonsmoothness anyway - and smooth
optimizers usually stop right after encountering nonsmoothness, well
before reaching solution.

OptGuard integrity checker helps you to catch such situations: it
monitors function values/gradients being passed to the optimizer and
tries to detect errors. Upon discovering suspicious pair of points it
raises appropriate flag (and allows you to continue optimization). When
optimization is done, you can study OptGuard result.

  -- ALGLIB --
     Copyright 21.11.2018 by Bochkanov Sergey
*************************************************************************/
void mincgoptguardsmoothness(mincgstate &state, const ae_int_t level, const xparams _xparams = alglib::xdefault); void mincgoptguardsmoothness(mincgstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This family of functions is used to launch iterations of nonlinear
optimizer

These functions accept following parameters:
    state   -   algorithm state
    func    -   callback which calculates function (or merit function)
                value func at given point x
    grad    -   callback which calculates function (or merit function)
                value func and gradient grad at given point x
    rep     -   optional callback which is called after each iteration
                can be NULL
    ptr     -   optional pointer which is passed to func/grad/hess/jac/rep
                can be NULL

NOTES:

1. This function has two different implementations: one which uses exact
   (analytical) user-supplied gradient, and one which uses function value
   only and numerically differentiates function in order to obtain
   gradient.

   Depending on the specific function used to create optimizer object
   (either MinCGCreate() for analytical gradient or MinCGCreateF() for
   numerical differentiation) you should choose appropriate variant of
   MinCGOptimize() - one which accepts function AND gradient or one which
   accepts function ONLY.

   Be careful to choose variant of MinCGOptimize() which corresponds to
   your optimization scheme! Table below lists different combinations of
   callback (function/gradient) passed to MinCGOptimize() and specific
   function used to create optimizer.

                     |         USER PASSED TO MinCGOptimize()
   CREATED WITH      |  function only  |  function and gradient
   ------------------------------------------------------------
   MinCGCreateF()    |      work       |         FAIL
   MinCGCreate()     |      FAIL       |         work

   Here "FAIL" denotes inappropriate combinations of optimizer creation
   function and MinCGOptimize() version. Attempts to use such combination
   (for example, to create optimizer with MinCGCreateF() and to pass
   gradient information to MinCGOptimize()) will lead to exception being
   thrown. Either you did not pass gradient when it WAS needed or you
   passed gradient when it was NOT needed.

  -- ALGLIB --
     Copyright 20.04.2009 by Bochkanov Sergey
*************************************************************************/
void mincgoptimize(mincgstate &state, void (*func)(const real_1d_array &x, double &func, void *ptr), void (*rep)(const real_1d_array &x, double func, void *ptr) = NULL, void *ptr = NULL, const xparams _xparams = alglib::xdefault); void mincgoptimize(mincgstate &state, void (*grad)(const real_1d_array &x, double &func, real_1d_array &grad, void *ptr), void (*rep)(const real_1d_array &x, double func, void *ptr) = NULL, void *ptr = NULL, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  

/*************************************************************************
This subroutine submits request for termination of running optimizer. It
should be called from user-supplied callback when user decides that it is
time to "smoothly" terminate optimization process. As a result, optimizer
stops at point which was "current accepted" when termination request was
submitted and returns error code 8 (successful termination).

INPUT PARAMETERS:
    State   -   optimizer structure

NOTE: after request for termination optimizer may perform several
      additional calls to user-supplied callbacks. It does NOT guarantee
      to stop immediately - it just guarantees that these additional
      calls will be discarded later.

NOTE: calling this function on optimizer which is NOT running will have
      no effect.

NOTE: multiple calls to this function are possible. First call is
      counted, subsequent calls are silently ignored.

  -- ALGLIB --
     Copyright 08.10.2014 by Bochkanov Sergey
*************************************************************************/
void mincgrequesttermination(mincgstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This subroutine restarts CG algorithm from new point. All optimization
parameters are left unchanged.

This function allows to solve multiple optimization problems (which must
have same number of dimensions) without object reallocation penalty.

INPUT PARAMETERS:
    State   -   structure used to store algorithm state.
    X       -   new starting point.

  -- ALGLIB --
     Copyright 30.07.2010 by Bochkanov Sergey
*************************************************************************/
void mincgrestartfrom(mincgstate &state, const real_1d_array &x, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* Conjugate gradient results INPUT PARAMETERS: State - algorithm state OUTPUT PARAMETERS: X - array[0..N-1], solution Rep - optimization report: * Rep.TerminationType completion code: * -8 internal integrity control detected infinite or NAN values in function/gradient. Abnormal termination signalled. * -7 gradient verification failed. See MinCGSetGradientCheck() for more information. * 1 relative function improvement is no more than EpsF. * 2 relative step is no more than EpsX. * 4 gradient norm is no more than EpsG * 5 MaxIts steps were taken * 7 stopping conditions are too stringent, further improvement is impossible, we return best X found so far * 8 terminated by user * Rep.IterationsCount contains iterations count * NFEV contains number of function calculations -- ALGLIB -- Copyright 20.04.2009 by Bochkanov Sergey *************************************************************************/
void mincgresults(const mincgstate &state, real_1d_array &x, mincgreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  

/************************************************************************* Conjugate gradient results Buffered implementation of MinCGResults(), which uses pre-allocated buffer to store X[]. If buffer size is too small, it resizes buffer. It is intended to be used in the inner cycles of performance critical algorithms where array reallocation penalty is too large to be ignored. -- ALGLIB -- Copyright 20.04.2009 by Bochkanov Sergey *************************************************************************/
void mincgresultsbuf(const mincgstate &state, real_1d_array &x, mincgreport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets CG algorithm. INPUT PARAMETERS: State - structure which stores algorithm state CGType - algorithm type: * -1 automatic selection of the best algorithm * 0 DY (Dai and Yuan) algorithm * 1 Hybrid DY-HS algorithm -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
void mincgsetcgtype(mincgstate &state, const ae_int_t cgtype, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets stopping conditions for CG optimization algorithm. INPUT PARAMETERS: State - structure which stores algorithm state EpsG - >=0 The subroutine finishes its work if the condition |v|<EpsG is satisfied, where: * |.| means Euclidean norm * v - scaled gradient vector, v[i]=g[i]*s[i] * g - gradient * s - scaling coefficients set by MinCGSetScale() EpsF - >=0 The subroutine finishes its work if on k+1-th iteration the condition |F(k+1)-F(k)|<=EpsF*max{|F(k)|,|F(k+1)|,1} is satisfied. EpsX - >=0 The subroutine finishes its work if on k+1-th iteration the condition |v|<=EpsX is fulfilled, where: * |.| means Euclidean norm * v - scaled step vector, v[i]=dx[i]/s[i] * dx - step vector, dx=X(k+1)-X(k) * s - scaling coefficients set by MinCGSetScale() MaxIts - maximum number of iterations. If MaxIts=0, the number of iterations is unlimited. Passing EpsG=0, EpsF=0, EpsX=0 and MaxIts=0 (simultaneously) will lead to automatic stopping criterion selection (small EpsX). -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
void mincgsetcond(mincgstate &state, const double epsg, const double epsf, const double epsx, const ae_int_t maxits, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  

/************************************************************************* Modification of the preconditioner: preconditioning is turned off. INPUT PARAMETERS: State - structure which stores algorithm state NOTE: you can change preconditioner "on the fly", during algorithm iterations. -- ALGLIB -- Copyright 13.10.2010 by Bochkanov Sergey *************************************************************************/
void mincgsetprecdefault(mincgstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* Modification of the preconditioner: diagonal of approximate Hessian is used. INPUT PARAMETERS: State - structure which stores algorithm state D - diagonal of the approximate Hessian, array[0..N-1], (if larger, only leading N elements are used). NOTE: you can change preconditioner "on the fly", during algorithm iterations. NOTE 2: D[i] should be positive. Exception will be thrown otherwise. NOTE 3: you should pass diagonal of approximate Hessian - NOT ITS INVERSE. -- ALGLIB -- Copyright 13.10.2010 by Bochkanov Sergey *************************************************************************/
void mincgsetprecdiag(mincgstate &state, const real_1d_array &d, const xparams _xparams = alglib::xdefault);
/************************************************************************* Modification of the preconditioner: scale-based diagonal preconditioning. This preconditioning mode can be useful when you don't have an approximate diagonal of the Hessian, but you know that your variables are badly scaled (for example, one variable is in [1,10], and another in [1000,100000]), and most of the ill-conditioning comes from different scales of vars. In this case a simple scale-based preconditioner, with H[i] = 1/(s[i]^2), can greatly improve convergence. IMPORTANT: you should set scale of your variables with MinCGSetScale() call (before or after MinCGSetPrecScale() call). Without knowledge of the scale of your variables scale-based preconditioner will be just the unit matrix. INPUT PARAMETERS: State - structure which stores algorithm state NOTE: you can change preconditioner "on the fly", during algorithm iterations. -- ALGLIB -- Copyright 13.10.2010 by Bochkanov Sergey *************************************************************************/
void mincgsetprecscale(mincgstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets scaling coefficients for CG optimizer. ALGLIB optimizers use scaling matrices to test stopping conditions (step size and gradient are scaled before comparison with tolerances). Scale of the I-th variable is a translation invariant measure of: a) "how large" the variable is b) how large the step should be to make significant changes in the function Scaling is also used by finite difference variant of CG optimizer - step along I-th axis is equal to DiffStep*S[I]. In most optimizers (and in the CG too) scaling is NOT a form of preconditioning. It just affects stopping conditions. You should set preconditioner by separate call to one of the MinCGSetPrec...() functions. There is a special preconditioning mode, however, which uses scaling coefficients to form diagonal preconditioning matrix. You can turn this mode on, if you want. But you should understand that scaling is not the same thing as preconditioning - these are two different, although related, forms of tuning the solver. INPUT PARAMETERS: State - structure which stores algorithm state S - array[N], non-zero scaling coefficients S[i] may be negative, sign doesn't matter. -- ALGLIB -- Copyright 14.01.2011 by Bochkanov Sergey *************************************************************************/
void mincgsetscale(mincgstate &state, const real_1d_array &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets maximum step length INPUT PARAMETERS: State - structure which stores algorithm state StpMax - maximum step length, >=0. Set StpMax to 0.0, if you don't want to limit step length. Use this subroutine when you optimize a target function which contains exp() or other fast growing functions, and the optimization algorithm makes steps so large that they lead to overflow. This function allows us to reject steps that are too large (and therefore expose us to possible overflow) without actually calculating function value at x+stp*d. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
void mincgsetstpmax(mincgstate &state, const double stpmax, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function turns on/off reporting. INPUT PARAMETERS: State - structure which stores algorithm state NeedXRep- whether iteration reports are needed or not If NeedXRep is True, algorithm will call rep() callback function if it is provided to MinCGOptimize(). -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
void mincgsetxrep(mincgstate &state, const bool needxrep, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function allows you to suggest an initial step length to the CG algorithm. The suggested step length is used as a starting point for the line search. It can be useful when you have a badly scaled problem, i.e. when ||grad|| (which is used as initial estimate for the first step) is many orders of magnitude different from the desired step. Line search may fail on such problems without a good estimate of initial step length. Imagine, for example, a problem with ||grad||=10^50 and desired step equal to 0.1 Line search function will use 10^50 as initial step, then it will decrease step length by 2 (up to 20 attempts) and will get 10^44, which is still too large. This function allows us to tell that the line search should be started from some moderate step length, like 1.0, so the algorithm will be able to detect the desired step length in a few searches. Default behavior (when no step is suggested) is to use preconditioner, if it is available, to generate initial estimate of step length. This function influences only first iteration of algorithm. It should be called between MinCGCreate/MinCGRestartFrom() call and MinCGOptimize call. Suggested step is ignored if you have preconditioner. INPUT PARAMETERS: State - structure used to store algorithm state. Stp - initial estimate of the step length. Can be zero (no estimate). -- ALGLIB -- Copyright 30.07.2010 by Bochkanov Sergey *************************************************************************/
void mincgsuggeststep(mincgstate &state, const double stp, const xparams _xparams = alglib::xdefault);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void function1_grad(const real_1d_array &x, double &func, real_1d_array &grad, void *ptr) 
{
    //
    // this callback calculates f(x0,x1) = 100*(x0+3)^4 + (x1-3)^4
    // and its derivatives df/dx0 and df/dx1
    //
    func = 100*pow(x[0]+3,4) + pow(x[1]-3,4);
    grad[0] = 400*pow(x[0]+3,3);
    grad[1] = 4*pow(x[1]-3,3);
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of
        //
        //     f(x,y) = 100*(x+3)^4+(y-3)^4
        //
        // using nonlinear conjugate gradient method with:
        // * initial point x=[0,0]
        // * unit scale being set for all variables (see mincgsetscale for more info)
        // * stopping criteria set to "terminate after short enough step"
        // * OptGuard integrity check being used to check problem statement
        //   for some common errors like nonsmoothness or bad analytic gradient
        //
        // First, we create optimizer object and tune its properties
        //
        real_1d_array x = "[0,0]";
        real_1d_array s = "[1,1]";
        double epsg = 0;
        double epsf = 0;
        double epsx = 0.0000000001;
        ae_int_t maxits = 0;
        mincgstate state;
        mincgcreate(x, state);
        mincgsetcond(state, epsg, epsf, epsx, maxits);
        mincgsetscale(state, s);

        //
        // Activate OptGuard integrity checking.
        //
        // OptGuard monitor helps to catch common coding and problem statement
        // issues, like:
        // * discontinuity of the target function (C0 continuity violation)
        // * nonsmoothness of the target function (C1 continuity violation)
        // * erroneous analytic gradient, i.e. one inconsistent with actual
        //   change in the target/constraints
        //
        // OptGuard is essential for early prototyping stages because such
        // problems often result in premature termination of the optimizer
        // which is really hard to distinguish from the correct termination.
        //
        // IMPORTANT: GRADIENT VERIFICATION IS PERFORMED BY MEANS OF NUMERICAL
        //            DIFFERENTIATION. DO NOT USE IT IN PRODUCTION CODE!!!!!!!
        //
        //            Other OptGuard checks add moderate overhead, but anyway
        //            it is better to turn them off when they are not needed.
        //
        mincgoptguardsmoothness(state);
        mincgoptguardgradient(state, 0.001);

        //
        // Optimize and evaluate results
        //
        mincgreport rep;
        alglib::mincgoptimize(state, function1_grad);
        mincgresults(state, x, rep);
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [-3,3]

        //
        // Check that OptGuard did not report errors
        //
        // NOTE: want to test OptGuard? Try breaking the gradient - say, add
        //       1.0 to some of its components.
        //
        optguardreport ogrep;
        mincgoptguardresults(state, ogrep);
        printf("%s\n", ogrep.badgradsuspected ? "true" : "false"); // EXPECTED: false
        printf("%s\n", ogrep.nonc0suspected ? "true" : "false"); // EXPECTED: false
        printf("%s\n", ogrep.nonc1suspected ? "true" : "false"); // EXPECTED: false
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void function1_grad(const real_1d_array &x, double &func, real_1d_array &grad, void *ptr) 
{
    //
    // this callback calculates f(x0,x1) = 100*(x0+3)^4 + (x1-3)^4
    // and its derivatives df/dx0 and df/dx1
    //
    func = 100*pow(x[0]+3,4) + pow(x[1]-3,4);
    grad[0] = 400*pow(x[0]+3,3);
    grad[1] = 4*pow(x[1]-3,3);
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of f(x,y) = 100*(x+3)^4+(y-3)^4
        // with nonlinear conjugate gradient method.
        //
        // Several advanced techniques are demonstrated:
        // * upper limit on step size
        // * restart from new point
        //
        real_1d_array x = "[0,0]";
        real_1d_array s = "[1,1]";
        double epsg = 0;
        double epsf = 0;
        double epsx = 0.0000000001;
        double stpmax = 0.1;
        ae_int_t maxits = 0;
        mincgstate state;
        mincgreport rep;

        // create and tune optimizer
        mincgcreate(x, state);
        mincgsetscale(state, s);
        mincgsetcond(state, epsg, epsf, epsx, maxits);
        mincgsetstpmax(state, stpmax);

        // Set up OptGuard integrity checker which catches errors
        // like nonsmooth targets or errors in the analytic gradient.
        //
        // OptGuard is essential at the early prototyping stages.
        //
        // NOTE: gradient verification needs 3*N additional function
        //       evaluations; DO NOT USE IT IN THE PRODUCTION CODE
        //       because it leads to unnecessary slowdown of your app.
        mincgoptguardsmoothness(state);
        mincgoptguardgradient(state, 0.001);

        // first run
        alglib::mincgoptimize(state, function1_grad);
        mincgresults(state, x, rep);
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [-3,3]

        // second run - algorithm is restarted with mincgrestartfrom()
        x = "[10,10]";
        mincgrestartfrom(state, x);
        alglib::mincgoptimize(state, function1_grad);
        mincgresults(state, x, rep);
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [-3,3]

        // check OptGuard integrity report. Why do we need it at all?
        // Well, try breaking the gradient by adding 1.0 to some
        // of its components - OptGuard should report it as error.
        // And it may also catch unintended errors too :)
        optguardreport ogrep;
        mincgoptguardresults(state, ogrep);
        printf("%s\n", ogrep.badgradsuspected ? "true" : "false"); // EXPECTED: false
        printf("%s\n", ogrep.nonc0suspected ? "true" : "false"); // EXPECTED: false
        printf("%s\n", ogrep.nonc1suspected ? "true" : "false"); // EXPECTED: false
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void function1_func(const real_1d_array &x, double &func, void *ptr)
{
    //
    // this callback calculates f(x0,x1) = 100*(x0+3)^4 + (x1-3)^4
    //
    func = 100*pow(x[0]+3,4) + pow(x[1]-3,4);
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of
        //
        //     f(x,y) = 100*(x+3)^4+(y-3)^4
        //
        // using numerical differentiation to calculate gradient.
        //
        // We also show how to use OptGuard integrity checker to catch common
        // problem statement errors like accidentally specifying nonsmooth target
        // function.
        //
        // First, we set up optimizer...
        //
        real_1d_array x = "[0,0]";
        real_1d_array s = "[1,1]";
        double epsg = 0;
        double epsf = 0;
        double epsx = 0.0000000001;
        double diffstep = 1.0e-6;
        ae_int_t maxits = 0;
        mincgstate state;
        mincgcreatef(x, diffstep, state);
        mincgsetcond(state, epsg, epsf, epsx, maxits);
        mincgsetscale(state, s);

        //
        // Then, we activate OptGuard integrity checking.
        //
        // Numerical differentiation always produces "correct" gradient
        // (with some truncation error, but unbiased). Thus, we just have
        // to check smoothness properties of the target: C0 and C1 continuity.
        //
        // Sometimes user accidentally tried to solve nonsmooth problems
        // with smooth optimizer. OptGuard helps to detect such situations
        // early, at the prototyping stage.
        //
        mincgoptguardsmoothness(state);

        //
        // Now we are ready to run the optimization
        //
        mincgreport rep;
        alglib::mincgoptimize(state, function1_func);
        mincgresults(state, x, rep);
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [-3,3]

        //
        // ...and to check OptGuard integrity report.
        //
        // Want to challenge OptGuard? Try to make your problem
        // nonsmooth by replacing 100*(x+3)^4 by 100*|x+3| and
        // re-run optimizer.
        //
        optguardreport ogrep;
        mincgoptguardresults(state, ogrep);
        printf("%s\n", ogrep.nonc0suspected ? "true" : "false"); // EXPECTED: false
        printf("%s\n", ogrep.nonc1suspected ? "true" : "false"); // EXPECTED: false
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

minasareport
minasastate
minasacreate
minasaiteration
minasaoptimize
minasarestartfrom
minasaresults
minasaresultsbuf
minasasetalgorithm
minasasetcond
minasasetstpmax
minasasetxrep
minbleicsetbarrierdecay
minbleicsetbarrierwidth
minlbfgssetcholeskypreconditioner
minlbfgssetdefaultpreconditioner
/************************************************************************* *************************************************************************/
class minasareport { public: minasareport(); minasareport(const minasareport &rhs); minasareport& operator=(const minasareport &rhs); virtual ~minasareport(); ae_int_t iterationscount; ae_int_t nfev; ae_int_t terminationtype; ae_int_t activeconstraints; };
/************************************************************************* *************************************************************************/
class minasastate { public: minasastate(); minasastate(const minasastate &rhs); minasastate& operator=(const minasastate &rhs); virtual ~minasastate(); };
/************************************************************************* Obsolete optimization algorithm. Was replaced by MinBLEIC subpackage. -- ALGLIB -- Copyright 25.03.2010 by Bochkanov Sergey *************************************************************************/
void minasacreate(const ae_int_t n, const real_1d_array &x, const real_1d_array &bndl, const real_1d_array &bndu, minasastate &state, const xparams _xparams = alglib::xdefault); void minasacreate(const real_1d_array &x, const real_1d_array &bndl, const real_1d_array &bndu, minasastate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function provides reverse communication interface Reverse communication interface is not documented and is not recommended for use. See below for functions which provide a better documented API *************************************************************************/
bool minasaiteration(minasastate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* This family of functions is used to launch iterations of nonlinear optimizer These functions accept the following parameters: state - algorithm state grad - callback which calculates function (or merit function) value func and gradient grad at given point x rep - optional callback which is called after each iteration can be NULL ptr - optional pointer which is passed to func/grad/hess/jac/rep can be NULL -- ALGLIB -- Copyright 20.03.2009 by Bochkanov Sergey *************************************************************************/
void minasaoptimize(minasastate &state, void (*grad)(const real_1d_array &x, double &func, real_1d_array &grad, void *ptr), void (*rep)(const real_1d_array &x, double func, void *ptr) = NULL, void *ptr = NULL, const xparams _xparams = alglib::xdefault);
/************************************************************************* Obsolete optimization algorithm. Was replaced by MinBLEIC subpackage. -- ALGLIB -- Copyright 30.07.2010 by Bochkanov Sergey *************************************************************************/
void minasarestartfrom(minasastate &state, const real_1d_array &x, const real_1d_array &bndl, const real_1d_array &bndu, const xparams _xparams = alglib::xdefault);
/************************************************************************* Obsolete optimization algorithm. Was replaced by MinBLEIC subpackage. -- ALGLIB -- Copyright 20.03.2009 by Bochkanov Sergey *************************************************************************/
void minasaresults(const minasastate &state, real_1d_array &x, minasareport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Obsolete optimization algorithm. Was replaced by MinBLEIC subpackage. -- ALGLIB -- Copyright 20.03.2009 by Bochkanov Sergey *************************************************************************/
void minasaresultsbuf(const minasastate &state, real_1d_array &x, minasareport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Obsolete optimization algorithm. Was replaced by MinBLEIC subpackage. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
void minasasetalgorithm(minasastate &state, const ae_int_t algotype, const xparams _xparams = alglib::xdefault);
/************************************************************************* Obsolete optimization algorithm. Was replaced by MinBLEIC subpackage. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
void minasasetcond(minasastate &state, const double epsg, const double epsf, const double epsx, const ae_int_t maxits, const xparams _xparams = alglib::xdefault);
/************************************************************************* Obsolete optimization algorithm. Was replaced by MinBLEIC subpackage. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
void minasasetstpmax(minasastate &state, const double stpmax, const xparams _xparams = alglib::xdefault);
/************************************************************************* Obsolete optimization algorithm. Was replaced by MinBLEIC subpackage. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
void minasasetxrep(minasastate &state, const bool needxrep, const xparams _xparams = alglib::xdefault);
/************************************************************************* This is an obsolete function which was used by a previous version of the BLEIC optimizer. It does nothing in the current version of BLEIC. -- ALGLIB -- Copyright 28.11.2010 by Bochkanov Sergey *************************************************************************/
void minbleicsetbarrierdecay(minbleicstate &state, const double mudecay, const xparams _xparams = alglib::xdefault);
/************************************************************************* This is an obsolete function which was used by a previous version of the BLEIC optimizer. It does nothing in the current version of BLEIC. -- ALGLIB -- Copyright 28.11.2010 by Bochkanov Sergey *************************************************************************/
void minbleicsetbarrierwidth(minbleicstate &state, const double mu, const xparams _xparams = alglib::xdefault);
/************************************************************************* Obsolete function, use MinLBFGSSetPrecCholesky() instead. -- ALGLIB -- Copyright 13.10.2010 by Bochkanov Sergey *************************************************************************/
void minlbfgssetcholeskypreconditioner(minlbfgsstate &state, const real_2d_array &p, const bool isupper, const xparams _xparams = alglib::xdefault);
/************************************************************************* Obsolete function, use MinLBFGSSetPrecDefault() instead. -- ALGLIB -- Copyright 13.10.2010 by Bochkanov Sergey *************************************************************************/
void minlbfgssetdefaultpreconditioner(minlbfgsstate &state, const xparams _xparams = alglib::xdefault);
mindfreport
mindfstate
mindfcreate
mindfiteration
mindfoptimize
mindfrequesttermination
mindfresults
mindfresultsbuf
mindfsetalgogdemo
mindfsetalgogdemofixed
mindfsetbc
mindfsetcondf
mindfsetcondfx
mindfsetgdemopenalty
mindfsetgdemoprofilequick
mindfsetgdemoprofilerobust
mindfsetlc2dense
mindfsetnlc2
mindfsetscale
mindfsetseed
mindfsetxrep
mindfusetimers
mindf_gdemo_auto Nonlinearly constrained differential evolution
/************************************************************************* This structure stores optimization report: * f objective value at the solution * iterationscount total number of inner iterations * nfev number of function evaluations * terminationtype termination type (see below) * bcerr maximum violation of box constraints * lcerr maximum violation of linear constraints * nlcerr maximum violation of nonlinear constraints If timers were activated, the structure also stores running times: * timesolver time (in seconds, stored as a floating-point value) spent in the solver itself. Time spent in the user callback is not included. See 'TIMERS' below for more information. * timecallback time (in seconds, stored as a floating-point value) spent in the user callback. See 'TIMERS' below for more information. * timetotal total time spent during the optimization, including both the solver and callbacks. See 'TIMERS' below for more information. In order to activate timers, the caller has to call mindfusetimers() function. Other fields of this structure are not documented and should not be used! TERMINATION CODES TerminationType field contains completion code, which can be: -8 internal integrity control detected infinite or NAN values in function/gradient. Abnormal termination signalled. -3 box constraints are inconsistent -1 inconsistent parameters were passed: * penalty parameter is zero, but we have nonlinear constraints set by mindfsetnlc2() 1 function value has converged within epsf 2 sampling radius decreased below epsx 5 MaxIts steps were taken 7 stopping conditions are too stringent, further improvement is impossible, X contains best point found so far. 8 User requested termination via mindfrequesttermination() TIMERS Starting from ALGLIB 4.04, many optimizers report time spent in the solver itself and in user callbacks. The time is reported in seconds, using floating-point values (i.e. fractional-length intervals can be reported).
In order to activate timers, the caller has to call mindfusetimers() function. The accuracy of the reported value depends on the specific programming language and OS being used: * C++, no AE_OS is #defined - the accuracy is that of time() function, i.e. one second. * C++, AE_OS=AE_WINDOWS is #defined - the accuracy is that of GetTickCount(), i.e. about 10-20ms * C++, AE_OS=AE_POSIX is #defined - the accuracy is that of gettimeofday() * C#, managed core, any OS - the accuracy is that of Environment.TickCount * C#, HPC core, any OS - the accuracy is that of a corresponding C++ version * any other language - the accuracy is that of a corresponding C++ version Whilst modern operating systems provide more accurate timers, these timers often have significant overhead or backward compatibility issues. Thus, ALGLIB sticks to the most basic and efficient functions, even at the cost of some accuracy being lost. *************************************************************************/
class mindfreport { public: mindfreport(); mindfreport(const mindfreport &rhs); mindfreport& operator=(const mindfreport &rhs); virtual ~mindfreport(); double f; ae_int_t iterationscount; ae_int_t nfev; double bcerr; double lcerr; double nlcerr; ae_int_t terminationtype; double timetotal; double timesolver; double timecallback; };
/************************************************************************* This object stores nonlinear optimizer state. You should use functions provided by MinDF subpackage to work with this object *************************************************************************/
class mindfstate { public: mindfstate(); mindfstate(const mindfstate &rhs); mindfstate& operator=(const mindfstate &rhs); virtual ~mindfstate(); };
/************************************************************************* GLOBAL OPTIMIZATION SUBJECT TO BOX/LINEAR/NONLINEAR CONSTRAINTS The subroutine minimizes function F(x) of N arguments subject to any combination of: * bound constraints * linear inequality constraints * linear equality constraints * nonlinear generalized inequality constraints Li<=Ci(x)<=Ui, with one of Li/Ui possibly being infinite REQUIREMENTS: * F() and C() do NOT have to be differentiable, locally Lipschitz or continuous. Most solvers in this subpackage can deal with nonsmoothness or minor discontinuities, although obviously smoother problems are the easiest ones. * generally, F() and C() must be computable at any point which is feasible subject to box constraints USAGE: 1. User initializes algorithm state with mindfcreate() call and chooses specific solver to be used. There is some solver which is used by default, with default settings, but you should NOT rely on the default choice. It may change in the future releases of ALGLIB without notice, and no one can guarantee that the new solver will be able to solve your problem with default settings. 2. User adds boundary and/or linear and/or nonlinear constraints by means of calling one of the following functions: a) mindfsetbc() for boundary constraints b) mindfsetlc2dense() for linear constraints c) mindfsetnlc2() for nonlinear constraints You may combine (a), (b) and (c) in one optimization problem. 3. User sets scale of the variables with mindfsetscale() function. It is VERY important to set variable scales because many derivative-free algorithms refuse to work when variables are badly scaled. Scaling helps to seed initial population, control convergence and enforce penalties for constraint violation. 4. Finally, user calls mindfoptimize() function which takes algorithm state and pointer (delegate, etc) to callback function which calculates F and G. 5. User calls mindfresults() to get a solution INPUT PARAMETERS: N - problem dimension, N>0: * if given, only leading N elements of X are used * if not given, automatically determined from size of X X - starting point, array[N]. Some solvers can utilize a good initial point to seed computations. As of ALGLIB 4.04, the initial point is: * used by GDEMO If the chosen solver does not need initial point, one can supply zeros. OUTPUT PARAMETERS: State - structure stores algorithm state IMPORTANT: the MINDF optimizer supports parallel model evaluation ('callback parallelism'). This feature, which is present in commercial ALGLIB editions, greatly accelerates algorithms like differential evolution which usually issue batch requests to user callbacks which can be efficiently parallelized. Callback parallelism is usually beneficial when the batch evaluation requires more than several milliseconds. See ALGLIB Reference Manual, 'Working with commercial version' section, and comments on mindfoptimize() function for more information. -- ALGLIB -- Copyright 24.07.2023 by Bochkanov Sergey *************************************************************************/
void mindfcreate(const ae_int_t n, const real_1d_array &x, mindfstate &state, const xparams _xparams = alglib::xdefault);
void mindfcreate(const real_1d_array &x, mindfstate &state, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This function provides a reverse communication interface.

The reverse communication interface is not documented and is not
recommended for use. See below for functions which provide a better
documented API.
*************************************************************************/
bool mindfiteration(mindfstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This family of functions is used to launch iterations of the nonlinear
optimizer.

These functions accept the following parameters:
    state   -   algorithm state
    fvec    -   callback which calculates the function vector fi[] at a
                given point x
    rep     -   optional callback which is called after each iteration,
                can be NULL
    ptr     -   optional pointer which is passed to
                func/grad/hess/jac/rep, can be NULL

CALLBACK PARALLELISM

The MINDF optimizer supports parallel model evaluation ('callback
parallelism'). This feature, which is present in commercial ALGLIB
editions, greatly accelerates optimization when using a solver which
issues batch requests, i.e. multiple requests for target values which
can be computed independently by different threads.

Callback parallelism is usually beneficial when processing a batch
request requires more than several milliseconds. It also requires a
solver which issues requests in convenient batches, e.g. the
differential evolution solver.

See ALGLIB Reference Manual, 'Working with commercial version' section
for more information.

  -- ALGLIB --
     Copyright 25.07.2023 by Bochkanov Sergey
*************************************************************************/
void mindfoptimize(mindfstate &state, void (*fvec)(const real_1d_array &x, real_1d_array &fi, void *ptr), void (*rep)(const real_1d_array &x, double func, void *ptr) = NULL, void *ptr = NULL, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This subroutine submits a request for termination of a running optimizer.

It should be called from a user-supplied callback when the user decides
that it is time to "smoothly" terminate the optimization process. As a
result, the optimizer stops at the point which was "current accepted"
when the termination request was submitted and returns error code 8
(successful termination).

INPUT PARAMETERS:
    State   -   optimizer structure

NOTE: after the request for termination the optimizer may perform
      several additional calls to user-supplied callbacks. It does NOT
      guarantee to stop immediately - it just guarantees that these
      additional calls will be discarded later.

NOTE: calling this function on an optimizer which is NOT running will
      have no effect.

NOTE: multiple calls to this function are possible. The first call is
      counted, subsequent calls are silently ignored.

  -- ALGLIB --
     Copyright 25.07.2023 by Bochkanov Sergey
*************************************************************************/
void mindfrequesttermination(mindfstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
MinDF results

INPUT PARAMETERS:
    State   -   algorithm state

OUTPUT PARAMETERS:
    X       -   array[0..N-1], solution
    Rep     -   optimization report; rep.f stores the objective value at
                the solution. You should check rep.terminationtype in
                order to distinguish successful termination from an
                unsuccessful one:
                * -8    internal integrity control detected infinite or
                        NAN values in function/gradient. Abnormal
                        termination signalled.
                * -3    box constraints are inconsistent
                * -1    inconsistent parameters were passed:
                        * penalty parameter is zero, but we have
                          nonlinear constraints set by mindfsetnlc2()
                *  1    successful termination according to a
                        solver-specific set of conditions
                *  8    user requested termination via
                        mindfrequesttermination()

If you activated timers with mindfusetimers(), you can also find out how
much time was spent in various code parts:
* rep.timetotal - for the total time in seconds
* rep.timesolver - for the time spent in the solver itself
* rep.timecallback - for the time spent in user callbacks

See comments on the mindfreport structure for more information about
timers and their accuracy.

  -- ALGLIB --
     Copyright 25.07.2023 by Bochkanov Sergey
*************************************************************************/
void mindfresults(const mindfstate &state, real_1d_array &x, mindfreport &rep, const xparams _xparams = alglib::xdefault);


/*************************************************************************
Buffered implementation of MinDFresults() which uses a pre-allocated
buffer to store X[]. If the buffer size is too small, it resizes the
buffer.

It is intended to be used in the inner cycles of performance-critical
algorithms where the array reallocation penalty is too large to be
ignored.

  -- ALGLIB --
     Copyright 25.07.2023 by Bochkanov Sergey
*************************************************************************/
void mindfresultsbuf(const mindfstate &state, real_1d_array &x, mindfreport &rep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This subroutine sets the optimization algorithm to the differential
evolution solver GDEMO (Generalized Differential Evolution
Multiobjective) with automatic parameters selection.

NOTE: a version with manually tuned parameters can be activated by
      calling the mindfsetalgogdemofixed() function.

The primary stopping condition for the solver is to stop after the
specified number of iterations. You can also specify additional criteria
to stop early:
* stop when subpopulation target values (2N+1 best individuals) are
  within EPS from the best one so far (function values seem to converge)
* stop when both subpopulation target values AND subpopulation variable
  values are within EPS from the best one so far

The first condition is specified with mindfsetcondf(), the second one is
activated with mindfsetcondfx().

Both conditions are heuristics which may fail. Being 'within EPS from
the best value so far' in practice means that we are somewhere within
[0.1EPS,10EPS] from the true solution; however, on difficult problems
this condition may fire too early. Imposing an additional requirement
that variable values have clustered too may prevent us from premature
stopping. However, on multi-extremal and/or noisy problems too many
individuals may be trapped away from the optimum, preventing this
condition from activation.

ALGORITHM PROPERTIES:
* the solver uses a variant of the adaptive parameter tuning strategy
  'Success-History Based Parameter Adaptation for Differential
  Evolution' (SHADE) by Ryoji Tanabe and Alex Fukunaga. You do not have
  to specify crossover probability and differential weight, the solver
  will automatically choose the most appropriate strategy.
* the solver can handle box, linear and nonlinear constraints. Linear
  and nonlinear constraints are handled by means of an L1/L2 penalty.
  The solver does not violate box constraints at any point, but may
  violate linear and nonlinear ones. The penalty coefficient can be
  changed with the mindfsetgdemopenalty() function.
* the solver heavily depends on variable scales being available
  (specified by means of a mindfsetscale() call) and on box constraints
  with both lower and upper bounds being available, which are used to
  determine the search region. It will work without box constraints and
  without scales, but results are likely to be suboptimal.
* the solver is SIMD-optimized and parallelized (in commercial ALGLIB
  editions), with nearly linear scalability of parallel processing.
* this solver is intended for finding solutions with up to several
  digits of precision at best. Its primary purpose is to find at least
  some solution to an otherwise intractable problem.

IMPORTANT: derivative-free optimization is inherently less robust than
           smooth nonlinear programming, especially when nonsmoothness
           and discontinuities are present. Derivative-free algorithms
           have fewer convergence guarantees than their smooth
           counterparts. It is considered a normal (although, obviously,
           undesirable) situation when a derivative-free algorithm fails
           to converge with the desired precision. Having 2 digits of
           accuracy is already a good result; on difficult problems
           (high numerical noise, discontinuities) you may have even
           less than that.

INPUT PARAMETERS:
    State       -   solver
    EpochsCnt   -   iterations count, >0. Usually the algorithm needs
                    hundreds of iterations to converge.
    PopSize     -   population size, >=0. Zero value means that the
                    default value (which is 10*N in the current version)
                    will be chosen. Good values are in 5*N...20*N, with
                    the smaller values being recommended for easy
                    problems and the larger values for difficult
                    multi-extremal and/or noisy tasks.

  -- ALGLIB --
     Copyright 25.07.2023 by Bochkanov Sergey
*************************************************************************/
void mindfsetalgogdemo(mindfstate &state, const ae_int_t epochscnt, const ae_int_t popsize, const xparams _xparams = alglib::xdefault);
void mindfsetalgogdemo(mindfstate &state, const ae_int_t epochscnt, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This subroutine sets the optimization algorithm to the differential
evolution solver GDEMO (Generalized Differential Evolution
Multiobjective) with manual parameters selection.

Unlike DE with automatic parameters selection, this function requires
the user to manually specify algorithm parameters. In the general case
the fully automatic GDEMO is better. However, it has to spend some time
finding out the properties of the problem being solved; furthermore, it
is not allowed to try potentially dangerous values of parameters that
lead to premature stopping. Manually tuning the solver to the specific
problem at hand can give a 2x-3x better running time.

Aside from that, the algorithm is fully equivalent to the automatic
GDEMO, and we recommend reading the comments on mindfsetalgogdemo() for
more information about algorithm properties and stopping criteria.

INPUT PARAMETERS:
    State       -   solver
    EpochsCnt   -   iterations count, >0. Usually the algorithm needs
                    hundreds of iterations to converge.
    Strategy    -   specific DE strategy to use:
                    * 0 for DE/rand/1
                    * 1 for DE/best/2
                    * 2 for DE/current-to-best/1
    CrossoverProb       -   crossover probability, 0<CrossoverProb<1
    DifferentialWeight  -   weight, 0<DifferentialWeight<2
    PopSize     -   population size, >=0. Zero value means that the
                    default value (which is 10*N in the current version)
                    will be chosen. Good values are in 5*N...20*N, with
                    the smaller values being recommended for easy
                    problems and the larger values for difficult
                    multi-extremal and/or noisy tasks.

  -- ALGLIB --
     Copyright 25.07.2023 by Bochkanov Sergey
*************************************************************************/
void mindfsetalgogdemofixed(mindfstate &state, const ae_int_t epochscnt, const ae_int_t strategy, const double crossoverprob, const double differentialweight, const ae_int_t popsize, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This function sets box constraints.

Box constraints are inactive by default (after initial creation).

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    BndL    -   lower bounds, array[N]. If some (all) variables are
                unbounded, you may specify a very small number or -INF.
    BndU    -   upper bounds, array[N]. If some (all) variables are
                unbounded, you may specify a very large number or +INF.

  -- ALGLIB --
     Copyright 24.07.2023 by Bochkanov Sergey
*************************************************************************/
void mindfsetbc(mindfstate &state, const real_1d_array &bndl, const real_1d_array &bndu, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets a stopping condition for the optimizer: function
values have converged to a neighborhood whose size is proportional to
EpsF.

Most derivative-free solvers are heuristics, so the code used to
implement this stopping condition is a heuristic too. Usually
'proportional to EPS' means that we are somewhere between
Eps/10...Eps*10 away from the solution. However, there are no guarantees
that the solver has actually converged to something, although in
practice it works well.

The specific meaning of 'converging' is algorithm-dependent. It is
possible that some future ALGLIB optimizers will ignore this condition,
see comments on specific solvers for more info.

This condition does not work for multi-objective problems.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    EpsF    -   >=0:
                * zero value means no condition for F
                * EpsF>0 means stopping when the solver converged with an
                  error estimate less than EpsF*max(|F|,1)

  -- ALGLIB --
     Copyright 23.04.2024 by Bochkanov Sergey
*************************************************************************/
void mindfsetcondf(mindfstate &state, const double epsf, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets stopping conditions for the optimizer: a combined
condition which stops when two criteria are met simultaneously:
* function values have converged to a neighborhood whose size is
  proportional to EpsF
* variable values have converged to a neighborhood whose size is
  proportional to EpsX

It is possible to use only one condition by setting the other EPS to
zero.

Most derivative-free solvers are heuristics, so the code used to
implement this stopping condition is a heuristic too. Usually
'proportional to EPS' means that we are somewhere between
Eps/10...Eps*10 away from the solution. However, there are no guarantees
that the solver has actually converged to something, although in
practice it works well.

The specific meaning of 'converging' is algorithm-dependent. It is
possible that some future ALGLIB optimizers will ignore this condition,
see comments on specific solvers for more info.

This condition does not work for multi-objective problems.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    EpsF    -   >=0:
                * zero value means no condition for F
                * EpsF>0 means stopping when the solver converged with an
                  error estimate less than EpsF*max(|F|,1)
    EpsX    -   >=0:
                * zero value means no condition for X
                * EpsX>0 means stopping when the solver converged with an
                  error in the I-th variable less than EpsX*S[i], where
                  S[i] is a variable scale

  -- ALGLIB --
     Copyright 23.04.2024 by Bochkanov Sergey
*************************************************************************/
void mindfsetcondfx(mindfstate &state, const double epsf, const double epsx, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This subroutine tells the GDEMO differential evolution optimizer to
handle linear/nonlinear constraints by an L1/L2 penalty function.

IMPORTANT: this function does NOT change the optimization algorithm. If
           you want to activate the differential evolution solver, you
           still have to call a proper mindfsetalgo???() function.

INPUT PARAMETERS:
    State       -   solver
    Rho1, Rho2  -   penalty parameters for constraint violations:
                    * Rho1 is a multiplier for the L1 penalty
                    * Rho2 is a multiplier for the L2 penalty
                    * Rho1,Rho2>=0
                    * having both of them at zero means that some
                      default value will be chosen
                    Ignored for problems with box-only constraints.

The L1 penalty is usually better at enforcing constraints, but leads to
slower convergence than the L2 penalty. It is possible to combine both
kinds of penalties together.

There is a compromise between constraint satisfaction and optimality:
high values of Rho mean that constraints are satisfied with high
accuracy but that the target may be underoptimized due to numerical
difficulties. Small values of Rho mean that the solution may grossly
violate constraints. Choosing a good Rho is usually a matter of trial
and error.

  -- ALGLIB --
     Copyright 25.07.2023 by Bochkanov Sergey
*************************************************************************/
void mindfsetgdemopenalty(mindfstate &state, const double rho1, const double rho2, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This subroutine tells the GDEMO differential evolution optimizer to use
the QUICK profile.

The QUICK profile is intended to facilitate accelerated convergence on
medium-complexity problems at the cost of (sometimes) having premature
convergence on difficult and/or multi-extremal problems. The ROBUST
profile can be selected if you favor convergence guarantees over speed.
In most cases, the ROBUST profile is ~2x-3x slower than the QUICK one.

This function has an effect only on the adaptive GDEMO with automatic
parameters selection. It has no effect on the fixed-parameters GDEMO or
any other solvers.

IMPORTANT: this function does NOT change the optimization algorithm. If
           you want to activate the differential evolution solver, you
           still have to call a proper mindfsetalgo???() function.

INPUT PARAMETERS:
    State   -   solver

  -- ALGLIB --
     Copyright 25.04.2024 by Bochkanov Sergey
*************************************************************************/
void mindfsetgdemoprofilequick(mindfstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This subroutine tells the GDEMO differential evolution optimizer to use
the ROBUST profile (the default option).

The ROBUST profile is intended to facilitate explorative behavior and
robust convergence even on difficult multi-extremal problems. It comes
at the expense of increased running time even on easy problems. The
QUICK profile can be chosen if your problem is relatively easy to handle
and you prefer speed over robustness. In most cases, the QUICK profile
is ~2x-3x faster than the ROBUST one.

This function has an effect only on the adaptive GDEMO with automatic
parameters selection. It has no effect on the fixed-parameters GDEMO or
any other solvers.

IMPORTANT: this function does NOT change the optimization algorithm. If
           you want to activate the differential evolution solver, you
           still have to call a proper mindfsetalgo???() function.

INPUT PARAMETERS:
    State   -   solver

  -- ALGLIB --
     Copyright 25.04.2024 by Bochkanov Sergey
*************************************************************************/
void mindfsetgdemoprofilerobust(mindfstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets two-sided linear constraints AL <= A*x <= AU with a
dense constraint matrix A.

INPUT PARAMETERS:
    State   -   structure previously allocated with MinDFcreate() call
    A       -   linear constraints, array[K,N]. Each row of A represents
                one constraint. One-sided inequality constraints,
                two-sided inequality constraints and equality constraints
                are supported (see below)
    AL, AU  -   lower and upper bounds, array[K]:
                * AL[i]=AU[i] => equality constraint Ai*x=AL[i]
                * AL[i]<AU[i] => two-sided constraint AL[i]<=Ai*x<=AU[i]
                * AL[i]=-INF  => one-sided constraint Ai*x<=AU[i]
                * AU[i]=+INF  => one-sided constraint AL[i]<=Ai*x
                * AL[i]=-INF, AU[i]=+INF => constraint is ignored
    K       -   number of equality/inequality constraints, K>=0; if not
                given, inferred from the sizes of A, AL, AU.

  -- ALGLIB --
     Copyright 25.07.2023 by Bochkanov Sergey
*************************************************************************/
void mindfsetlc2dense(mindfstate &state, const real_2d_array &a, const real_1d_array &al, const real_1d_array &au, const ae_int_t k, const xparams _xparams = alglib::xdefault);
void mindfsetlc2dense(mindfstate &state, const real_2d_array &a, const real_1d_array &al, const real_1d_array &au, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This function sets two-sided nonlinear constraints.

In fact, this function sets only the NUMBER of nonlinear constraints.
The constraints themselves (constraint functions) are passed to the
MinDFOptimize() method. This method accepts a user-defined vector
function F[] where:
* the first component of F[] corresponds to the objective
* subsequent NNLC components of F[] correspond to the two-sided
  nonlinear constraints NL<=C(x)<=NU, where
  * NL[i]=NU[i] => I-th row is an equality constraint Ci(x)=NL[i]
  * NL[i]<NU[i] => I-th row is a two-sided constraint NL[i]<=Ci(x)<=NU[i]
  * NL[i]=-INF  => I-th row is a one-sided constraint Ci(x)<=NU[i]
  * NU[i]=+INF  => I-th row is a one-sided constraint NL[i]<=Ci(x)
  * NL[i]=-INF, NU[i]=+INF => constraint is ignored

NOTE: you may combine nonlinear constraints with linear/boundary ones.
      If your problem has mixed constraints, you may explicitly specify
      some of them as linear or box ones. It helps the optimizer to
      handle them more efficiently.

INPUT PARAMETERS:
    State   -   structure previously allocated with a MinDFCreate call
    NL      -   array[NNLC], lower bounds, can contain -INF
    NU      -   array[NNLC], upper bounds, can contain +INF
    NNLC    -   constraints count, NNLC>=0

NOTE 1: nonlinear constraints are satisfied only approximately! It is
        possible that the algorithm will evaluate the function outside
        of the feasible area!

NOTE 2: the algorithm scales variables according to the scale specified
        by the MinDFSetScale() function, so it can handle problems with
        badly scaled variables (as long as we KNOW their scales).
        However, there is no way to automatically scale nonlinear
        constraints. Inappropriate scaling of nonlinear constraints may
        ruin convergence. Solving a problem with the constraint
        "1000*G0(x)=0" is NOT the same as solving it with the constraint
        "0.001*G0(x)=0". It means that YOU are the one who is
        responsible for the correct scaling of nonlinear constraints.

        We recommend you to scale nonlinear constraints in such a way
        that the derivatives (if constraints are differentiable) have
        approximately unit magnitude (for problems with unit variable
        scales) or have magnitudes approximately equal to 1/S[i] (where
        S is a variable scale set by the MinDFSetScale() function).

  -- ALGLIB --
     Copyright 25.07.2023 by Bochkanov Sergey
*************************************************************************/
void mindfsetnlc2(mindfstate &state, const real_1d_array &nl, const real_1d_array &nu, const ae_int_t nnlc, const xparams _xparams = alglib::xdefault);
void mindfsetnlc2(mindfstate &state, const real_1d_array &nl, const real_1d_array &nu, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This function sets variable scales.

ALGLIB optimizers use scaling matrices to test stopping conditions (the
step size and gradient are scaled before comparison with tolerances) and
to guide algorithm steps.

The scale of a variable is a translation-invariant measure of:
a) "how large" the variable is
b) how large a step should be to make significant changes in the
   function

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    S       -   array[N], non-zero scaling coefficients. S[i] may be
                negative, the sign doesn't matter.

  -- ALGLIB --
     Copyright 25.07.2023 by Bochkanov Sergey
*************************************************************************/
void mindfsetscale(mindfstate &state, const real_1d_array &s, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This function sets the seed used by the internal RNG.

By default, a random seed is used, i.e. every time you run the solver,
we seed its generator with a new value obtained from the system-wide
RNG. Thus, the solver returns non-deterministic results. You can change
this behavior by specifying a fixed positive seed value.

INPUT PARAMETERS:
    S       -   optimizer structure
    SeedVal -   seed value:
                * positive values are used for seeding the RNG with a
                  fixed seed, i.e. subsequent runs on the same objective
                  will return the same results
                * a non-positive seed means that a random seed is used
                  for every run, i.e. subsequent runs on the same
                  objective will return slightly different results

  -- ALGLIB --
     Copyright 26.04.2024 by Bochkanov Sergey
*************************************************************************/
void mindfsetseed(mindfstate &s, const ae_int_t seedval, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function turns on/off reporting.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    NeedXRep-   whether iteration reports are needed or not

If NeedXRep is True, the algorithm will call the rep() callback function
if it is provided to MinDFOptimize().

NOTE: the algorithm passes two parameters to the rep() callback - the
      best point so far and the function value at that point. For
      unconstrained problems the function value is non-increasing (the
      most recent best point is always at least not worse than the
      previous best one). However, it can increase between iterations
      when solving constrained problems (a better point may have a
      higher objective value but a smaller constraint violation).

  -- ALGLIB --
     Copyright 28.11.2010 by Bochkanov Sergey
*************************************************************************/
void mindfsetxrep(mindfstate &state, const bool needxrep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function activates/deactivates internal timers used to track time
spent in various parts of the solver (mostly, callbacks vs the solver
itself).

When activated with this function, the following timings are stored in
the mindfreport structure fields:
* total time spent in the optimization
* time spent in the callbacks
* time spent in the solver itself

See comments on the mindfreport structure for more information about
timers and their accuracy.

Timers are an essential part of reports that helps to find out where the
most time is spent and how to optimize the code. E.g., noticing that a
significant amount of time is spent in numerical differentiation makes
it obvious that ALGLIB-provided parallel numerical differentiation is
needed. However, time measurements add a noticeable overhead, about
50-100ns per function call. In some applications this results in a
significant slowdown, that's why this option is inactive by default and
should be manually activated.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    UseTimers-  true or false

NOTE: when tracing is turned on with alglib::trace_file(), some
      derivative-free solvers may also perform internal, more detailed
      time measurements, which are printed to the log file.

  -- ALGLIB --
     Copyright 23.04.2024 by Bochkanov Sergey
*************************************************************************/
void mindfusetimers(mindfstate &state, const bool usetimers, const xparams _xparams = alglib::xdefault);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void  nlcfunc2_fvec(const real_1d_array &x, real_1d_array &fi, void *ptr)
{
    //
    // this callback calculates
    //
    //     f0(x0,x1,x2) = x0+x1
    //     f1(x0,x1,x2) = x2-exp(x0)
    //     f2(x0,x1,x2) = x0^2+x1^2-1
    //
    fi[0] = x[0]+x[1];
    fi[1] = x[2]-exp(x[0]);
    fi[2] = x[0]*x[0] + x[1]*x[1] - 1.0;
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of
        //
        //     f(x0,x1) = x0+x1
        //
        // subject to nonlinear constraints
        //
        //    x0^2 + x1^2 - 1 <= 0
        //    x2-exp(x0) = 0
        //
        real_1d_array x0 = "[0,0,0]";
        real_1d_array s = "[1,1,1]";
        mindfstate state;
        mindfreport rep;
        real_1d_array x1;

        //
        // Create optimizer object
        //
        mindfcreate(x0, state);
        mindfsetscale(state, s);

        //
        // Choose  one of the nonlinear  programming  solvers  supported  by  MINDF
        // optimizer.
        //
        // This example shows how to use GDEMO (Generalized Differential Evolution,
        // MultiObjective) solver working in a single-objective mode. This solver
        // uses an adaptive choice of DE parameters (crossover, weight and strategy),
        // automatically choosing the most appropriate settings during the optimization.
        //
        // Thus, the only tunable parameters are:
        // * iterations count
        // * population size
        // * algorithm profile (ROBUST or QUICK)
        //
        // The latter two parameters can be omitted, the solver will use a default
        // population size (10*N in the current version) and a default profile (ROBUST
        // one).
        //
        // Metaheuristics typically need hundreds or thousands of iterations to converge,
        // in this example we set 200 iterations. Good values for population size
        // are between 5*N and 20*N, with 5*N being recommended for easy problems
        // and 20*N (or more) being recommended for difficult problems with multiple
        // extrema, many cross-interacting variables and/or noise.
        // 
        // In addition to these parameters it is also possible to specify a so-called
        // 'profile' - a set of recommendations regarding decisions algorithm is allowed
        // to make during parameters autotuning:
        // * the ROBUST profile is a default option. The solver tries only conservative
        //   strategies that have low probability of a failure (stagnation far away
        //   from the solution)
        // * the QUICK profile is intended  to  facilitate accelerated  convergence on
        //   medium-complexity problems at the cost of (sometimes) having premature
        //   convergence on difficult multi-extremal problems. It most often results
        //   in a 2x-3x higher convergence speed than the ROBUST profile.
        //
        // In this simple example we do not provide variable bounds and use trivial
        // unit scales; however, on real-life problems it is very important to box as
        // many variables as possible (it helps to generate the initial population and
        // to avoid bad parameter values) and to provide meaningful variable scales.
        // Differential Evolution itself is scale-invariant; however, penalties for
        // constraint violation are scale-dependent.
        //
        ae_int_t maxits = 200;
        mindfsetalgogdemo(state, maxits);
        mindfsetgdemoprofilerobust(state);

        //
        // Unlike smooth solvers, metaheuristics do not have well-defined stopping criteria.
        // It is recommended to run the solver until the iteration budget is exhausted.
        //
        // However, it is possible to let the solver stop early, when either:
        // * subpopulation target values (2N+1 best individuals) are within
        //   EPS from the best one so far (function values seem to converge)
        // * or 2N+1-subpopulation target values AND variable values are within EPS from
        //   the best solution so far
        //
        // Both conditions are heuristics that may fail. The fact that many candidate
        // objective values have clustered within EPS of the best objective value so far
        // usually means that we are somewhere within [0.1EPS,10EPS] away from the true
        // solution; however, on difficult problems this condition may fire too early.
        //
        // Imposing an additional requirement that variable values have clustered too
        // may prevent premature stopping. However, on multi-extremal and/or
        // noisy problems too many individuals may be trapped away from the optimum,
        // preventing this condition from activating.
        //
        // The summary is that stopping criteria are heavily problem-dependent.
        //
        mindfsetcondf(state, 0.00001);

        //
        // Set nonlinear constraints.
        //
        // ALGLIB  supports  any  combination  of  box,  linear  and  nonlinear
        // constraints. This specific example uses only nonlinear ones.
        //
        // Since  version  4.01,  ALGLIB  supports  the  most  general  form of
        // nonlinear constraints: two-sided   constraints  NL<=C(x)<=NU,   with
        // elements being possibly infinite (means that this specific bound  is
        // ignored). It includes equality constraints,  upper/lower  inequality
        // constraints, range constraints. In particular, a pair of constraints
        //
        //        x2-exp(x0)       = 0
        //        x0^2 + x1^2 - 1 <= 0
        //
        // can be specified by passing NL=[0,-INF], NU=[0,0] to mindfsetnlc2().
        // Constraining functions themselves are passed as a part  of a problem
        // target vector (see below).
        //
        //
        // Unlike smooth optimizers like SQP which naturally include linear and
        // nonlinear constraints into the  algorithm,  derivative-free  methods
        // often need special strategies to deal with them, with each  strategy
        // having its own limitations:
        //
        // * an L2 penalty,  which  has  good  global  constraint   enforcement
        //   properties, but usually allows some moderate constraint violation
        //
        // * an L1 penalty, which has potential to enforce constraints exactly,
        //   but has somewhat weaker ability to move iterations from  far  away
        //   points closer to the feasible area. It also  has  somewhat  harder
        //   numerical properties, needing more iterations to converge.
        //
        // * a combined L1/L2 penalty, which is a good compromise
        //
        // The code below sets constraints bounds and tells the solver  to  use
        // a mixed L1/L2 penalized strategy.
        //
        // NOTE: box constraints require no special handling.
        //
        real_1d_array nl = "[0,-inf]";
        real_1d_array nu = "[0,0]";
        double rho1 = 5;
        double rho2 = 5;
        mindfsetnlc2(state, nl, nu);
        mindfsetgdemopenalty(state, rho1, rho2);

        //
        // Optimize and test results.
        //
        // The optimizer object accepts a vector function whose first component
        // is the target and whose subsequent components are nonlinear constraints.
        //
        // So, our vector function has the following form
        //
        //     {f0,f1,f2} = { x0+x1 , x2-exp(x0) , x0^2+x1^2-1 }
        //
        // with f0 being target function, f1 being equality constraint "f1=0",
        // f2 being inequality constraint "f2<=0".
        //
        //
        //
        // The commercial ALGLIB has two important improvements over the free edition:
        // * callback parallelism
        // * SIMD kernels for candidate points generation
        //
        // Differential evolution evaluates objectives/constraints in large batches.
        // When the batch takes more than several milliseconds to process, it makes
        // sense to compute function values at different points in parallel. This is
        // called 'callback parallelism'.
        //
        // Another useful performance improvement is an ability to utilize SIMD for
        // massive random numbers generation and mutation/crossover. The performance
        // impact of these operations can be noticeable when solving problems with
        // cheap objectives.
        //
        alglib::mindfoptimize(state, nlcfunc2_fvec);
        mindfresults(state, x1, rep);
        printf("%s\n", x1.tostring(2).c_str()); // EXPECTED: [-0.70710,-0.70710,0.49306]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

minlbfgsreport
minlbfgsstate
minlbfgscreate
minlbfgscreatef
minlbfgsiteration
minlbfgsoptguardgradient
minlbfgsoptguardnonc1test0results
minlbfgsoptguardnonc1test1results
minlbfgsoptguardresults
minlbfgsoptguardsmoothness
minlbfgsoptimize
minlbfgsrequesttermination
minlbfgsrestartfrom
minlbfgsresults
minlbfgsresultsbuf
minlbfgssetcond
minlbfgssetpreccholesky
minlbfgssetprecdefault
minlbfgssetprecdiag
minlbfgssetprecscale
minlbfgssetscale
minlbfgssetstpmax
minlbfgssetxrep
minlbfgs_d_1 Nonlinear optimization by L-BFGS
minlbfgs_d_2 Nonlinear optimization with additional settings and restarts
minlbfgs_numdiff Nonlinear optimization by L-BFGS with numerical differentiation
/************************************************************************* This structure stores optimization report: * IterationsCount total number of inner iterations * NFEV number of gradient evaluations * TerminationType termination type (see below) TERMINATION CODES TerminationType field contains completion code, which can be: -8 internal integrity control detected infinite or NAN values in function/gradient. Abnormal termination signalled. 1 relative function improvement is no more than EpsF. 2 relative step is no more than EpsX. 4 gradient norm is no more than EpsG 5 MaxIts steps were taken 7 stopping conditions are too stringent, further improvement is impossible, X contains best point found so far. 8 terminated by user who called minlbfgsrequesttermination(). X contains point which was "current accepted" when termination request was submitted. Other fields of this structure are not documented and should not be used! *************************************************************************/
class minlbfgsreport { public: minlbfgsreport(); minlbfgsreport(const minlbfgsreport &rhs); minlbfgsreport& operator=(const minlbfgsreport &rhs); virtual ~minlbfgsreport(); ae_int_t iterationscount; ae_int_t nfev; ae_int_t terminationtype; };
/************************************************************************* *************************************************************************/
class minlbfgsstate { public: minlbfgsstate(); minlbfgsstate(const minlbfgsstate &rhs); minlbfgsstate& operator=(const minlbfgsstate &rhs); virtual ~minlbfgsstate(); };
/************************************************************************* LIMITED MEMORY BFGS METHOD FOR LARGE SCALE OPTIMIZATION DESCRIPTION: The subroutine minimizes function F(x) of N arguments by using a quasi- Newton method (LBFGS scheme) which is optimized to use a minimum amount of memory. The subroutine generates the approximation of an inverse Hessian matrix by using information about the last M steps of the algorithm (instead of N). It lessens a required amount of memory from a value of order N^2 to a value of order 2*N*M. REQUIREMENTS: Algorithm will request following information during its operation: * function value F and its gradient G (simultaneously) at given point X USAGE: 1. User initializes algorithm state with MinLBFGSCreate() call 2. User tunes solver parameters with MinLBFGSSetCond(), MinLBFGSSetStpMax() and other functions 3. User calls MinLBFGSOptimize() function which takes algorithm state and pointer (delegate, etc.) to callback function which calculates F/G. 4. User calls MinLBFGSResults() to get solution 5. Optionally user may call MinLBFGSRestartFrom() to solve another problem with same N/M but another starting point and/or another function. MinLBFGSRestartFrom() allows to reuse already initialized structure. INPUT PARAMETERS: N - problem dimension. N>0 M - number of corrections in the BFGS scheme of Hessian approximation update. Recommended value: 3<=M<=7. The smaller value causes worse convergence, the bigger will not cause a considerably better convergence, but will cause a fall in the performance. M<=N. X - initial solution approximation, array[0..N-1]. OUTPUT PARAMETERS: State - structure which stores algorithm state IMPORTANT: the LBFGS optimizer supports parallel numerical differentiation ('callback parallelism'). This feature, which is present in commercial ALGLIB editions, greatly accelerates optimization with numerical differentiation of expensive target functions.
Callback parallelism is usually beneficial when computing a numerical gradient requires more than several milliseconds. See ALGLIB Reference Manual, 'Working with commercial version' section, and comments on minlbfgsoptimize() function for more information. NOTES: 1. you may tune stopping conditions with MinLBFGSSetCond() function 2. if target function contains exp() or other fast growing functions, and optimization algorithm makes too large steps which leads to overflow, use MinLBFGSSetStpMax() function to bound algorithm's steps. However, L-BFGS rarely needs such a tuning. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
void minlbfgscreate(const ae_int_t n, const ae_int_t m, const real_1d_array &x, minlbfgsstate &state, const xparams _xparams = alglib::xdefault); void minlbfgscreate(const ae_int_t m, const real_1d_array &x, minlbfgsstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  
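The USAGE steps above (create, tune, optimize, retrieve results) can be sketched as a complete program. This is a minimal sketch assuming the ALGLIB C++ sources (header "optimization.h") are available in the build; the target f(x0,x1) = 100*(x0+3)^4 + (x1-3)^4, with minimum at (-3,3), is an illustrative choice.

```cpp
#include "optimization.h"
#include <cmath>
#include <cstdio>

using namespace alglib;

// Callback computing f = 100*(x0+3)^4 + (x1-3)^4 and its gradient.
void function1_grad(const real_1d_array &x, double &func, real_1d_array &grad, void *ptr)
{
    func = 100*std::pow(x[0]+3, 4) + std::pow(x[1]-3, 4);
    grad[0] = 400*std::pow(x[0]+3, 3);
    grad[1] = 4*std::pow(x[1]-3, 3);
}

int main()
{
    real_1d_array x = "[0,0]";                 // starting point
    double epsg = 1.0e-8, epsf = 0, epsx = 0;  // stop on small gradient norm
    ae_int_t maxits = 0;                       // unlimited iterations
    minlbfgsstate state;
    minlbfgsreport rep;

    minlbfgscreate(1, x, state);               // M=1 correction, N inferred from x
    minlbfgssetcond(state, epsg, epsf, epsx, maxits);
    minlbfgsoptimize(state, function1_grad);
    minlbfgsresults(state, x, rep);

    printf("%s\n", x.tostring(2).c_str());     // expected to be close to [-3.00,3.00]
    return 0;
}
```

Note that the two-argument `minlbfgscreate(m, x, state)` overload infers N from the length of X; the explicit-N overload is useful when only the leading N elements of X should be used.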

/************************************************************************* The subroutine is a finite difference variant of MinLBFGSCreate(). It uses finite differences in order to differentiate target function. Description below contains information which is specific to this function only. We recommend to read comments on MinLBFGSCreate() in order to get more information about creation of LBFGS optimizer. INPUT PARAMETERS: N - problem dimension, N>0: * if given, only leading N elements of X are used * if not given, automatically determined from size of X M - number of corrections in the BFGS scheme of Hessian approximation update. Recommended value: 3<=M<=7. The smaller value causes worse convergence, the bigger will not cause a considerably better convergence, but will cause a fall in the performance. M<=N. X - starting point, array[0..N-1]. DiffStep- differentiation step, >0 OUTPUT PARAMETERS: State - structure which stores algorithm state IMPORTANT: the LBFGS optimizer supports parallel numerical differentiation ('callback parallelism'). This feature, which is present in commercial ALGLIB editions, greatly accelerates optimization with numerical differentiation of expensive target functions. Callback parallelism is usually beneficial when computing a numerical gradient requires more than several milliseconds. See ALGLIB Reference Manual, 'Working with commercial version' section, and comments on minlbfgsoptimize() function for more information. NOTES: 1. algorithm uses 4-point central formula for differentiation. 2. differentiation step along I-th axis is equal to DiffStep*S[I] where S[] is scaling vector which can be set by MinLBFGSSetScale() call. 3. we recommend you to use moderate values of differentiation step. Too large step will result in too large truncation errors, while too small step will result in too large numerical errors. 1.0E-6 can be good value to start with. 4.
Numerical differentiation is very inefficient - one gradient calculation needs 4*N function evaluations. This function will work for any N - either small (1...10), moderate (10...100) or large (100...). However, performance penalty will be too severe for any N's except for small ones. We should also say that code which relies on numerical differentiation is less robust and precise. LBFGS needs exact gradient values. Imprecise gradient may slow down convergence, especially on highly nonlinear problems. Thus we recommend to use this function for fast prototyping on small- dimensional problems only, and to implement analytical gradient as soon as possible. -- ALGLIB -- Copyright 16.05.2011 by Bochkanov Sergey *************************************************************************/
void minlbfgscreatef(const ae_int_t n, const ae_int_t m, const real_1d_array &x, const double diffstep, minlbfgsstate &state, const xparams _xparams = alglib::xdefault); void minlbfgscreatef(const ae_int_t m, const real_1d_array &x, const double diffstep, minlbfgsstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This function provides reverse communication interface Reverse communication interface is not documented or recommended to use. See below for functions which provide better documented API *************************************************************************/
bool minlbfgsiteration(minlbfgsstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function activates/deactivates verification of the user-supplied analytic gradient. Upon activation of this option OptGuard integrity checker performs numerical differentiation of your target function at the initial point (note: future versions may also perform check at the final point) and compares numerical gradient with analytic one provided by you. If difference is too large, an error flag is set and optimization session continues. After optimization session is over, you can retrieve the report which stores both gradients and specific components highlighted as suspicious by the OptGuard. The primary OptGuard report can be retrieved with minlbfgsoptguardresults(). IMPORTANT: gradient check is a high-overhead option which will cost you about 3*N additional function evaluations. In many cases it may cost as much as the rest of the optimization session. YOU SHOULD NOT USE IT IN THE PRODUCTION CODE UNLESS YOU WANT TO CHECK DERIVATIVES PROVIDED BY SOME THIRD PARTY. NOTE: unlike previous incarnation of the gradient checking code, OptGuard does NOT interrupt optimization even if it discovers bad gradient. INPUT PARAMETERS: State - structure used to store algorithm state TestStep - verification step used for numerical differentiation: * TestStep=0 turns verification off * TestStep>0 activates verification You should carefully choose TestStep. Value which is too large (so large that function behavior is non- cubic at this scale) will lead to false alarms. Too short step will result in rounding errors dominating numerical derivative. You may use different step for different parameters by means of setting scale with minlbfgssetscale(). 
=== EXPLANATION ========================================================== In order to verify gradient algorithm performs following steps: * two trial steps are made to X[i]-TestStep*S[i] and X[i]+TestStep*S[i], where X[i] is i-th component of the initial point and S[i] is a scale of i-th parameter * F(X) is evaluated at these trial points * we perform one more evaluation in the middle point of the interval * we build cubic model using function values and derivatives at trial points and we compare its prediction with actual value in the middle point -- ALGLIB -- Copyright 15.06.2014 by Bochkanov Sergey *************************************************************************/
void minlbfgsoptguardgradient(minlbfgsstate &state, const double teststep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* Detailed results of the OptGuard integrity check for nonsmoothness test #0 Nonsmoothness (non-C1) test #0 studies function values (not gradient!) obtained during line searches and monitors behavior of the directional derivative estimate. This test is less powerful than test #1, but it does not depend on the gradient values and thus it is more robust against artifacts introduced by numerical differentiation. Two reports are returned: * a "strongest" one, corresponding to line search which had highest value of the nonsmoothness indicator * a "longest" one, corresponding to line search which had more function evaluations, and thus is more detailed In both cases following fields are returned: * positive - is TRUE when test flagged suspicious point; FALSE if test did not notice anything (in the latter cases fields below are empty). * x0[], d[] - arrays of length N which store initial point and direction for line search (d[] can be normalized, but does not have to) * stp[], f[] - arrays of length CNT which store step lengths and function values at these points; f[i] is evaluated in x0+stp[i]*d. * stpidxa, stpidxb - we suspect that function violates C1 continuity between steps #stpidxa and #stpidxb (usually we have stpidxb=stpidxa+3, with most likely position of the violation between stpidxa+1 and stpidxa+2. ========================================================================== = SHORTLY SPEAKING: build a 2D plot of (stp,f) and look at it - you will = see where C1 continuity is violated. ========================================================================== INPUT PARAMETERS: state - algorithm state OUTPUT PARAMETERS: strrep - C1 test #0 "strong" report lngrep - C1 test #0 "long" report -- ALGLIB -- Copyright 21.11.2018 by Bochkanov Sergey *************************************************************************/
void minlbfgsoptguardnonc1test0results(const minlbfgsstate &state, optguardnonc1test0report &strrep, optguardnonc1test0report &lngrep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Detailed results of the OptGuard integrity check for nonsmoothness test #1 Nonsmoothness (non-C1) test #1 studies individual components of the gradient computed during line search. When precise analytic gradient is provided this test is more powerful than test #0 which works with function values and ignores user-provided gradient. However, test #0 becomes more powerful when numerical differentiation is employed (in such cases test #1 detects higher levels of numerical noise and becomes too conservative). This test also tells specific components of the gradient which violate C1 continuity, which makes it more informative than #0, which just tells that continuity is violated. Two reports are returned: * a "strongest" one, corresponding to line search which had highest value of the nonsmoothness indicator * a "longest" one, corresponding to line search which had more function evaluations, and thus is more detailed In both cases following fields are returned: * positive - is TRUE when test flagged suspicious point; FALSE if test did not notice anything (in the latter cases fields below are empty). * vidx - is an index of the variable in [0,N) with nonsmooth derivative * x0[], d[] - arrays of length N which store initial point and direction for line search (d[] can be normalized, but does not have to) * stp[], g[] - arrays of length CNT which store step lengths and gradient values at these points; g[i] is evaluated in x0+stp[i]*d and contains vidx-th component of the gradient. * stpidxa, stpidxb - we suspect that function violates C1 continuity between steps #stpidxa and #stpidxb (usually we have stpidxb=stpidxa+3, with most likely position of the violation between stpidxa+1 and stpidxa+2. ========================================================================== = SHORTLY SPEAKING: build a 2D plot of (stp,f) and look at it - you will = see where C1 continuity is violated. 
========================================================================== INPUT PARAMETERS: state - algorithm state OUTPUT PARAMETERS: strrep - C1 test #1 "strong" report lngrep - C1 test #1 "long" report -- ALGLIB -- Copyright 21.11.2018 by Bochkanov Sergey *************************************************************************/
void minlbfgsoptguardnonc1test1results(minlbfgsstate &state, optguardnonc1test1report &strrep, optguardnonc1test1report &lngrep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Results of OptGuard integrity check, should be called after optimization session is over. === PRIMARY REPORT ======================================================= OptGuard performs several checks which are intended to catch common errors in the implementation of nonlinear function/gradient: * incorrect analytic gradient * discontinuous (non-C0) target functions (constraints) * nonsmooth (non-C1) target functions (constraints) Each of these checks is activated with appropriate function: * minlbfgsoptguardgradient() for gradient verification * minlbfgsoptguardsmoothness() for C0/C1 checks Following flags are set when these errors are suspected: * rep.badgradsuspected, and additionally: * rep.badgradvidx for specific variable (gradient element) suspected * rep.badgradxbase, a point where gradient is tested * rep.badgraduser, user-provided gradient (stored as 2D matrix with single row in order to make report structure compatible with more complex optimizers like MinNLC or MinLM) * rep.badgradnum, reference gradient obtained via numerical differentiation (stored as 2D matrix with single row in order to make report structure compatible with more complex optimizers like MinNLC or MinLM) * rep.nonc0suspected * rep.nonc1suspected === ADDITIONAL REPORTS/LOGS ============================================== Several different tests are performed to catch C0/C1 errors, you can find out specific test signaled error by looking to: * rep.nonc0test0positive, for non-C0 test #0 * rep.nonc1test0positive, for non-C1 test #0 * rep.nonc1test1positive, for non-C1 test #1 Additional information (including line search logs) can be obtained by means of: * minlbfgsoptguardnonc1test0results() * minlbfgsoptguardnonc1test1results() which return detailed error reports, specific points where discontinuities were found, and so on. 
========================================================================== INPUT PARAMETERS: state - algorithm state OUTPUT PARAMETERS: rep - generic OptGuard report; more detailed reports can be retrieved with other functions. NOTE: false negatives (nonsmooth problems are not identified as nonsmooth ones) are possible although unlikely. The reason is that you need to make several evaluations around nonsmoothness in order to accumulate enough information about function curvature. Say, if you start right from the nonsmooth point, optimizer simply won't get enough data to understand what is going wrong before it terminates due to abrupt changes in the derivative. It is also possible that "unlucky" step will move us to the termination too quickly. Our current approach is to have less than 0.1% false negatives in our test examples (measured with multiple restarts from random points), and to have exactly 0% false positives. -- ALGLIB -- Copyright 21.11.2018 by Bochkanov Sergey *************************************************************************/
void minlbfgsoptguardresults(minlbfgsstate &state, optguardreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This function activates/deactivates nonsmoothness monitoring option of the OptGuard integrity checker. Smoothness monitor silently observes solution process and tries to detect ill-posed problems, i.e. ones with: a) discontinuous target function (non-C0) b) nonsmooth target function (non-C1) Smoothness monitoring does NOT interrupt optimization even if it suspects that your problem is nonsmooth. It just sets corresponding flags in the OptGuard report which can be retrieved after optimization is over. Smoothness monitoring is a moderate overhead option which often adds less than 1% to the optimizer running time. Thus, you can use it even for large scale problems. NOTE: OptGuard does NOT guarantee that it will always detect C0/C1 continuity violations. First, minor errors are hard to catch - say, a 0.0001 difference in the model values at two sides of the gap may be due to discontinuity of the model - or simply because the model has changed. Second, C1-violations are especially difficult to detect in a noninvasive way. The optimizer usually performs very short steps near the nonsmoothness, and differentiation usually introduces a lot of numerical noise. It is hard to tell whether some tiny discontinuity in the slope is due to real nonsmoothness or just due to numerical noise alone. Our top priority was to avoid false positives, so in some rare cases minor errors may go unnoticed (however, in most cases they can be spotted with restart from different initial point). INPUT PARAMETERS: state - algorithm state level - monitoring level: * 0 - monitoring is disabled * 1 - noninvasive low-overhead monitoring; function values and/or gradients are recorded, but OptGuard does not try to perform additional evaluations in order to get more information about suspicious locations.
=== EXPLANATION ========================================================== One major source of headache during optimization is the possibility of the coding errors in the target function/constraints (or their gradients). Such errors most often manifest themselves as discontinuity or nonsmoothness of the target/constraints. Another frequent situation is when you try to optimize something involving lots of min() and max() operations, i.e. nonsmooth target. Although not a coding error, it is nonsmoothness anyway - and smooth optimizers usually stop right after encountering nonsmoothness, well before reaching solution. OptGuard integrity checker helps you to catch such situations: it monitors function values/gradients being passed to the optimizer and tries to detect errors. Upon discovering suspicious pair of points it raises appropriate flag (and allows you to continue optimization). When optimization is done, you can study OptGuard result. -- ALGLIB -- Copyright 21.11.2018 by Bochkanov Sergey *************************************************************************/
void minlbfgsoptguardsmoothness(minlbfgsstate &state, const ae_int_t level, const xparams _xparams = alglib::xdefault); void minlbfgsoptguardsmoothness(minlbfgsstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This family of functions is used to launch iterations of nonlinear optimizer These functions accept following parameters: state - algorithm state func - callback which calculates function (or merit function) value func at given point x grad - callback which calculates function (or merit function) value func and gradient grad at given point x rep - optional callback which is called after each iteration can be NULL ptr - optional pointer which is passed to func/grad/hess/jac/rep can be NULL CALLBACK PARALLELISM: The LBFGS optimizer supports parallel numerical differentiation ('callback parallelism'). This feature, which is present in commercial ALGLIB editions, greatly accelerates numerical differentiation of expensive targets. Callback parallelism is usually beneficial when computing a numerical gradient requires more than several milliseconds. In this case the job of computing individual gradient components can be split between multiple threads. Even inexpensive targets can benefit from parallelism, if you have many variables. ALGLIB Reference Manual, 'Working with commercial version' section, tells how to activate callback parallelism for your programming language. CALLBACKS ACCEPTED 1. This function has two different implementations: one which uses exact (analytical) user-supplied gradient, and one which uses function value only and numerically differentiates function in order to obtain gradient. Depending on the specific function used to create optimizer object (either MinLBFGSCreate() for analytical gradient or MinLBFGSCreateF() for numerical differentiation) you should choose appropriate variant of MinLBFGSOptimize() - one which accepts function AND gradient or one which accepts function ONLY. Be careful to choose variant of MinLBFGSOptimize() which corresponds to your optimization scheme!
Table below lists different combinations of callback (function/gradient) passed to MinLBFGSOptimize() and specific function used to create optimizer.

                      |       USER PASSED TO MinLBFGSOptimize()
   CREATED WITH       |  function only   |  function and gradient
   -------------------------------------------------------------
   MinLBFGSCreateF()  |      works       |          FAIL
   MinLBFGSCreate()   |      FAIL        |          works

Here "FAIL" denotes inappropriate combinations of optimizer creation function and MinLBFGSOptimize() version. Attempts to use such a combination (for example, to create optimizer with MinLBFGSCreateF() and to pass gradient information to MinLBFGSOptimize()) will lead to exception being thrown. Either you did not pass gradient when it WAS needed or you passed gradient when it was NOT needed. -- ALGLIB -- Copyright 20.03.2009 by Bochkanov Sergey *************************************************************************/
void minlbfgsoptimize(minlbfgsstate &state, void (*func)(const real_1d_array &x, double &func, void *ptr), void (*rep)(const real_1d_array &x, double func, void *ptr) = NULL, void *ptr = NULL, const xparams _xparams = alglib::xdefault); void minlbfgsoptimize(minlbfgsstate &state, void (*grad)(const real_1d_array &x, double &func, real_1d_array &grad, void *ptr), void (*rep)(const real_1d_array &x, double func, void *ptr) = NULL, void *ptr = NULL, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  

/************************************************************************* This subroutine submits request for termination of running optimizer. It should be called from user-supplied callback when user decides that it is time to "smoothly" terminate optimization process. As result, optimizer stops at point which was "current accepted" when termination request was submitted and returns error code 8 (successful termination). INPUT PARAMETERS: State - optimizer structure NOTE: after request for termination optimizer may perform several additional calls to user-supplied callbacks. It does NOT guarantee to stop immediately - it just guarantees that these additional calls will be discarded later. NOTE: calling this function on optimizer which is NOT running will have no effect. NOTE: multiple calls to this function are possible. First call is counted, subsequent calls are silently ignored. -- ALGLIB -- Copyright 08.10.2014 by Bochkanov Sergey *************************************************************************/
void minlbfgsrequesttermination(minlbfgsstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine restarts LBFGS algorithm from new point. All optimization parameters are left unchanged. This function allows to solve multiple optimization problems (which must have same number of dimensions) without object reallocation penalty. INPUT PARAMETERS: State - structure used to store algorithm state X - new starting point. -- ALGLIB -- Copyright 30.07.2010 by Bochkanov Sergey *************************************************************************/
void minlbfgsrestartfrom(minlbfgsstate &state, const real_1d_array &x, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
L-BFGS algorithm results

INPUT PARAMETERS:
    State   -   algorithm state

OUTPUT PARAMETERS:
    X       -   array[0..N-1], solution
    Rep     -   optimization report:
                * Rep.TerminationType completion code:
                    * -8    internal integrity control detected infinite
                            or NAN values in function/gradient. Abnormal
                            termination signalled.
                    * -2    rounding errors prevent further improvement.
                            X contains best point found.
                    * -1    incorrect parameters were specified
                    *  1    relative function improvement is no more than
                            EpsF
                    *  2    relative step is no more than EpsX
                    *  4    gradient norm is no more than EpsG
                    *  5    MaxIts steps were taken
                    *  7    stopping conditions are too stringent, further
                            improvement is impossible
                    *  8    terminated by the user who called
                            minlbfgsrequesttermination(). X contains the
                            point which was "current accepted" when the
                            termination request was submitted.
                * Rep.IterationsCount contains the iteration count
                * Rep.NFEV contains the number of function evaluations

  -- ALGLIB --
     Copyright 02.04.2010 by Bochkanov Sergey
*************************************************************************/
void minlbfgsresults(const minlbfgsstate &state, real_1d_array &x, minlbfgsreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  

/*************************************************************************
L-BFGS algorithm results

Buffered implementation of MinLBFGSResults() which uses a pre-allocated
buffer to store X[]. If the buffer size is too small, it resizes the
buffer. It is intended to be used in the inner cycles of
performance-critical algorithms where the array reallocation penalty is
too large to be ignored.

  -- ALGLIB --
     Copyright 20.08.2010 by Bochkanov Sergey
*************************************************************************/
void minlbfgsresultsbuf(const minlbfgsstate &state, real_1d_array &x, minlbfgsreport &rep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets stopping conditions for the L-BFGS optimization
algorithm.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    EpsG    -   >=0. The subroutine finishes its work if the condition
                |v|<EpsG is satisfied, where:
                * |.| means Euclidean norm
                * v - scaled gradient vector, v[i]=g[i]*s[i]
                * g - gradient
                * s - scaling coefficients set by MinLBFGSSetScale()
    EpsF    -   >=0. The subroutine finishes its work if on the k+1-th
                iteration the condition
                |F(k+1)-F(k)|<=EpsF*max{|F(k)|,|F(k+1)|,1} is satisfied.
    EpsX    -   >=0. The subroutine finishes its work if on the k+1-th
                iteration the condition |v|<=EpsX is fulfilled, where:
                * |.| means Euclidean norm
                * v - scaled step vector, v[i]=dx[i]/s[i]
                * dx - step vector, dx=X(k+1)-X(k)
                * s - scaling coefficients set by MinLBFGSSetScale()
    MaxIts  -   maximum number of iterations. If MaxIts=0, the number of
                iterations is unlimited.

Passing EpsG=0, EpsF=0, EpsX=0 and MaxIts=0 (simultaneously) will lead to
automatic stopping criterion selection (small EpsX).

  -- ALGLIB --
     Copyright 02.04.2010 by Bochkanov Sergey
*************************************************************************/
void minlbfgssetcond(minlbfgsstate &state, const double epsg, const double epsf, const double epsx, const ae_int_t maxits, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  

/*************************************************************************
Modification of the preconditioner: Cholesky factorization of the
approximate Hessian is used.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    P       -   triangular preconditioner, Cholesky factorization of the
                approximate Hessian. array[0..N-1,0..N-1] (if larger, only
                the leading N elements are used).
    IsUpper -   whether the upper or lower triangle of P is given (the
                other triangle is not referenced)

After a call to this function the preconditioner is changed to P (P is
copied into the internal buffer).

NOTE:   you can change the preconditioner "on the fly", during algorithm
        iterations.

NOTE 2: P should be nonsingular. An exception will be thrown otherwise.

  -- ALGLIB --
     Copyright 13.10.2010 by Bochkanov Sergey
*************************************************************************/
void minlbfgssetpreccholesky(minlbfgsstate &state, const real_2d_array &p, const bool isupper, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Modification of the preconditioner: the default preconditioner (simple
scaling, same for all elements of X) is used.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state

NOTE: you can change the preconditioner "on the fly", during algorithm
      iterations.

  -- ALGLIB --
     Copyright 13.10.2010 by Bochkanov Sergey
*************************************************************************/
void minlbfgssetprecdefault(minlbfgsstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Modification of the preconditioner: the diagonal of the approximate
Hessian is used.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    D       -   diagonal of the approximate Hessian, array[0..N-1] (if
                larger, only the leading N elements are used).

NOTE:   you can change the preconditioner "on the fly", during algorithm
        iterations.

NOTE 2: D[i] should be positive. An exception will be thrown otherwise.

NOTE 3: you should pass the diagonal of the approximate Hessian - NOT ITS
        INVERSE.

  -- ALGLIB --
     Copyright 13.10.2010 by Bochkanov Sergey
*************************************************************************/
void minlbfgssetprecdiag(minlbfgsstate &state, const real_1d_array &d, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Modification of the preconditioner: scale-based diagonal preconditioning.

This preconditioning mode can be useful when you don't have an approximate
diagonal of the Hessian, but you know that your variables are badly scaled
(for example, one variable is in [1,10] and another in [1000,100000]), and
most of the ill-conditioning comes from the different scales of the
variables. In this case a simple scale-based preconditioner, with
H[i] = 1/(s[i]^2), can greatly improve convergence.

IMPORTANT: you should set the scale of your variables with a
MinLBFGSSetScale() call (before or after the MinLBFGSSetPrecScale() call).
Without knowledge of the scale of your variables the scale-based
preconditioner will be just a unit matrix.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state

  -- ALGLIB --
     Copyright 13.10.2010 by Bochkanov Sergey
*************************************************************************/
void minlbfgssetprecscale(minlbfgsstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets scaling coefficients for the LBFGS optimizer.

ALGLIB optimizers use scaling matrices to test stopping conditions (the
step size and gradient are scaled before comparison with tolerances). The
scale of the I-th variable is a translation-invariant measure of:
a) "how large" the variable is
b) how large a step should be to make significant changes in the function

Scaling is also used by the finite-difference variant of the optimizer -
the step along the I-th axis is equal to DiffStep*S[I].

In most optimizers (and in the LBFGS too) scaling is NOT a form of
preconditioning. It just affects stopping conditions. You should set the
preconditioner by a separate call to one of the MinLBFGSSetPrec...()
functions.

There is a special preconditioning mode, however, which uses the scaling
coefficients to form a diagonal preconditioning matrix. You can turn this
mode on if you want. But you should understand that scaling is not the
same thing as preconditioning - these are two different, although related,
forms of tuning the solver.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    S       -   array[N], non-zero scaling coefficients. S[i] may be
                negative, the sign doesn't matter.

  -- ALGLIB --
     Copyright 14.01.2011 by Bochkanov Sergey
*************************************************************************/
void minlbfgssetscale(minlbfgsstate &state, const real_1d_array &s, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets the maximum step length.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    StpMax  -   maximum step length, >=0. Set StpMax to 0.0 (default) if
                you don't want to limit the step length.

Use this subroutine when you optimize a target function which contains
exp() or other fast-growing functions, and the optimization algorithm
makes steps so large that they lead to overflow. This function allows us
to reject steps that are too large (and therefore expose us to possible
overflow) without actually calculating the function value at x+stp*d.

  -- ALGLIB --
     Copyright 02.04.2010 by Bochkanov Sergey
*************************************************************************/
void minlbfgssetstpmax(minlbfgsstate &state, const double stpmax, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function turns reporting on/off.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    NeedXRep-   whether iteration reports are needed or not

If NeedXRep is True, the algorithm will call the rep() callback function
if it is provided to MinLBFGSOptimize().

  -- ALGLIB --
     Copyright 02.04.2010 by Bochkanov Sergey
*************************************************************************/
void minlbfgssetxrep(minlbfgsstate &state, const bool needxrep, const xparams _xparams = alglib::xdefault);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void function1_grad(const real_1d_array &x, double &func, real_1d_array &grad, void *ptr) 
{
    //
    // this callback calculates f(x0,x1) = 100*(x0+3)^4 + (x1-3)^4
    // and its derivatives df/dx0 and df/dx1
    //
    func = 100*pow(x[0]+3,4) + pow(x[1]-3,4);
    grad[0] = 400*pow(x[0]+3,3);
    grad[1] = 4*pow(x[1]-3,3);
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of
        //
        //     f(x,y) = 100*(x+3)^4+(y-3)^4
        //
        // using LBFGS method, with:
        // * initial point x=[0,0]
        // * unit scale being set for all variables (see minlbfgssetscale for more info)
        // * stopping criteria set to "terminate after short enough step"
        //
        // First, we create optimizer object and tune its properties.
        //
        // IMPORTANT: the  LBFGS  optimizer  supports  parallel  numerical
        //            differentiation  ('callback   parallelism').  This  feature,
        //            which is present in commercial ALGLIB editions, greatly
        //            accelerates optimization with numerical differentiation of
        //            expensive target functions.
        //
        //            Callback parallelism is usually  beneficial when computing a
        //            numerical gradient requires more than several  milliseconds.
        //            This particular  example,  of  course,  is  not  suited  for
        //            callback parallelism.
        //
        //            See ALGLIB Reference Manual, 'Working with commercial version'
        //            section,  and  comments  on  minlbfgsoptimize() function for
        //            more information.
        //
        real_1d_array x = "[0,0]";
        real_1d_array s = "[1,1]";
        double epsg = 0;
        double epsf = 0;
        double epsx = 0.0000000001;
        ae_int_t maxits = 0;
        minlbfgsstate state;
        minlbfgscreate(1, x, state);
        minlbfgssetcond(state, epsg, epsf, epsx, maxits);
        minlbfgssetscale(state, s);

        //
        // Optimize and examine results.
        //
        minlbfgsreport rep;
        alglib::minlbfgsoptimize(state, function1_grad);
        minlbfgsresults(state, x, rep);
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [-3,3]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void function1_grad(const real_1d_array &x, double &func, real_1d_array &grad, void *ptr) 
{
    //
    // this callback calculates f(x0,x1) = 100*(x0+3)^4 + (x1-3)^4
    // and its derivatives df/dx0 and df/dx1
    //
    func = 100*pow(x[0]+3,4) + pow(x[1]-3,4);
    grad[0] = 400*pow(x[0]+3,3);
    grad[1] = 4*pow(x[1]-3,3);
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of f(x,y) = 100*(x+3)^4+(y-3)^4
        // using LBFGS method.
        //
        // Several advanced techniques are demonstrated:
        // * upper limit on step size
        // * restart from new point
        //
        real_1d_array x = "[0,0]";
        real_1d_array s = "[1,1]";
        double epsg = 0;
        double epsf = 0;
        double epsx = 0.0000000001;
        double stpmax = 0.1;
        ae_int_t maxits = 0;
        minlbfgsstate state;
        minlbfgsreport rep;

        // create and tune optimizer
        minlbfgscreate(1, x, state);
        minlbfgssetcond(state, epsg, epsf, epsx, maxits);
        minlbfgssetstpmax(state, stpmax);
        minlbfgssetscale(state, s);

        // Set up OptGuard integrity checker which catches errors
        // like nonsmooth targets or errors in the analytic gradient.
        //
        // OptGuard is essential at the early prototyping stages.
        //
        // NOTE: gradient verification needs 3*N additional function
        //       evaluations; DO NOT USE IT IN THE PRODUCTION CODE
        //       because it leads to unnecessary slowdown of your app.
        minlbfgsoptguardsmoothness(state);
        minlbfgsoptguardgradient(state, 0.001);

        // first run
        alglib::minlbfgsoptimize(state, function1_grad);
        minlbfgsresults(state, x, rep);
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [-3,3]

        // second run - algorithm is restarted
        x = "[10,10]";
        minlbfgsrestartfrom(state, x);
        alglib::minlbfgsoptimize(state, function1_grad);
        minlbfgsresults(state, x, rep);
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [-3,3]

        // Check the OptGuard integrity report. Why do we need it at all?
        // Well, try breaking the gradient by adding 1.0 to some of its
        // components - OptGuard should report it as an error. It may
        // catch unintended errors too :)
        optguardreport ogrep;
        minlbfgsoptguardresults(state, ogrep);
        printf("%s\n", ogrep.badgradsuspected ? "true" : "false"); // EXPECTED: false
        printf("%s\n", ogrep.nonc0suspected ? "true" : "false"); // EXPECTED: false
        printf("%s\n", ogrep.nonc1suspected ? "true" : "false"); // EXPECTED: false
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void function1_func(const real_1d_array &x, double &func, void *ptr)
{
    //
    // this callback calculates f(x0,x1) = 100*(x0+3)^4 + (x1-3)^4
    //
    func = 100*pow(x[0]+3,4) + pow(x[1]-3,4);
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of
        //
        //     f(x,y) = 100*(x+3)^4+(y-3)^4
        //
        // using numerical differentiation to calculate gradient.
        //
        // IMPORTANT: the  LBFGS  optimizer  supports  parallel  numerical
        //            differentiation  ('callback   parallelism').  This  feature,
        //            which is present in commercial ALGLIB editions, greatly
        //            accelerates optimization with numerical differentiation of
        //            expensive target functions.
        //
        //            Callback parallelism is usually  beneficial when computing a
        //            numerical gradient requires more than several  milliseconds.
        //            This particular  example,  of  course,  is  not  suited  for
        //            callback parallelism.
        //
        //            See ALGLIB Reference Manual, 'Working with commercial version'
        //            section,  and  comments  on  minlbfgsoptimize() function for
        //            more information.
        //
        real_1d_array x = "[0,0]";
        double epsg = 0.0000000001;
        double epsf = 0;
        double epsx = 0;
        double diffstep = 1.0e-6;
        ae_int_t maxits = 0;
        minlbfgsstate state;
        minlbfgsreport rep;

        minlbfgscreatef(1, x, diffstep, state);
        minlbfgssetcond(state, epsg, epsf, epsx, maxits);
        alglib::minlbfgsoptimize(state, function1_func);
        minlbfgsresults(state, x, rep);

        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 4
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [-3,3]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

minlmreport
minlmstate
minlmcreatev
minlmcreatevj
minlmiteration
minlmoptguardgradient
minlmoptguardresults
minlmoptimize
minlmrequesttermination
minlmrestartfrom
minlmresults
minlmresultsbuf
minlmsetacctype
minlmsetbc
minlmsetcond
minlmsetlc
minlmsetnonmonotonicsteps
minlmsetnumdiff
minlmsetscale
minlmsetstpmax
minlmsetxrep
minlm_d_restarts Efficient restarts of LM optimizer
minlm_d_v Nonlinear least squares optimization using function vector only
minlm_d_vb Bound constrained nonlinear least squares optimization
minlm_d_vj Nonlinear least squares optimization using function vector and Jacobian
/*************************************************************************
Optimization report, filled by the MinLMResults() function

FIELDS:
* TerminationType, completion code:
    * -8    optimizer detected NAN/INF values either in the function
            itself, or in its Jacobian
    * -5    inappropriate solver was used:
            * solver created with minlmcreatefgh() was used on a problem
              with general linear constraints (set with a minlmsetlc()
              call)
    * -3    constraints are inconsistent
    *  2    relative step is no more than EpsX
    *  5    MaxIts steps were taken
    *  7    stopping conditions are too stringent, further improvement is
            impossible
    *  8    terminated by the user who called MinLMRequestTermination().
            X contains the point which was "current accepted" when the
            termination request was submitted.
* F, objective value, SUM(f[i]^2)
* IterationsCount, contains the iteration count
* NFunc, number of function evaluations
* NJac, number of Jacobian evaluations
* NGrad, number of gradient evaluations
* NHess, number of Hessian evaluations
* NCholesky, number of Cholesky decompositions
*************************************************************************/
class minlmreport
{
public:
    minlmreport();
    minlmreport(const minlmreport &rhs);
    minlmreport& operator=(const minlmreport &rhs);
    virtual ~minlmreport();
    ae_int_t iterationscount;
    ae_int_t terminationtype;
    double f;
    ae_int_t nfunc;
    ae_int_t njac;
    ae_int_t ngrad;
    ae_int_t nhess;
    ae_int_t ncholesky;
};
/*************************************************************************
Levenberg-Marquardt optimizer.

This structure should be created using one of the MinLMCreate???()
functions. You should not access its fields directly; use ALGLIB functions
to work with it.
*************************************************************************/
class minlmstate
{
public:
    minlmstate();
    minlmstate(const minlmstate &rhs);
    minlmstate& operator=(const minlmstate &rhs);
    virtual ~minlmstate();
};
/*************************************************************************
                IMPROVED LEVENBERG-MARQUARDT METHOD FOR
                 NON-LINEAR LEAST SQUARES OPTIMIZATION

DESCRIPTION:
This function is used to find a minimum of a function which is represented
as a sum of squares:

    F(x) = f[0]^2(x[0],...,x[n-1]) + ... + f[m-1]^2(x[0],...,x[n-1])

using the value of the function vector f[] only. Finite differences are
used to calculate the Jacobian.

REQUIREMENTS:
This algorithm will request the following information during its
operation:
* function vector f[] at a given point X

There are several overloaded versions of the MinLMOptimize() function
which correspond to different LM-like optimization algorithms provided by
this unit. You should choose the version which accepts the fvec()
callback.

You can try to initialize the MinLMState structure with the VJ function
and then use an incorrect version of MinLMOptimize() (for example, a
version which works with a general form function and does not accept a
function vector), but it will lead to an exception being thrown after the
first attempt to calculate the Jacobian.

USAGE:
1. User initializes the algorithm state with a MinLMCreateV() call
2. User tunes solver parameters with MinLMSetCond(), MinLMSetStpMax() and
   other functions
3. User calls the MinLMOptimize() function which takes the algorithm
   state and callback functions
4. User calls MinLMResults() to get the solution
5. Optionally, the user may call MinLMRestartFrom() to solve another
   problem with the same N/M but another starting point and/or another
   function. MinLMRestartFrom() allows reusing an already initialized
   structure.

INPUT PARAMETERS:
    N       -   dimension, N>1
                * if given, only the leading N elements of X are used
                * if not given, automatically determined from the size
                  of X
    M       -   number of functions f[i]
    X       -   initial solution, array[0..N-1]
    DiffStep-   differentiation step, >0. By default, a symmetric 3-point
                formula which provides good accuracy is used. It can be
                changed to a faster but less precise 2-point one with the
                minlmsetnumdiff() function.

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

See also MinLMIteration, MinLMResults.

NOTES:
1. you may tune stopping conditions with the MinLMSetCond() function
2. if the target function contains exp() or other fast-growing functions,
   and the optimization algorithm makes steps so large that they lead to
   overflow, use the MinLMSetStpMax() function to bound the algorithm's
   steps.

  -- ALGLIB --
     Copyright 30.03.2009 by Bochkanov Sergey
*************************************************************************/
void minlmcreatev(const ae_int_t n, const ae_int_t m, const real_1d_array &x, const double diffstep, minlmstate &state, const xparams _xparams = alglib::xdefault);
void minlmcreatev(const ae_int_t m, const real_1d_array &x, const double diffstep, minlmstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  

/*************************************************************************
                IMPROVED LEVENBERG-MARQUARDT METHOD FOR
                 NON-LINEAR LEAST SQUARES OPTIMIZATION

DESCRIPTION:
This function is used to find a minimum of a function which is represented
as a sum of squares:

    F(x) = f[0]^2(x[0],...,x[n-1]) + ... + f[m-1]^2(x[0],...,x[n-1])

using the value of the function vector f[] and the Jacobian of f[].

REQUIREMENTS:
This algorithm will request the following information during its
operation:
* function vector f[] at a given point X
* function vector f[] and Jacobian of f[] (simultaneously) at a given
  point

There are several overloaded versions of the MinLMOptimize() function
which correspond to different LM-like optimization algorithms provided by
this unit. You should choose the version which accepts the fvec() and
jac() callbacks. The first one is used to calculate f[] at a given point,
the second one calculates f[] and the Jacobian df[i]/dx[j].

You can try to initialize the MinLMState structure with the VJ function
and then use an incorrect version of MinLMOptimize() (for example, a
version which works with a general form function and does not provide a
Jacobian), but it will lead to an exception being thrown after the first
attempt to calculate the Jacobian.

USAGE:
1. User initializes the algorithm state with a MinLMCreateVJ() call
2. User tunes solver parameters with MinLMSetCond(), MinLMSetStpMax() and
   other functions
3. User calls the MinLMOptimize() function which takes the algorithm
   state and callback functions
4. User calls MinLMResults() to get the solution
5. Optionally, the user may call MinLMRestartFrom() to solve another
   problem with the same N/M but another starting point and/or another
   function. MinLMRestartFrom() allows reusing an already initialized
   structure.

INPUT PARAMETERS:
    N       -   dimension, N>1
                * if given, only the leading N elements of X are used
                * if not given, automatically determined from the size
                  of X
    M       -   number of functions f[i]
    X       -   initial solution, array[0..N-1]

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

NOTES:
1. you may tune stopping conditions with the MinLMSetCond() function
2. if the target function contains exp() or other fast-growing functions,
   and the optimization algorithm makes steps so large that they lead to
   overflow, use the MinLMSetStpMax() function to bound the algorithm's
   steps.

  -- ALGLIB --
     Copyright 30.03.2009 by Bochkanov Sergey
*************************************************************************/
void minlmcreatevj(const ae_int_t n, const ae_int_t m, const real_1d_array &x, minlmstate &state, const xparams _xparams = alglib::xdefault);
void minlmcreatevj(const ae_int_t m, const real_1d_array &x, minlmstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This function provides a reverse communication interface, which is not
documented or recommended for use. See below for functions which provide
a better documented API.
*************************************************************************/
bool minlmiteration(minlmstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function activates/deactivates verification of the user-supplied
analytic Jacobian.

Upon activation of this option the OptGuard integrity checker performs
numerical differentiation of your target function vector at the initial
point (note: future versions may also perform the check at the final
point) and compares the numerical Jacobian with the analytic one provided
by you. If the difference is too large, an error flag is set and the
optimization session continues. After the optimization session is over,
you can retrieve the report which stores both Jacobians, with the
specific components highlighted as suspicious by OptGuard.

The OptGuard report can be retrieved with minlmoptguardresults().

IMPORTANT: gradient check is a high-overhead option which will cost you
           about 3*N additional function evaluations. In many cases it
           may cost as much as the rest of the optimization session. YOU
           SHOULD NOT USE IT IN PRODUCTION CODE UNLESS YOU WANT TO CHECK
           DERIVATIVES PROVIDED BY SOME THIRD PARTY.

NOTE: unlike the previous incarnation of the gradient checking code,
      OptGuard does NOT interrupt optimization even if it discovers a bad
      gradient.

INPUT PARAMETERS:
    State       -   structure used to store algorithm state
    TestStep    -   verification step used for numerical differentiation:
                    * TestStep=0 turns verification off
                    * TestStep>0 activates verification
                    You should carefully choose TestStep. A value which
                    is too large (so large that function behavior is
                    non-cubic at this scale) will lead to false alarms.
                    A step that is too short will result in rounding
                    errors dominating the numerical derivative. You may
                    use a different step for different parameters by
                    setting the scale with minlmsetscale().

=== EXPLANATION ==========================================================

In order to verify the gradient the algorithm performs the following
steps:
* two trial steps are made to X[i]-TestStep*S[i] and X[i]+TestStep*S[i],
  where X[i] is the i-th component of the initial point and S[i] is the
  scale of the i-th parameter
* F(X) is evaluated at these trial points
* one more evaluation is performed at the middle point of the interval
* a cubic model is built using function values and derivatives at the
  trial points, and its prediction is compared with the actual value at
  the middle point

  -- ALGLIB --
     Copyright 15.06.2014 by Bochkanov Sergey
*************************************************************************/
void minlmoptguardgradient(minlmstate &state, const double teststep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Results of the OptGuard integrity check, should be called after the
optimization session is over.

OptGuard checks the analytic Jacobian against a reference value obtained
by numerical differentiation with a user-specified step.

NOTE: other optimizers perform additional OptGuard checks for things like
      C0/C1-continuity violations. However, the LM optimizer can check
      only for an incorrect Jacobian.

      The reason is that, unlike line search methods, the LM optimizer
      does not perform extensive evaluations along a line. Thus, we
      simply do not have enough data to catch C0/C1-violations.

This check is activated with the minlmoptguardgradient() function.

The following flags are set when these errors are suspected:
* rep.badgradsuspected, and additionally:
  * rep.badgradfidx for the specific function (Jacobian row) suspected
  * rep.badgradvidx for the specific variable (Jacobian column) suspected
  * rep.badgradxbase, a point where the gradient/Jacobian is tested
  * rep.badgraduser, the user-provided gradient/Jacobian
  * rep.badgradnum, the reference gradient/Jacobian obtained via
    numerical differentiation

INPUT PARAMETERS:
    state   -   algorithm state

OUTPUT PARAMETERS:
    rep     -   OptGuard report

  -- ALGLIB --
     Copyright 21.11.2018 by Bochkanov Sergey
*************************************************************************/
void minlmoptguardresults(minlmstate &state, optguardreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This family of functions is used to start iterations of the nonlinear
optimizer.

These functions accept the following parameters:
    state   -   algorithm state
    fvec    -   callback which calculates the function vector fi[] at a
                given point x
    jac     -   callback which calculates the function vector fi[] and
                the Jacobian jac at a given point x
    rep     -   optional callback which is called after each iteration,
                can be NULL
    ptr     -   optional pointer which is passed to
                func/grad/hess/jac/rep, can be NULL

CALLBACK PARALLELISM

The MINLM optimizer supports parallel numerical differentiation
('callback parallelism'). This feature, which is present in commercial
ALGLIB editions, greatly accelerates optimization with numerical
differentiation of expensive target functions.

Callback parallelism is usually beneficial when computing a numerical
gradient requires more than several milliseconds. In this case the job of
computing individual gradient components can be split between multiple
threads. Even inexpensive targets can benefit from parallelism if you
have many variables.

If you solve a curve fitting problem, i.e. the function vector is
actually the same function computed at different points of the data
space, it may be better to use an LSFIT curve fitting solver, which
offers more fine-grained parallelism due to its knowledge of the problem
structure. In particular, it can accelerate both numerical
differentiation and problems with user-supplied gradients.

The ALGLIB Reference Manual, 'Working with commercial version' section,
tells how to activate callback parallelism for your programming language.

  -- ALGLIB --
     Copyright 03.12.2023 by Bochkanov Sergey
*************************************************************************/
void minlmoptimize(minlmstate &state, void (*fvec)(const real_1d_array &x, real_1d_array &fi, void *ptr), void (*rep)(const real_1d_array &x, double func, void *ptr) = NULL, void *ptr = NULL, const xparams _xparams = alglib::xdefault);
void minlmoptimize(minlmstate &state, void (*fvec)(const real_1d_array &x, real_1d_array &fi, void *ptr), void (*jac)(const real_1d_array &x, real_1d_array &fi, real_2d_array &jac, void *ptr), void (*rep)(const real_1d_array &x, double func, void *ptr) = NULL, void *ptr = NULL, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  

/************************************************************************* This subroutine submits a request for termination of the running optimizer. It should be called from a user-supplied callback when the user decides that it is time to "smoothly" terminate the optimization process. As a result, the optimizer stops at the point which was "current accepted" when the termination request was submitted and returns error code 8 (successful termination). INPUT PARAMETERS: State - optimizer structure NOTE: after a termination request the optimizer may perform several additional calls to user-supplied callbacks. It does NOT guarantee to stop immediately - it just guarantees that these additional calls will be discarded later. NOTE: calling this function on an optimizer which is NOT running will have no effect. NOTE: multiple calls to this function are possible. The first call is counted, subsequent calls are silently ignored. -- ALGLIB -- Copyright 08.10.2014 by Bochkanov Sergey *************************************************************************/
void minlmrequesttermination(minlmstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine restarts the LM algorithm from a new point. All optimization parameters are left unchanged. This function allows solving multiple optimization problems (which must have the same number of dimensions) without the object reallocation penalty. INPUT PARAMETERS: State - structure used for reverse communication previously allocated with MinLMCreateXXX call. X - new starting point. -- ALGLIB -- Copyright 30.07.2010 by Bochkanov Sergey *************************************************************************/
void minlmrestartfrom(minlmstate &state, const real_1d_array &x, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* Levenberg-Marquardt algorithm results NOTE: if you activated OptGuard integrity checking functionality and want to get the OptGuard report, it can be retrieved with the help of the minlmoptguardresults() function. INPUT PARAMETERS: State - algorithm state OUTPUT PARAMETERS: X - array[0..N-1], solution Rep - optimization report; includes termination codes and additional information. Termination codes are listed below, see comments for this structure for more info. Termination code is stored in the rep.terminationtype field: * -8 optimizer detected NAN/INF values either in the function itself, or in its Jacobian * -3 constraints are inconsistent * 2 relative step is no more than EpsX. * 5 MaxIts steps were taken * 7 stopping conditions are too stringent, further improvement is impossible * 8 terminated by user who called minlmrequesttermination(). X contains the point which was "current accepted" when the termination request was submitted. rep.f contains SUM(f[i]^2) at X -- ALGLIB -- Copyright 10.03.2009 by Bochkanov Sergey *************************************************************************/
void minlmresults(const minlmstate &state, real_1d_array &x, minlmreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  

/************************************************************************* Levenberg-Marquardt algorithm results Buffered implementation of MinLMResults(), which uses pre-allocated buffer to store X[]. If buffer size is too small, it resizes buffer. It is intended to be used in the inner cycles of performance critical algorithms where array reallocation penalty is too large to be ignored. -- ALGLIB -- Copyright 10.03.2009 by Bochkanov Sergey *************************************************************************/
void minlmresultsbuf(const minlmstate &state, real_1d_array &x, minlmreport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function is used to change acceleration settings. You can choose between two acceleration strategies: * AccType=0, no acceleration. * AccType=1, secant updates are used to update the quadratic model after each iteration. After a fixed number of iterations (or after model breakdown) we recalculate the quadratic model using the analytic Jacobian or finite differences. The number of secant-based iterations depends on optimization settings: about 3 iterations - when we have an analytic Jacobian, up to 2*N iterations - when we use finite differences to calculate the Jacobian. AccType=1 is recommended when the Jacobian calculation cost is prohibitively high (several Mx1 function vector calculations followed by several NxN Cholesky factorizations are faster than calculation of one M*N Jacobian). It should also be used when we have no Jacobian, because the finite difference approximation takes too much time to compute. The table below lists optimization protocols (XYZ protocol corresponds to MinLMCreateXYZ) and acceleration types they support (and use by default). ACCELERATION TYPES SUPPORTED BY OPTIMIZATION PROTOCOLS: protocol 0 1 comment V + + VJ + + FGH + DEFAULT VALUES: protocol 0 1 comment V x without acceleration it is so slooooooooow VJ x FGH x NOTE: this function should be called before optimization. An attempt to call it during algorithm iterations may result in unexpected behavior. NOTE: an attempt to call this function with an unsupported protocol/acceleration combination will result in an exception being thrown. -- ALGLIB -- Copyright 14.10.2010 by Bochkanov Sergey *************************************************************************/
void minlmsetacctype(minlmstate &state, const ae_int_t acctype, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets boundary constraints for LM optimizer Boundary constraints are inactive by default (after initial creation). They are preserved until explicitly turned off with another SetBC() call. INPUT PARAMETERS: State - structure stores algorithm state BndL - lower bounds, array[N]. If some (all) variables are unbounded, you may specify very small number or -INF (latter is recommended because it will allow solver to use better algorithm). BndU - upper bounds, array[N]. If some (all) variables are unbounded, you may specify very large number or +INF (latter is recommended because it will allow solver to use better algorithm). NOTE 1: it is possible to specify BndL[i]=BndU[i]. In this case I-th variable will be "frozen" at X[i]=BndL[i]=BndU[i]. NOTE 2: this solver has following useful properties: * bound constraints are always satisfied exactly * function is evaluated only INSIDE area specified by bound constraints or at its boundary -- ALGLIB -- Copyright 14.01.2011 by Bochkanov Sergey *************************************************************************/
void minlmsetbc(minlmstate &state, const real_1d_array &bndl, const real_1d_array &bndu, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets stopping conditions for the Levenberg-Marquardt optimization algorithm. INPUT PARAMETERS: State - structure which stores algorithm state EpsX - >=0 The subroutine finishes its work if on k+1-th iteration the condition |v|<=EpsX is fulfilled, where: * |.| means Euclidean norm * v - scaled step vector, v[i]=dx[i]/s[i] * dx - step vector, dx=X(k+1)-X(k) * s - scaling coefficients set by MinLMSetScale() Recommended values: 1E-9 ... 1E-12. MaxIts - maximum number of iterations. If MaxIts=0, the number of iterations is unlimited. Only Levenberg-Marquardt iterations are counted (L-BFGS/CG iterations are NOT counted because their cost is very low compared to that of LM). Passing EpsX=0 and MaxIts=0 (simultaneously) will lead to automatic stopping criterion selection (small EpsX). NOTE: it is not recommended to set large EpsX (say, 0.001). Because LM is a second-order method, it performs very precise steps anyway. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
void minlmsetcond(minlmstate &state, const double epsx, const ae_int_t maxits, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  

/************************************************************************* This function sets general linear constraints for LM optimizer Linear constraints are inactive by default (after initial creation). They are preserved until explicitly turned off with another minlmsetlc() call. INPUT PARAMETERS: State - structure stores algorithm state C - linear constraints, array[K,N+1]. Each row of C represents one constraint, either equality or inequality (see below): * first N elements correspond to coefficients, * last element corresponds to the right part. All elements of C (including right part) must be finite. CT - type of constraints, array[K]: * if CT[i]>0, then I-th constraint is C[i,*]*x >= C[i,n+1] * if CT[i]=0, then I-th constraint is C[i,*]*x = C[i,n+1] * if CT[i]<0, then I-th constraint is C[i,*]*x <= C[i,n+1] K - number of equality/inequality constraints, K>=0: * if given, only leading K elements of C/CT are used * if not given, automatically determined from sizes of C/CT IMPORTANT: if you have linear constraints, it is strongly recommended to set scale of variables with minlmsetscale(). QP solver which is used to calculate linearly constrained steps heavily relies on good scaling of input problems. IMPORTANT: solvers created with minlmcreatefgh() do not support linear constraints. NOTE: linear (non-bound) constraints are satisfied only approximately - there always exists some violation due to numerical errors and algorithmic limitations. NOTE: general linear constraints add significant overhead to solution process. Although solver performs roughly the same number of iterations (when compared with similar box-only constrained problem), each iteration now involves solution of linearly constrained QP subproblem, which requires ~3-5 times more Cholesky decompositions. Thus, if you can reformulate your problem in such a way that it has only box constraints, it may be beneficial to do so. 
-- ALGLIB -- Copyright 14.01.2011 by Bochkanov Sergey *************************************************************************/
void minlmsetlc(minlmstate &state, const real_2d_array &c, const integer_1d_array &ct, const ae_int_t k, const xparams _xparams = alglib::xdefault); void minlmsetlc(minlmstate &state, const real_2d_array &c, const integer_1d_array &ct, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function is used to activate/deactivate nonmonotonic steps. Such steps may improve convergence on noisy problems or ones with minor smoothness defects. In its standard mode, LM solver compares value at the trial point f[1] with the value at the current point f[0]. Only steps that decrease f() are accepted. When the nonmonotonic mode is activated, f[1] is compared with maximum over several previous locations: max(f[0],f[-1],...,f[-CNT]). We still accept only steps that decrease f(), however our reference value has changed. The net result is that steps with f[1]>f[0] are now allowed. Nonmonotonic steps can help to handle minor defects in the objective (e.g. small noise, discontinuous jumps or nonsmoothness). However, it is important that the overall shape of the problem is still smooth. It may also help to minimize perfectly smooth targets with complex geometries by allowing the solver to jump through curved valleys. However, sometimes nonmonotonic steps degrade convergence by allowing an optimizer to wander too far away from the solution, so this feature should be used only after careful testing. INPUT PARAMETERS: State - structure stores algorithm state Cnt - nonmonotonic memory length, Cnt>=0: * 0 for traditional monotonic steps * 2..3 is recommended for the nonmonotonic optimization -- ALGLIB -- Copyright 07.04.2024 by Bochkanov Sergey *************************************************************************/
void minlmsetnonmonotonicsteps(minlmstate &state, const ae_int_t cnt, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets specific finite difference formula to be used for numerical differentiation. It works only for optimizers created with minlmcreatev() function; in other cases it has no effect. INPUT PARAMETERS: State - structure previously allocated with MinLMCreateV() call. FormulaType - formula type: * 3 for a 3-point formula, which is also known as a symmetric difference quotient (the formula actually uses only two function values per variable: at x+h and x-h). A good choice for medium-accuracy setups, a default option. * 2 for a forward (or backward, depending on variable bounds) finite difference (f(x+h)-f(x))/h. This formula has the lowest accuracy. However, it is 4x faster than the 5-point formula and 2x faster than the 3-point one because, in addition to the central value f(x), it needs only one additional function evaluation per variable. -- ALGLIB -- Copyright 03.12.2024 by Bochkanov Sergey *************************************************************************/
void minlmsetnumdiff(minlmstate &state, const ae_int_t formulatype, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets scaling coefficients for LM optimizer. ALGLIB optimizers use scaling matrices to test stopping conditions (step size and gradient are scaled before comparison with tolerances). Scale of the I-th variable is a translation invariant measure of: a) "how large" the variable is b) how large the step should be to make significant changes in the function Generally, scale is NOT considered to be a form of preconditioner. But LM optimizer is unique in that it uses scaling matrix both in the stopping condition tests and as Marquardt damping factor. Proper scaling is very important for the algorithm performance. It is less important for the quality of results, but still has some influence (it is easier to converge when variables are properly scaled, so premature stopping is possible when very badly scaled variables are combined with relaxed stopping conditions). INPUT PARAMETERS: State - structure stores algorithm state S - array[N], non-zero scaling coefficients S[i] may be negative, sign doesn't matter. -- ALGLIB -- Copyright 14.01.2011 by Bochkanov Sergey *************************************************************************/
void minlmsetscale(minlmstate &state, const real_1d_array &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets maximum step length INPUT PARAMETERS: State - structure which stores algorithm state StpMax - maximum step length, >=0. Set StpMax to 0.0, if you don't want to limit step length. Use this subroutine when you optimize target function which contains exp() or other fast growing functions, and optimization algorithm makes too large steps which leads to overflow. This function allows us to reject steps that are too large (and therefore expose us to the possible overflow) without actually calculating function value at the x+stp*d. NOTE: non-zero StpMax leads to moderate performance degradation because intermediate step of preconditioned L-BFGS optimization is incompatible with limits on step size. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
void minlmsetstpmax(minlmstate &state, const double stpmax, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function turns on/off reporting. INPUT PARAMETERS: State - structure which stores algorithm state NeedXRep- whether iteration reports are needed or not If NeedXRep is True, algorithm will call rep() callback function if it is provided to MinLMOptimize(). Both Levenberg-Marquardt and internal L-BFGS iterations are reported. -- ALGLIB -- Copyright 02.04.2010 by Bochkanov Sergey *************************************************************************/
void minlmsetxrep(minlmstate &state, const bool needxrep, const xparams _xparams = alglib::xdefault);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void  function1_fvec(const real_1d_array &x, real_1d_array &fi, void *ptr)
{
    //
    // this callback calculates
    // f0(x0,x1) = 10*(x0+3)^2,
    // f1(x0,x1) = (x1-3)^2
    //
    fi[0] = 10*pow(x[0]+3,2);
    fi[1] = pow(x[1]-3,2);
}
void  function2_fvec(const real_1d_array &x, real_1d_array &fi, void *ptr)
{
    //
    // this callback calculates
    // f0(x0,x1) = x0^2+1
    // f1(x0,x1) = x1-1
    //
    fi[0] = x[0]*x[0]+1;
    fi[1] = x[1]-1;
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of F(x0,x1) = f0^2+f1^2, where 
        //
        //     f0(x0,x1) = 10*(x0+3)^2
        //     f1(x0,x1) = (x1-3)^2
        //
        // using several starting points and efficient restarts.
        //
        real_1d_array x;
        double epsx = 0.0000000001;
        ae_int_t maxits = 0;
        minlmstate state;
        minlmreport rep;

        //
        // create optimizer using minlmcreatev()
        //
        x = "[10,10]";
        minlmcreatev(2, x, 0.0001, state);
        minlmsetcond(state, epsx, maxits);
        alglib::minlmoptimize(state, function1_fvec);
        minlmresults(state, x, rep);
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [-3,+3]

        //
        // restart optimizer using minlmrestartfrom()
        //
        // we can use different starting point, different function,
        // different stopping conditions, but the problem size
        // must remain unchanged.
        //
        x = "[4,4]";
        minlmrestartfrom(state, x);
        alglib::minlmoptimize(state, function2_fvec);
        minlmresults(state, x, rep);
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [0,1]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void  function1_fvec(const real_1d_array &x, real_1d_array &fi, void *ptr)
{
    //
    // this callback calculates
    // f0(x0,x1) = 10*(x0+3)^2,
    // f1(x0,x1) = (x1-3)^2
    //
    fi[0] = 10*pow(x[0]+3,2);
    fi[1] = pow(x[1]-3,2);
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of F(x0,x1) = f0^2+f1^2, where 
        //
        //     f0(x0,x1) = 10*(x0+3)^2
        //     f1(x0,x1) = (x1-3)^2
        //
        // using "V" mode of the Levenberg-Marquardt optimizer (function values only,
        // no Jacobian information). The optimization algorithm uses function vector
        //
        //     f[] = {f0,f1}
        //
        // No other information (Jacobian, gradient, etc.) is needed.
        //
        // IMPORTANT: the  MINLM  optimizer  supports  parallel     numerical
        //            differentiation  ('callback   parallelism').  This  feature,
        //            which  is present  in  commercial  ALGLIB  editions, greatly
        //            accelerates optimization with numerical  differentiation  of
        //            expensive target functions.
        //
        //            Callback parallelism is usually  beneficial when computing a
        //            numerical gradient requires more than several  milliseconds.
        //            This particular  example,  of  course,  is  not  suited  for
        //            callback parallelism.
        //
        //            If  you  solve  a  curve fitting problem, i.e. the  function
        //            vector is actually the same function computed  at  different
        //            points of a data points space, then it may be better to  use
        //            an LSFIT curve fitting solver, which offers more fine-grained
        //            parallelism due to knowledge of the  problem  structure.  In
        //            particular, it can accelerate both numerical differentiation
        //            and problems with user-supplied gradients.
        //
        //            See ALGLIB Reference Manual, 'Working with commercial version'
        //            section,  and  comments  on   minlmoptimize()  function  for
        //            more information.
        //
        real_1d_array x = "[0,0]";
        real_1d_array s = "[1,1]";
        double epsx = 0.0000000001;
        ae_int_t maxits = 0;
        minlmstate state;
        minlmreport rep;

        //
        // Create optimizer, tell it to:
        // * use numerical differentiation with step equal to 0.0001
        // * use unit scale for all variables (s is a unit vector)
        // * stop after short enough step (less than epsx)
        //
        minlmcreatev(2, x, 0.0001, state);
        minlmsetcond(state, epsx, maxits);
        minlmsetscale(state, s);

        //
        // A new feature: nonmonotonic steps!
        //
        // In theory, LM solver should be used only for smooth and continuous objectives. In practice,
        // real life objectives are often results of some long numerical simulation and have defects
        // like small discontinuous jumps, small noise or minor nonsmoothness. Such defects often
        // prevent optimization progress because an uphill step may be required to move past the
        // defect.
        //
        // Nonmonotonic steps allow to tolerate a minor and temporary increase in the objective,
        // allowing progress beyond an obstacle. This feature is also essential for ill-conditioned
        // targets - it allows the solver to jump through a curved valley instead of navigating along
        // its bottom.
        //
        // However, sometimes nonmonotonic steps degrade convergence by  allowing  an optimizer to
        // wander too far away from the solution, so this feature should be used only after careful
        // testing.
        //
        // The code below sets the nonmonotonic memory length to 0 (which means traditional monotonic
        // optimization). If you want to try a nonmonotonic optimization, use 2 or 3 as a recommended
        // memory length.
        //
        minlmsetnonmonotonicsteps(state, 0);

        //
        // Optimize
        //
        alglib::minlmoptimize(state, function1_fvec);

        //
        // Test optimization results
        //
        // NOTE: because we use numerical differentiation, we do not
        //       verify Jacobian correctness - it is always "correct".
        //       However, if you switch to analytic gradient, consider
        //       checking it with OptGuard (see other examples).
        //
        minlmresults(state, x, rep);
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [-3,+3]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void  function1_fvec(const real_1d_array &x, real_1d_array &fi, void *ptr)
{
    //
    // this callback calculates
    // f0(x0,x1) = 10*(x0+3)^2,
    // f1(x0,x1) = (x1-3)^2
    //
    fi[0] = 10*pow(x[0]+3,2);
    fi[1] = pow(x[1]-3,2);
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of F(x0,x1) = f0^2+f1^2, where 
        //
        //     f0(x0,x1) = 10*(x0+3)^2
        //     f1(x0,x1) = (x1-3)^2
        //
        // with box constraints
        //
        //     -1 <= x0 <= +1
        //     -1 <= x1 <= +1
        //
        // using "V" mode of the Levenberg-Marquardt optimizer.  The  optimization
        // algorithm uses function  vector  f[] = {f0,f1}.  No  other  information
        // (Jacobian, gradient, etc.) is needed.
        //
        // IMPORTANT: the  MINLM  optimizer  supports  parallel     numerical
        //            differentiation  ('callback   parallelism').  This  feature,
        //            which  is present  in  commercial  ALGLIB  editions, greatly
        //            accelerates optimization with numerical  differentiation  of
        //            expensive target functions.
        //
        //            Callback parallelism is usually  beneficial when computing a
        //            numerical gradient requires more than several  milliseconds.
        //            This particular  example,  of  course,  is  not  suited  for
        //            callback parallelism.
        //
        //            If  you  solve  a  curve fitting problem, i.e. the  function
        //            vector is actually the same function computed  at  different
        //            points of a data points space, then it may be better to  use
        //            an LSFIT curve fitting solver, which offers more fine-grained
        //            parallelism due to knowledge of the  problem  structure.  In
        //            particular, it can accelerate both numerical differentiation
        //            and problems with user-supplied gradients.
        //
        //            See ALGLIB Reference Manual, 'Working with commercial version'
        //            section,  and  comments  on   minlmoptimize()  function  for
        //            more information.
        //
        real_1d_array x = "[0,0]";
        real_1d_array s = "[1,1]";
        real_1d_array bndl = "[-1,-1]";
        real_1d_array bndu = "[+1,+1]";
        double epsx = 0.0000000001;
        ae_int_t maxits = 0;
        minlmstate state;

        //
        // Create optimizer, tell it to:
        // * use numerical differentiation with step equal to 0.0001
        // * use unit scale for all variables (s is a unit vector)
        // * stop after short enough step (less than epsx)
        // * set box constraints
        //
        minlmcreatev(2, x, 0.0001, state);
        minlmsetbc(state, bndl, bndu);
        minlmsetcond(state, epsx, maxits);
        minlmsetscale(state, s);

        //
        // A new feature: nonmonotonic steps!
        //
        // In theory, LM solver should be used only for smooth and continuous objectives. In practice,
        // real life objectives are often results of some long numerical simulation and have defects
        // like small discontinuous jumps, small noise or minor nonsmoothness. Such defects often
        // prevent optimization progress because an uphill step may be required to move past the
        // defect.
        //
        // Nonmonotonic steps allow to tolerate a minor and temporary increase in the objective,
        // allowing progress beyond an obstacle. This feature is also essential for ill-conditioned
        // targets - it allows the solver to jump through a curved valley instead of navigating along
        // its bottom.
        //
        // However, sometimes nonmonotonic steps degrade convergence by  allowing  an optimizer to
        // wander too far away from the solution, so this feature should be used only after careful
        // testing.
        //
        // The code below sets the nonmonotonic memory length to 0 (which means traditional monotonic
        // optimization). If you want to try a nonmonotonic optimization, use 2 or 3 as a recommended
        // memory length.
        //
        minlmsetnonmonotonicsteps(state, 0);

        //
        // Optimize
        //
        alglib::minlmoptimize(state, function1_fvec);

        //
        // Test optimization results
        //
        // NOTE: because we use numerical differentiation, we do not
        //       verify Jacobian correctness - it is always "correct".
        //       However, if you switch to analytic gradient, consider
        //       checking it with OptGuard (see other examples).
        //
        minlmreport rep;
        minlmresults(state, x, rep);
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [-1,+1]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void  function1_fvec(const real_1d_array &x, real_1d_array &fi, void *ptr)
{
    //
    // this callback calculates
    // f0(x0,x1) = 10*(x0+3)^2,
    // f1(x0,x1) = (x1-3)^2
    //
    fi[0] = 10*pow(x[0]+3,2);
    fi[1] = pow(x[1]-3,2);
}
void  function1_jac(const real_1d_array &x, real_1d_array &fi, real_2d_array &jac, void *ptr)
{
    //
    // this callback calculates
    // f0(x0,x1) = 10*(x0+3)^2,
    // f1(x0,x1) = (x1-3)^2
    // and Jacobian matrix J = [dfi/dxj]
    //
    fi[0] = 10*pow(x[0]+3,2);
    fi[1] = pow(x[1]-3,2);
    jac[0][0] = 20*(x[0]+3);
    jac[0][1] = 0;
    jac[1][0] = 0;
    jac[1][1] = 2*(x[1]-3);
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of F(x0,x1) = f0^2+f1^2, where 
        //
        //     f0(x0,x1) = 10*(x0+3)^2
        //     f1(x0,x1) = (x1-3)^2
        //
        // using "VJ" mode of the Levenberg-Marquardt optimizer.  The optimization
        // algorithm uses the  function  vector  f[] = {f0,f1}  and  the  Jacobian
        // matrix J = {dfi/dxj}, both of them provided by user.
        //
        // IMPORTANT: the   MINLM   optimizer  supports     parallel     numerical
        //            differentiation  ('callback   parallelism').  This  feature,
        //            which  is present  in  commercial  ALGLIB  editions, greatly
        //            accelerates optimization with numerical  differentiation  of
        //            expensive target functions.
        //
        //            Callback parallelism is usually  beneficial when computing a
        //            numerical gradient requires more than several  milliseconds.
        //            This particular  example,  of  course,  is  not  suited  for
        //            callback parallelism.
        //
        //            If  you  solve  a  curve fitting problem, i.e. the  function
        //            vector is actually the same function computed  at  different
        //            points of a data points space, then it may be better to  use
        //            an LSFIT curve fitting solver, which offers more fine-grained
        //            parallelism due to knowledge of the  problem  structure.  In
        //            particular, it can accelerate both numerical differentiation
        //            and problems with user-supplied gradients.
        //
        //            See ALGLIB Reference Manual, 'Working with commercial version'
        //            section,  and  comments  on   minlmoptimize()  function  for
        //            more information.
        //
        real_1d_array x = "[0,0]";
        real_1d_array s = "[1,1]";
        double epsx = 0.0000000001;
        ae_int_t maxits = 0;
        minlmstate state;

        //
        // Create optimizer, tell it to:
        // * use analytic Jacobian provided by user
        // * use unit scale for all variables (s is a unit vector)
        // * stop after short enough step (less than epsx)
        //
        minlmcreatevj(2, x, state);
        minlmsetcond(state, epsx, maxits);
        minlmsetscale(state, s);

        //
        // A new feature: nonmonotonic steps!
        //
        // In theory, LM solver should be used only for smooth and continuous objectives. In practice,
        // real life objectives are often results of some long numerical simulation and have defects
        // like small discontinuous jumps, small noise or minor nonsmoothness. Such defects often
        // prevent optimization progress because an uphill step may be required to move past the
        // defect.
        //
        // Nonmonotonic steps allow the solver to tolerate a minor and temporary increase in the
        // objective, allowing progress past an obstacle. This feature is also essential for
        // ill-conditioned targets: it allows the solver to jump through a curved valley instead
        // of navigating along its bottom.
        //
        // However, nonmonotonic steps sometimes degrade convergence by allowing the optimizer to
        // wander too far away from the solution, so this feature should be used only after
        // careful testing.
        //
        // The code below sets the nonmonotonic memory length to 0 (which means traditional monotonic
        // optimization). If you want to try a nonmonotonic optimization, use 2 or 3 as a recommended
        // memory length.
        //
        minlmsetnonmonotonicsteps(state, 0);

        //
        // Optimize
        //
        alglib::minlmoptimize(state, function1_fvec, function1_jac);

        //
        // Test optimization results
        //
        minlmreport rep;
        minlmresults(state, x, rep);
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [-3,+3]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

minlpreport
minlpstate
minlpaddlc2
minlpaddlc2dense
minlpcreate
minlpoptimize
minlpresults
minlpresultsbuf
minlpsetalgodss
minlpsetalgoipm
minlpsetbc
minlpsetbcall
minlpsetbci
minlpsetcost
minlpsetlc
minlpsetlc2
minlpsetlc2dense
minlpsetscale
minlp_basic Basic linear programming example
/*************************************************************************
This structure stores the optimization report:
* f                 target function value
* lagbc             Lagrange coefficients for box constraints
* laglc             Lagrange coefficients for linear constraints
* y                 dual variables
* stats             array[N+M], statuses of box (N) and linear (M)
                    constraints. This array is filled only by the DSS
                    algorithm because IPM always stops at an INTERIOR
                    point:
                    * stats[i]>0 => constraint at upper bound (also used
                      for free non-basic variables set to zero)
                    * stats[i]<0 => constraint at lower bound
                    * stats[i]=0 => constraint is inactive, basic variable
* primalerror       primal feasibility error
* dualerror         dual feasibility error
* slackerror        complementary slackness error
* iterationscount   iteration count
* terminationtype   completion code (see below)

COMPLETION CODES

Completion codes:
* -4    LP problem is primal unbounded (dual infeasible)
* -3    LP problem is primal infeasible (dual unbounded)
*  1..4 successful completion
*  5    MaxIts steps were taken
*  7    stopping conditions are too stringent, further improvement is
        impossible, X contains best point found so far.

LAGRANGE COEFFICIENTS

A positive Lagrange coefficient means that the constraint is at its upper
bound. A negative coefficient means that the constraint is at its lower
bound. It is expected that at the solution the dual feasibility condition
holds:

    C + SUM(Ei*LagBC[i],i=0..n-1) + SUM(Ai*LagLC[i],i=0..m-1) ~ 0

where
* C is a cost vector (linear term)
* Ei is a vector with 1.0 at position I and 0 in other positions
* Ai is an I-th row of linear constraint matrix
*************************************************************************/
class minlpreport
{
public:
    minlpreport();
    minlpreport(const minlpreport &rhs);
    minlpreport& operator=(const minlpreport &rhs);
    virtual ~minlpreport();
    double f;
    real_1d_array lagbc;
    real_1d_array laglc;
    real_1d_array y;
    integer_1d_array stats;
    double primalerror;
    double dualerror;
    double slackerror;
    ae_int_t iterationscount;
    ae_int_t terminationtype;
};
/*************************************************************************
This object stores linear solver state.
You should use functions provided by the MinLP subpackage to work with
this object.
*************************************************************************/
class minlpstate
{
public:
    minlpstate();
    minlpstate(const minlpstate &rhs);
    minlpstate& operator=(const minlpstate &rhs);
    virtual ~minlpstate();
};
/*************************************************************************
This function appends the two-sided linear constraint  AL <= A*x <= AU  to
the list of currently present constraints.

The constraint is passed in compressed format: as a list of non-zero
entries of the coefficient vector A. Such an approach is more efficient
than dense storage for highly sparse constraint vectors.

INPUT PARAMETERS:
    State   -   structure previously allocated with minlpcreate() call.
    IdxA    -   array[NNZ], indexes of non-zero elements of A:
                * can be unsorted
                * can include duplicate indexes (corresponding entries of
                  ValA[] will be summed)
    ValA    -   array[NNZ], values of non-zero elements of A
    NNZ     -   number of non-zero coefficients in A
    AL, AU  -   lower and upper bounds:
                * AL=AU    => equality constraint A*x
                * AL<AU    => two-sided constraint AL<=A*x<=AU
                * AL=-INF  => one-sided constraint A*x<=AU
                * AU=+INF  => one-sided constraint AL<=A*x
                * AL=-INF, AU=+INF => constraint is ignored

  -- ALGLIB --
     Copyright 19.07.2018 by Bochkanov Sergey
*************************************************************************/
void minlpaddlc2(minlpstate &state, const integer_1d_array &idxa, const real_1d_array &vala, const ae_int_t nnz, const double al, const double au, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function appends the two-sided linear constraint  AL <= A*x <= AU  to
the list of currently present constraints.

This version accepts a dense constraint vector as input, but sparsifies it
for internal storage and processing. Thus, the time to add one constraint
is O(N) - we have to scan the entire array of length N. The sparse version
of this function is an order of magnitude faster for constraints with just
a few nonzeros per row.

INPUT PARAMETERS:
    State   -   structure previously allocated with minlpcreate() call.
    A       -   linear constraint coefficients, array[N]; the right side
                is NOT included.
    AL, AU  -   lower and upper bounds:
                * AL=AU    => equality constraint Ai*x
                * AL<AU    => two-sided constraint AL<=A*x<=AU
                * AL=-INF  => one-sided constraint Ai*x<=AU
                * AU=+INF  => one-sided constraint AL<=Ai*x
                * AL=-INF, AU=+INF => constraint is ignored

  -- ALGLIB --
     Copyright 19.07.2018 by Bochkanov Sergey
*************************************************************************/
void minlpaddlc2dense(minlpstate &state, const real_1d_array &a, const double al, const double au, const xparams _xparams = alglib::xdefault);
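The two row-appending functions can be used together. The fragment below (a sketch only, not compiled here; it assumes ALGLIB's optimization.h and a hypothetical 2-variable problem) appends x0+x1 <= 4 as a dense row and 2*x0 >= 1 in compressed format:

```cpp
#include "optimization.h"
using namespace alglib;

void append_rows(minlpstate &state)
{
    // dense row: all N coefficients listed explicitly
    real_1d_array arow = "[1,1]";
    minlpaddlc2dense(state, arow, fp_neginf, 4.0);

    // compressed row: only the single non-zero entry is passed
    integer_1d_array idx = "[0]";
    real_1d_array    val = "[2]";
    minlpaddlc2(state, idx, val, 1, 1.0, fp_posinf);
}
```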
/*************************************************************************
                            LINEAR PROGRAMMING

The subroutine creates an LP solver. After initial creation it contains
the default optimization problem with a zero cost vector, all variables
fixed to zero, and no constraints.

In order to actually solve something you should:
* set the cost vector with minlpsetcost()
* set variable bounds with minlpsetbc() or minlpsetbcall()
* specify the constraint matrix with one of the following functions:
  [*] minlpsetlc()        for dense one-sided constraints
  [*] minlpsetlc2dense()  for dense two-sided constraints
  [*] minlpsetlc2()       for sparse two-sided constraints
  [*] minlpaddlc2dense()  to add one dense row to the constraint matrix
  [*] minlpaddlc2()       to add one row to the constraint matrix
      (compressed format)
* call minlpoptimize() to run the solver and minlpresults() to get the
  solution vector and additional information.

By default, the LP solver uses the best algorithm available. As of ALGLIB
3.17, a sparse interior point (barrier) solver is used. Future releases of
ALGLIB may introduce other solvers.

The user may choose a specific LP algorithm by calling:
* minlpsetalgodss() for the revised dual simplex method with DSE pricing
  and the bounds flipping ratio test (aka long dual step). A large-scale
  sparse LU solver with Forest-Tomlin update is used internally as the
  linear algebra driver.
* minlpsetalgoipm() for the sparse interior point method

INPUT PARAMETERS:
    N       -   problem size

OUTPUT PARAMETERS:
    State   -   optimizer in the default state

  -- ALGLIB --
     Copyright 19.07.2018 by Bochkanov Sergey
*************************************************************************/
void minlpcreate(const ae_int_t n, minlpstate &state, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This function solves the LP problem.

INPUT PARAMETERS:
    State   -   algorithm state

You should use the minlpresults() function to access results after calls
to this function.

  -- ALGLIB --
     Copyright 19.07.2018 by Bochkanov Sergey.
*************************************************************************/
void minlpoptimize(minlpstate &state, const xparams _xparams = alglib::xdefault);


/*************************************************************************
LP solver results

INPUT PARAMETERS:
    State   -   algorithm state

OUTPUT PARAMETERS:
    X       -   array[N], solution (on failure: last trial point)
    Rep     -   optimization report. You should check Rep.TerminationType,
                which contains the completion code; other fields contain
                additional information about the algorithm's functioning.

Failure codes returned by the algorithm are:
* -4    LP problem is primal unbounded (dual infeasible)
* -3    LP problem is primal infeasible (dual unbounded)
* -2    IPM solver detected that the problem is either infeasible or
        unbounded

Success codes:
*  1..4 successful completion
*  5    MaxIts steps were taken

  -- ALGLIB --
     Copyright 11.01.2011 by Bochkanov Sergey
*************************************************************************/
void minlpresults(const minlpstate &state, real_1d_array &x, minlpreport &rep, const xparams _xparams = alglib::xdefault);


/*************************************************************************
LP results

Buffered implementation of MinLPResults() which uses a pre-allocated
buffer to store X[]. If the buffer size is too small, it resizes the
buffer. It is intended to be used in the inner cycles of performance-
critical algorithms where the array reallocation penalty is too large to
be ignored.

  -- ALGLIB --
     Copyright 11.01.2011 by Bochkanov Sergey
*************************************************************************/
void minlpresultsbuf(const minlpstate &state, real_1d_array &x, minlpreport &rep, const xparams _xparams = alglib::xdefault);
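A minimal sketch of the intended usage pattern (not compiled here; assumes ALGLIB's optimization.h, and a hypothetical caller that re-solves the same state several times): the output buffer is declared once outside the loop, so after the first call minlpresultsbuf() reuses its storage instead of reallocating X on every iteration.

```cpp
#include "optimization.h"
using namespace alglib;

void solve_repeatedly(minlpstate &state, int repetitions)
{
    real_1d_array x;    // allocated on the first call, reused afterwards
    minlpreport  rep;
    for(int k = 0; k < repetitions; k++)
    {
        minlpoptimize(state);
        minlpresultsbuf(state, x, rep);   // resizes x only if it is too small
    }
}
```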
/*************************************************************************
This function sets the LP algorithm to the revised dual simplex method.

The ALGLIB implementation of the dual simplex method supports advanced
performance and stability improvements like DSE pricing, the bounds
flipping ratio test (aka long dual step), Forest-Tomlin update, and
shifting.

INPUT PARAMETERS:
    State   -   optimizer
    Eps     -   stopping condition, Eps>=0:
                * should be a small number, about 1E-6 or 1E-7
                * zero value means that the solver automatically selects a
                  good value (which can differ between ALGLIB versions)
                * default value is zero
                The algorithm stops when the relative error is less than
                Eps.

===== TRACING DSS SOLVER =================================================

The DSS solver supports advanced tracing capabilities. You can trace
algorithm output by specifying the following trace symbols
(case-insensitive) by means of the trace_file() call:
* 'DSS'         - for basic trace of algorithm steps and decisions. Only
                  short scalars (function values and deltas) are printed.
                  N-dimensional quantities like search directions are NOT
                  printed.
* 'DSS.DETAILED'- for output of points being visited and search
                  directions. This symbol also implicitly defines 'DSS'.

You can control the output format by additionally specifying:
* nothing       to output in 6-digit exponential format
* 'PREC.E15'    to output in 15-digit exponential format
* 'PREC.F6'     to output in 6-digit fixed-point format

By default trace is disabled and adds no overhead to the optimization
process. However, specifying any of the symbols adds some formatting and
output-related overhead.

You may specify multiple symbols by separating them with commas:
> alglib::trace_file("DSS,PREC.F6", "path/to/trace.log")

  -- ALGLIB --
     Copyright 08.11.2020 by Bochkanov Sergey
*************************************************************************/
void minlpsetalgodss(minlpstate &state, const double eps, const xparams _xparams = alglib::xdefault);
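A short sketch tying the tracing facility and the DSS algorithm selection together (not compiled here; assumes ALGLIB's optimization.h; the log path is a placeholder):

```cpp
#include "optimization.h"
using namespace alglib;

void solve_with_trace(minlpstate &state, real_1d_array &x, minlpreport &rep)
{
    // basic DSS trace, 6-digit fixed-point output, written to a log file
    trace_file("DSS,PREC.F6", "path/to/trace.log");

    minlpsetalgodss(state, 0.0);   // 0.0 => automatically selected tolerance
    minlpoptimize(state);
    minlpresults(state, x, rep);
}
```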
/*************************************************************************
This function sets the LP algorithm to the sparse interior point method.

ALGORITHM INFORMATION:
* this algorithm is our implementation of the interior point method as
  formulated by R.J.Vanderbei, with minor modifications to the algorithm
  (damped Newton directions are extensively used)
* like all interior point methods, this algorithm tends to converge in
  roughly the same number of iterations (between 15 and 50) independently
  of the problem dimensionality

INPUT PARAMETERS:
    State   -   optimizer
    Eps     -   stopping condition, Eps>=0:
                * should be a small number, about 1E-6 or 1E-8
                * zero value means that the solver automatically selects a
                  good value (which can differ between ALGLIB versions)
                * default value is zero
                The algorithm stops when the primal error AND dual error
                AND duality gap are less than Eps.

===== TRACING IPM SOLVER =================================================

The IPM solver supports advanced tracing capabilities. You can trace
algorithm output by specifying the following trace symbols
(case-insensitive) by means of the trace_file() call:
* 'IPM'         - for basic trace of algorithm steps and decisions. Only
                  short scalars (function values and deltas) are printed.
                  N-dimensional quantities like search directions are NOT
                  printed.
* 'IPM.DETAILED'- for output of points being visited and search
                  directions. This symbol also implicitly defines 'IPM'.

You can control the output format by additionally specifying:
* nothing       to output in 6-digit exponential format
* 'PREC.E15'    to output in 15-digit exponential format
* 'PREC.F6'     to output in 6-digit fixed-point format

By default trace is disabled and adds no overhead to the optimization
process. However, specifying any of the symbols adds some formatting and
output-related overhead.

You may specify multiple symbols by separating them with commas:
> alglib::trace_file("IPM,PREC.F6", "path/to/trace.log")

  -- ALGLIB --
     Copyright 08.11.2020 by Bochkanov Sergey
*************************************************************************/
void minlpsetalgoipm(minlpstate &state, const double eps, const xparams _xparams = alglib::xdefault); void minlpsetalgoipm(minlpstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets box constraints for the LP solver (all variables at
once, different constraints for different variables).

The default state of constraints is to have all variables fixed at zero.
You have to overwrite it with your own constraint vector. Constraint
status is preserved until constraints are explicitly overwritten with
another minlpsetbc() call, overwritten with minlpsetbcall(), or partially
overwritten with a minlpsetbci() call.

The following types of constraints are supported:

    DESCRIPTION         CONSTRAINT              HOW TO SPECIFY
    fixed variable      x[i]=Bnd[i]             BndL[i]=BndU[i]
    lower bound         BndL[i]<=x[i]           BndU[i]=+INF
    upper bound         x[i]<=BndU[i]           BndL[i]=-INF
    range               BndL[i]<=x[i]<=BndU[i]  ...
    free variable       -                       BndL[I]=-INF, BndU[I]=+INF

INPUT PARAMETERS:
    State   -   structure stores algorithm state
    BndL    -   lower bounds, array[N].
    BndU    -   upper bounds, array[N].

NOTE: infinite values can be specified by means of Double.PositiveInfinity
      and Double.NegativeInfinity (in C#) and alglib::fp_posinf and
      alglib::fp_neginf (in C++).

NOTE: you may replace infinities with very small/very large values, but it
      is not recommended because large numbers may introduce large
      numerical errors in the algorithm.

NOTE: if constraints for all variables are the same, you may use
      minlpsetbcall(), which allows you to specify constraints without
      using arrays.

NOTE: BndL>BndU will result in the LP problem being recognized as
      infeasible.

  -- ALGLIB --
     Copyright 19.07.2018 by Bochkanov Sergey
*************************************************************************/
void minlpsetbc(minlpstate &state, const real_1d_array &bndl, const real_1d_array &bndu, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This function sets box constraints for the LP solver (all variables at
once, same constraints for all variables).

The default state of constraints is to have all variables fixed at zero.
You have to overwrite it with your own constraint vector. Constraint
status is preserved until constraints are explicitly overwritten with
another minlpsetbcall() or minlpsetbc() call, or partially overwritten
with minlpsetbci().

The following types of constraints are supported:

    DESCRIPTION         CONSTRAINT              HOW TO SPECIFY
    fixed variable      x[i]=Bnd[i]             BndL[i]=BndU[i]
    lower bound         BndL[i]<=x[i]           BndU[i]=+INF
    upper bound         x[i]<=BndU[i]           BndL[i]=-INF
    range               BndL[i]<=x[i]<=BndU[i]  ...
    free variable       -                       BndL[I]=-INF, BndU[I]=+INF

INPUT PARAMETERS:
    State   -   structure stores algorithm state
    BndL    -   lower bound, same for all variables
    BndU    -   upper bound, same for all variables

NOTE: infinite values can be specified by means of Double.PositiveInfinity
      and Double.NegativeInfinity (in C#) and alglib::fp_posinf and
      alglib::fp_neginf (in C++).

NOTE: you may replace infinities with very small/very large values, but it
      is not recommended because large numbers may introduce large
      numerical errors in the algorithm.

NOTE: minlpsetbc() can be used to specify different constraints for
      different variables.

NOTE: BndL>BndU will result in the LP problem being recognized as
      infeasible.

  -- ALGLIB --
     Copyright 19.07.2018 by Bochkanov Sergey
*************************************************************************/
void minlpsetbcall(minlpstate &state, const double bndl, const double bndu, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This function sets box constraints for the I-th variable (other variables
are not modified).

The default state of constraints is to have all variables fixed at zero.
You have to overwrite it with your own constraint vector.

The following types of constraints are supported:

    DESCRIPTION         CONSTRAINT              HOW TO SPECIFY
    fixed variable      x[i]=Bnd[i]             BndL[i]=BndU[i]
    lower bound         BndL[i]<=x[i]           BndU[i]=+INF
    upper bound         x[i]<=BndU[i]           BndL[i]=-INF
    range               BndL[i]<=x[i]<=BndU[i]  ...
    free variable       -                       BndL[I]=-INF, BndU[I]=+INF

INPUT PARAMETERS:
    State   -   structure stores algorithm state
    I       -   variable index, in [0,N)
    BndL    -   lower bound for the I-th variable
    BndU    -   upper bound for the I-th variable

NOTE: infinite values can be specified by means of Double.PositiveInfinity
      and Double.NegativeInfinity (in C#) and alglib::fp_posinf and
      alglib::fp_neginf (in C++).

NOTE: you may replace infinities with very small/very large values, but it
      is not recommended because large numbers may introduce large
      numerical errors in the algorithm.

NOTE: minlpsetbc() can be used to specify different constraints for
      different variables.

NOTE: BndL>BndU will result in the LP problem being recognized as
      infeasible.

  -- ALGLIB --
     Copyright 19.07.2018 by Bochkanov Sergey
*************************************************************************/
void minlpsetbci(minlpstate &state, const ae_int_t i, const double bndl, const double bndu, const xparams _xparams = alglib::xdefault);
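The three box-constraint setters above can be combined; later calls override earlier ones. A sketch for a hypothetical 3-variable problem (not compiled here; assumes ALGLIB's optimization.h):

```cpp
#include "optimization.h"
using namespace alglib;

void set_boxes(minlpstate &state)
{
    // per-variable vectors: -1<=x0<=1, x1>=0, x2 free
    real_1d_array bndl = "[-1,0,-inf]";
    real_1d_array bndu = "[+1,+inf,+inf]";
    minlpsetbc(state, bndl, bndu);

    // or the same scalar bounds for every variable: 0<=x[i]<=10
    minlpsetbcall(state, 0.0, 10.0);

    // partially overwrite: fix variable 1 at 5 (BndL=BndU)
    minlpsetbci(state, 1, 5.0, 5.0);
}
```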


/*************************************************************************
This function sets the cost term for the LP solver.

By default, the cost term is zero.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    C       -   cost term, array[N].

  -- ALGLIB --
     Copyright 19.07.2018 by Bochkanov Sergey
*************************************************************************/
void minlpsetcost(minlpstate &state, const real_1d_array &c, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This function sets one-sided linear constraints A*x ~ AU, where "~" can be
a mix of "<=", "=" and ">=".

IMPORTANT: this function is provided here for compatibility with the rest
           of the ALGLIB optimizers which accept constraints in a format
           like this one. Many real-life problems feature two-sided
           constraints like a0 <= a*x <= a1. It is really inefficient to
           add them as a pair of one-sided constraints. Use
           minlpsetlc2dense(), minlpsetlc2(), minlpaddlc2() (or its sparse
           version) wherever possible.

INPUT PARAMETERS:
    State   -   structure previously allocated with minlpcreate() call.
    A       -   linear constraints, array[K,N+1]. Each row of A represents
                one constraint, with the first N elements being linear
                coefficients and the last element being the right side.
    CT      -   constraint types, array[K]:
                * if CT[i]>0, then I-th constraint is A[i,*]*x >= A[i,n]
                * if CT[i]=0, then I-th constraint is A[i,*]*x  = A[i,n]
                * if CT[i]<0, then I-th constraint is A[i,*]*x <= A[i,n]
    K       -   number of equality/inequality constraints, K>=0; if not
                given, inferred from the sizes of A and CT.

  -- ALGLIB --
     Copyright 19.07.2018 by Bochkanov Sergey
*************************************************************************/
void minlpsetlc(minlpstate &state, const real_2d_array &a, const integer_1d_array &ct, const ae_int_t k, const xparams _xparams = alglib::xdefault); void minlpsetlc(minlpstate &state, const real_2d_array &a, const integer_1d_array &ct, const xparams _xparams = alglib::xdefault);
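A sketch of the legacy one-sided format (not compiled here; assumes ALGLIB's optimization.h and a hypothetical 2-variable problem), encoding x0+x1 <= 4 (CT=-1) and x0-x1 >= -1 (CT=+1); the last column of A holds the right sides:

```cpp
#include "optimization.h"
using namespace alglib;

void set_onesided(minlpstate &state)
{
    real_2d_array a = "[[1,1,4],[1,-1,-1]]";   // [coefficients | right side]
    integer_1d_array ct = "[-1,+1]";           // <= and >= respectively
    minlpsetlc(state, a, ct, 2);
}
```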


/*************************************************************************
This function sets two-sided linear constraints AL <= A*x <= AU with a
sparse constraining matrix A. Recommended for large-scale problems.

This function overwrites linear (non-box) constraints set by previous
calls (if such calls were made).

INPUT PARAMETERS:
    State   -   structure previously allocated with minlpcreate() call.
    A       -   sparse matrix with size [K,N] (exactly!). Each row of A
                represents one general linear constraint. A can be stored
                in any sparse storage format.
    AL, AU  -   lower and upper bounds, array[K]:
                * AL[i]=AU[i]  => equality constraint Ai*x
                * AL[i]<AU[i]  => two-sided constraint AL[i]<=Ai*x<=AU[i]
                * AL[i]=-INF   => one-sided constraint Ai*x<=AU[i]
                * AU[i]=+INF   => one-sided constraint AL[i]<=Ai*x
                * AL[i]=-INF, AU[i]=+INF => constraint is ignored
    K       -   number of equality/inequality constraints, K>=0. If K=0 is
                specified, A, AL, AU are ignored.

  -- ALGLIB --
     Copyright 19.07.2018 by Bochkanov Sergey
*************************************************************************/
void minlpsetlc2(minlpstate &state, const sparsematrix &a, const real_1d_array &al, const real_1d_array &au, const ae_int_t k, const xparams _xparams = alglib::xdefault);
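A sketch of the sparse path (not compiled here; assumes ALGLIB's optimization.h and a hypothetical 3-variable problem): the matrix is built in hash-table format via sparsecreate()/sparseset(); per the comment above, any sparse storage format is accepted here.

```cpp
#include "optimization.h"
using namespace alglib;

void set_sparse_constraints(minlpstate &state)
{
    sparsematrix a;
    sparsecreate(2, 3, 4, a);       // K=2 rows, N=3 columns, 4 nonzeros expected
    sparseset(a, 0, 0, 1.0);        // row 0:  x0 + x2 = 1
    sparseset(a, 0, 2, 1.0);
    sparseset(a, 1, 0, 2.0);        // row 1:  2*x0 - x1 <= 5
    sparseset(a, 1, 1, -1.0);

    real_1d_array al = "[1,-inf]";
    real_1d_array au = "[1,5]";
    minlpsetlc2(state, a, al, au, 2);
}
```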
/*************************************************************************
This function sets two-sided linear constraints AL <= A*x <= AU.

This version accepts a dense matrix as input; internally the LP solver
uses sparse storage anyway (most LP problems are sparse), but for your
convenience it may accept dense inputs. This function overwrites linear
constraints set by previous calls (if such calls were made).

We recommend using the sparse version of this function unless you solve a
small-scale LP problem (less than a few hundred variables).

NOTE: there also exist several versions of this function:
      * a one-sided dense version which accepts constraints in the same
        format as the one used by QP and NLP solvers
      * a two-sided sparse version which accepts a sparse matrix
      * a two-sided dense version which allows you to add constraints row
        by row
      * a two-sided sparse version which allows you to add constraints row
        by row

INPUT PARAMETERS:
    State   -   structure previously allocated with minlpcreate() call.
    A       -   linear constraints, array[K,N]. Each row of A represents
                one constraint. One-sided inequality constraints, two-
                sided inequality constraints, and equality constraints are
                supported (see below)
    AL, AU  -   lower and upper bounds, array[K]:
                * AL[i]=AU[i]  => equality constraint Ai*x
                * AL[i]<AU[i]  => two-sided constraint AL[i]<=Ai*x<=AU[i]
                * AL[i]=-INF   => one-sided constraint Ai*x<=AU[i]
                * AU[i]=+INF   => one-sided constraint AL[i]<=Ai*x
                * AL[i]=-INF, AU[i]=+INF => constraint is ignored
    K       -   number of equality/inequality constraints, K>=0; if not
                given, inferred from the sizes of A, AL, AU.

  -- ALGLIB --
     Copyright 19.07.2018 by Bochkanov Sergey
*************************************************************************/
void minlpsetlc2dense(minlpstate &state, const real_2d_array &a, const real_1d_array &al, const real_1d_array &au, const ae_int_t k, const xparams _xparams = alglib::xdefault); void minlpsetlc2dense(minlpstate &state, const real_2d_array &a, const real_1d_array &al, const real_1d_array &au, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This function sets scaling coefficients.

ALGLIB optimizers use scaling matrices to test stopping conditions and as
a preconditioner.

The scale of the I-th variable is a translation-invariant measure of:
a) "how large" the variable is
b) how large a step should be to make significant changes in the function

INPUT PARAMETERS:
    State   -   structure stores algorithm state
    S       -   array[N], non-zero scaling coefficients. S[i] may be
                negative; the sign doesn't matter.

  -- ALGLIB --
     Copyright 19.07.2018 by Bochkanov Sergey
*************************************************************************/
void minlpsetscale(minlpstate &state, const real_1d_array &s, const xparams _xparams = alglib::xdefault);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates how to minimize
        //
        //     F(x0,x1) = -0.1*x0 - x1
        //
        // subject to box constraints
        //
        //     -1 <= x0,x1 <= +1 
        //
        // and general linear constraints
        //
        //     x0 - x1 >= -1
        //     x0 + x1 <=  1
        //
        // We use the sparse interior point solver provided by ALGLIB for
        // this task. Box constraints are specified by means of constraint
        // vectors bndl and bndu (we have bndl<=x<=bndu). General linear
        // constraints are specified as AL<=A*x<=AU, with AL/AU being 2x1
        // vectors and A being a 2x2 matrix.
        //
        // NOTE: some/all components of AL/AU can be +-INF, same applies to
        //       bndl/bndu. You can also have AL[i]=AU[i] (as well as
        //       BndL[i]=BndU[i]).
        //
        real_2d_array a = "[[1,-1],[1,+1]]";
        real_1d_array al = "[-1,-inf]";
        real_1d_array au = "[+inf,+1]";
        real_1d_array c = "[-0.1,-1]";
        real_1d_array s = "[1,1]";
        real_1d_array bndl = "[-1,-1]";
        real_1d_array bndu = "[+1,+1]";
        real_1d_array x;
        minlpstate state;
        minlpreport rep;

        minlpcreate(2, state);

        //
        // Set cost vector, box constraints, general linear constraints.
        //
        // Box constraints can be set in one call to minlpsetbc() or minlpsetbcall()
        // (latter sets same constraints for all variables and accepts two scalars
        // instead of two vectors).
        //
        // General linear constraints can be specified in several ways:
        // * minlpsetlc2dense() - accepts dense 2D array as input; sometimes this
        //   approach is more convenient, although less memory-efficient.
        // * minlpsetlc2() - accepts sparse matrix as input
        // * minlpaddlc2dense() - appends one row to the current set of constraints;
        //   row being appended is specified as dense vector
        // * minlpaddlc2() - appends one row to the current set of constraints;
        //   row being appended is specified as sparse set of elements
        // Independently from specific function being used, LP solver uses sparse
        // storage format for internal representation of constraints.
        //
        minlpsetcost(state, c);
        minlpsetbc(state, bndl, bndu);
        minlpsetlc2dense(state, a, al, au, 2);

        //
        // Set scale of the parameters.
        //
        // It is strongly recommended that you set scale of your variables.
        // Knowing their scales is essential for evaluation of stopping criteria
        // and for preconditioning of the algorithm steps.
        // You can find more information on scaling at http://www.alglib.net/optimization/scaling.php
        //
        minlpsetscale(state, s);

        //
        // Solve with the sparse IPM.
        //
        // Commercial ALGLIB can parallelize sparse Cholesky factorization which is the
        // most time-consuming part of the algorithm. See the ALGLIB Reference Manual for
        // more information on how to activate parallelism support.
        //
        minlpsetalgoipm(state);
        minlpoptimize(state);
        minlpresults(state, x, rep);
        printf("%s\n", x.tostring(3).c_str()); // EXPECTED: [0,1]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

minmoreport
minmostate
minmoaddlc2
minmoaddlc2dense
minmoaddlc2sparsefromdense
minmocreate
minmocreatef
minmoiteration
minmooptimize
minmorequesttermination
minmorestartfrom
minmoresults
minmosetalgonbi
minmosetbc
minmosetcond
minmosetlc2
minmosetlc2dense
minmosetlc2mixed
minmosetnlc2
minmosetscale
minmosetxrep
minmo_biobjective Unconstrained biobjective optimization
minmo_biobjective_constr Nonlinearly constrained biobjective optimization
/*************************************************************************
These fields store the optimization report:
* inneriterationscount      total number of inner iterations
* outeriterationscount      number of internal optimization sessions
                            performed
* nfev                      number of gradient evaluations
* terminationtype           termination type (see below)

Scaled constraint violations (maximum over all Pareto points) are
reported:
* bcerr                     maximum violation of the box constraints
* bcidx                     index of the most violated box constraint (or
                            -1, if all box constraints are satisfied or
                            there are no box constraints)
* lcerr                     maximum violation of the linear constraints,
                            computed as the maximum scaled distance
                            between the final point and the constraint
                            boundary.
* lcidx                     index of the most violated linear constraint
                            (or -1, if all constraints are satisfied or
                            there are no general linear constraints)
* nlcerr                    maximum violation of the nonlinear constraints
* nlcidx                    index of the most violated nonlinear
                            constraint (or -1, if all constraints are
                            satisfied or there are no nonlinear
                            constraints)

Violations of the box constraints are scaled on a per-component basis
according to the scale vector s[] as specified by minmosetscale().
Violations of the general linear constraints are also computed using
user-supplied variable scaling. Violations of the nonlinear constraints
are computed "as is".

TERMINATION CODES

The TerminationType field contains the completion code, which can be
either:

=== FAILURE CODES ===
* -8    internal integrity control detected infinite or NAN values in
        function/gradient. Abnormal termination signaled.
* -3    box constraints are infeasible.
        Note: infeasibility of non-box constraints does NOT trigger
        emergency completion; you have to examine bcerr/lcerr/nlcerr to
        detect possibly inconsistent constraints.

=== SUCCESS CODES ===
*  2    relative step is no more than EpsX.
*  5    MaxIts steps were taken
*  7    stopping conditions are too stringent, further improvement is
        impossible, X contains best point found so far.

NOTE: The solver internally performs many optimization sessions: one for
      each Pareto point, and some amount of preparatory optimizations.
      Different optimization sessions may return different completion
      codes. If at least one of the internal optimizations failed, its
      failure code is returned. If none of them failed, the most frequent
      code is returned.

Other fields of this structure are not documented and should not be used!
*************************************************************************/
class minmoreport
{
public:
    minmoreport();
    minmoreport(const minmoreport &rhs);
    minmoreport& operator=(const minmoreport &rhs);
    virtual ~minmoreport();
    ae_int_t inneriterationscount;
    ae_int_t outeriterationscount;
    ae_int_t nfev;
    ae_int_t terminationtype;
    double bcerr;
    ae_int_t bcidx;
    double lcerr;
    ae_int_t lcidx;
    double nlcerr;
    ae_int_t nlcidx;
};
/*************************************************************************
This object stores nonlinear optimizer state. You should use functions
provided by the MinMO subpackage to work with this object.
*************************************************************************/
class minmostate
{
public:
    minmostate();
    minmostate(const minmostate &rhs);
    minmostate& operator=(const minmostate &rhs);
    virtual ~minmostate();
};
/*************************************************************************
This function appends a two-sided linear constraint AL <= A*x <= AU to
the list of sparse constraints.

The constraint is passed in the compressed format: as a list of non-zero
entries of the coefficient vector A. Such an approach is more efficient
than dense storage for highly sparse constraint vectors.

INPUT PARAMETERS:
    State   -   structure previously allocated with minmocreate() call.
    IdxA    -   array[NNZ], indexes of non-zero elements of A:
                * can be unsorted
                * can include duplicate indexes (corresponding entries of
                  ValA[] will be summed)
    ValA    -   array[NNZ], values of non-zero elements of A
    NNZ     -   number of non-zero coefficients in A
    AL, AU  -   lower and upper bounds;
                * AL=AU    => equality constraint A*x
                * AL<AU    => two-sided constraint AL<=A*x<=AU
                * AL=-INF  => one-sided constraint A*x<=AU
                * AU=+INF  => one-sided constraint AL<=A*x
                * AL=-INF, AU=+INF => constraint is ignored

-- ALGLIB --
   Copyright 19.07.2018 by Bochkanov Sergey
*************************************************************************/
void minmoaddlc2(minmostate &state, const integer_1d_array &idxa, const real_1d_array &vala, const ae_int_t nnz, const double al, const double au, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function appends a two-sided linear constraint AL<=A*x<=AU to the
dense constraints list.

INPUT PARAMETERS:
    State   -   structure previously allocated with minmocreate() call.
    A       -   linear constraint coefficients, array[N]; the right side
                is NOT included.
    AL, AU  -   lower and upper bounds;
                * AL=AU    => equality constraint A*x
                * AL<AU    => two-sided constraint AL<=A*x<=AU
                * AL=-INF  => one-sided constraint A*x<=AU
                * AU=+INF  => one-sided constraint AL<=A*x
                * AL=-INF, AU=+INF => constraint is ignored

-- ALGLIB --
   Copyright 19.07.2018 by Bochkanov Sergey
*************************************************************************/
void minmoaddlc2dense(minmostate &state, const real_1d_array &a, const double al, const double au, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function appends a two-sided linear constraint AL <= A*x <= AU to
the list of currently present sparse constraints.

The constraint vector A is passed as a dense array which is internally
sparsified by this function.

INPUT PARAMETERS:
    State   -   structure previously allocated with minmocreate() call.
    DA      -   array[N], constraint vector
    AL, AU  -   lower and upper bounds;
                * AL=AU    => equality constraint A*x
                * AL<AU    => two-sided constraint AL<=A*x<=AU
                * AL=-INF  => one-sided constraint A*x<=AU
                * AU=+INF  => one-sided constraint AL<=A*x
                * AL=-INF, AU=+INF => constraint is ignored

-- ALGLIB --
   Copyright 19.07.2018 by Bochkanov Sergey
*************************************************************************/
void minmoaddlc2sparsefromdense(minmostate &state, const real_1d_array &da, const double al, const double au, const xparams _xparams = alglib::xdefault);
/*************************************************************************
                    MULTI-OBJECTIVE OPTIMIZATION

DESCRIPTION:

The solver minimizes an M-dimensional vector function F(x) of N arguments
subject to any combination of:
* box constraints
* two-sided linear equality/inequality constraints AL<=A*x<=AU, where
  some of AL/AU can be infinite (i.e. missing)
* two-sided nonlinear equality/inequality constraints NL<=C(x)<=NU, where
  some of NL/NU can be infinite (i.e. missing)

REQUIREMENTS:
* F(), C() are continuously differentiable on the feasible set and on its
  neighborhood

USAGE:

1. User initializes the algorithm state using either:
   * minmocreate()  to perform optimization with a user-supplied Jacobian
   * minmocreatef() to perform optimization with numerical differentiation

2. User chooses which multi-objective solver to use. At present only the
   NBI (Normal Boundary Intersection) solver is implemented, which is
   activated by calling minmosetalgonbi().

3. User adds boundary and/or linear and/or nonlinear constraints by
   calling one of the following functions:
   a) minmosetbc() for boundary constraints
   b) minmosetlc2() for two-sided sparse linear constraints;
      minmosetlc2dense() for two-sided dense linear constraints;
      minmosetlc2mixed() for two-sided mixed sparse/dense constraints
   c) minmosetnlc2() for two-sided nonlinear constraints
   You may combine (a), (b) and (c) in one optimization problem.

4. User sets the scale of the variables with the minmosetscale() function.
   It is VERY important to set the scale of the variables, because
   nonlinearly constrained problems are hard to solve when the variables
   are badly scaled.

5. User sets stopping conditions with minmosetcond().

6. Finally, user calls the minmooptimize() function which takes the
   algorithm state and pointers (delegates, etc.) to the callback
   functions which calculate F/C.

7. User calls minmoresults() to get the solution.

8. Optionally, user may call minmorestartfrom() to solve another problem
   with the same M,N but another starting point. minmorestartfrom()
   allows you to reuse an already initialized optimizer structure.

INPUT PARAMETERS:
    N       -   variables count, N>0:
                * if given, only leading N elements of X are used
                * if not given, automatically determined from the size of X
    M       -   objectives count, M>0.
                M=1 is possible, although it makes little sense - it is
                better to use MinNLC directly.
    X       -   starting point, array[N]:
                * it is better to set X to a feasible point
                * but X can be infeasible, in which case the algorithm
                  will try to enforce feasibility during the initial
                  stages of the optimization

OUTPUT PARAMETERS:
    State   -   structure that stores algorithm state

-- ALGLIB --
   Copyright 01.03.2023 by Bochkanov Sergey
*************************************************************************/
void minmocreate(const ae_int_t n, const ae_int_t m, const real_1d_array &x, minmostate &state, const xparams _xparams = alglib::xdefault);
void minmocreate(const ae_int_t m, const real_1d_array &x, minmostate &state, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This subroutine is a finite difference variant of minmocreate(). It uses
finite differences in order to differentiate the target function.

The description below contains information which is specific to this
function only. We recommend reading the comments on minmocreate() too.

INPUT PARAMETERS:
    N       -   variables count, N>0:
                * if given, only leading N elements of X are used
                * if not given, automatically determined from the size of X
    M       -   objectives count, M>0.
                M=1 is possible, although it makes little sense - it is
                better to use MinNLC directly.
    X       -   starting point, array[N]:
                * it is better to set X to a feasible point
                * but X can be infeasible, in which case the algorithm
                  will try to enforce feasibility during the initial
                  stages of the optimization
    DiffStep-   differentiation step, >0

OUTPUT PARAMETERS:
    State   -   structure that stores algorithm state

NOTES:
1. The algorithm uses a 4-point central formula for differentiation.
2. The differentiation step along the I-th axis is equal to DiffStep*S[I]
   where S[] is a scaling vector which can be set by a minmosetscale()
   call.
3. We recommend you to use moderate values of the differentiation step.
   Too large a step means too large TRUNCATION errors, whilst too small a
   step means too large NUMERICAL errors. 1.0E-4 can be a good value to
   start from for a unit-scaled problem.
4. Numerical differentiation is very inefficient - one gradient
   calculation needs 4*N function evaluations. This function will work
   for any N - either small (1...10), moderate (10...100) or large
   (100...). However, the performance penalty will be too severe for any
   N except small ones.
   We should also say that code which relies on numerical differentiation
   is less robust and precise. An imprecise gradient may slow down
   convergence, especially on highly nonlinear problems. Thus we
   recommend using this function for fast prototyping on small-
   dimensional problems only, and implementing an analytical gradient as
   soon as possible.

-- ALGLIB --
   Copyright 01.03.2023 by Bochkanov Sergey
*************************************************************************/
void minmocreatef(const ae_int_t n, const ae_int_t m, const real_1d_array &x, const double diffstep, minmostate &state, const xparams _xparams = alglib::xdefault);
void minmocreatef(const ae_int_t m, const real_1d_array &x, const double diffstep, minmostate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function provides a reverse communication interface, which is not
documented and not recommended for use.

See below for functions which provide a better documented API.
*************************************************************************/
bool minmoiteration(minmostate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This family of functions is used to start iterations of the nonlinear
optimizer.

These functions accept the following parameters:
    state   -   algorithm state
    fvec    -   callback which calculates the function vector fi[] at a
                given point x
    jac     -   callback which calculates the function vector fi[] and
                the Jacobian jac at a given point x
    rep     -   optional callback which is called after each iteration,
                can be NULL
    ptr     -   optional pointer which is passed to func/grad/hess/jac/rep,
                can be NULL

NOTES:

1. This function has two different implementations: one which uses the
   exact (analytical) user-supplied Jacobian, and one which uses only the
   function vector and numerically differentiates the function in order
   to obtain the gradient.

   Depending on the specific function used to create the optimizer object
   you should choose the appropriate variant of MinMOOptimize() - one
   which needs the function vector AND Jacobian, or one which needs ONLY
   the function.

   Be careful to choose the variant of MinMOOptimize() which corresponds
   to your optimization scheme! The table below lists different
   combinations of the callbacks (function/gradient) passed to
   MinMOOptimize() and the specific function used to create the optimizer.

                     |       USER PASSED TO MinMOOptimize()
   CREATED WITH      |  function only   |  function and gradient
   ------------------------------------------------------------
   MinMOCreateF()    |      works       |         FAILS
   MinMOCreate()     |      FAILS       |         works

   Here "FAILS" denotes inappropriate combinations of the optimizer
   creation function and the MinMOOptimize() version. An attempt to use
   such a combination will lead to an exception. Either you did not pass
   the gradient when it WAS needed, or you passed the gradient when it
   was NOT needed.

-- ALGLIB --
   Copyright 01.03.2023 by Bochkanov Sergey
*************************************************************************/
void minmooptimize(minmostate &state, void (*fvec)(const real_1d_array &x, real_1d_array &fi, void *ptr), void (*rep)(const real_1d_array &x, double func, void *ptr) = NULL, void *ptr = NULL, const xparams _xparams = alglib::xdefault);
void minmooptimize(minmostate &state, void (*jac)(const real_1d_array &x, real_1d_array &fi, real_2d_array &jac, void *ptr), void (*rep)(const real_1d_array &x, double func, void *ptr) = NULL, void *ptr = NULL, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This subroutine submits a request for termination of the running
optimizer. It should be called from the user-supplied callback when the
user decides that it is time to "smoothly" terminate the optimization
process, or from some other thread. As a result, the optimizer stops at
the state which was "current accepted" when the termination request was
submitted, and returns error code 8 (successful termination). Usually
this results in an incomplete Pareto front being returned.

INPUT PARAMETERS:
    State   -   optimizer structure

NOTE: after the request for termination the optimizer may perform several
      additional calls to the user-supplied callbacks. It does NOT
      guarantee to stop immediately - it just guarantees that these
      additional calls will be discarded later.

NOTE: calling this function on an optimizer which is NOT running will
      have no effect.

NOTE: multiple calls to this function are possible. The first call is
      counted, subsequent calls are silently ignored.

-- ALGLIB --
   Copyright 01.03.2023 by Bochkanov Sergey
*************************************************************************/
void minmorequesttermination(minmostate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This subroutine restarts the algorithm from a new point. All optimization
parameters (including constraints) are left unchanged. This function
allows you to solve multiple optimization problems (which must have the
same number of dimensions) without an object reallocation penalty.

INPUT PARAMETERS:
    State   -   structure previously allocated with a MinMOCreate call.
    X       -   new starting point.

-- ALGLIB --
   Copyright 01.03.2023 by Bochkanov Sergey
*************************************************************************/
void minmorestartfrom(minmostate &state, const real_1d_array &x, const xparams _xparams = alglib::xdefault);
/*************************************************************************
MinMO results: the solution found, completion codes and additional
information.

INPUT PARAMETERS:
    State   -   algorithm state

OUTPUT PARAMETERS:
    ParetoFront -   array[FrontSize,N+M], approximate Pareto front.
                Its columns have the following structure:
                * first N columns are variable values
                * next M columns are objectives at these points
                Its rows have the following structure:
                * first M rows contain solutions to the single-objective
                  tasks, with the I-th row storing the result for the
                  I-th objective being minimized ignoring the other ones.
                  Thus, ParetoFront[I,N+I] for 0<=I<M stores the
                  so-called 'ideal objective vector'.
                * subsequent FrontSize-M rows store variables/objectives
                  at various randomly and nearly uniformly sampled
                  locations of the Pareto front.
    FrontSize - front size, >=0.
                * no larger than the number passed to setalgo()
                * for a single-objective task, FrontSize=1 is ALWAYS
                  returned, no matter what was specified during the
                  setalgo() call.
                * if the solver was prematurely terminated with
                  minmorequesttermination(), an incomplete Pareto front
                  will be returned (it may even have fewer than M rows)
                * if a failure (negative completion code) was signaled,
                  FrontSize=0 will be returned
    Rep     -   optimization report, contains information about the
                completion code, constraint violation at the solution and
                so on.
                You should check rep.terminationtype in order to
                distinguish successful termination from an unsuccessful
                one:

                === FAILURE CODES ===
                * -8    internal integrity control detected infinite or
                        NAN values in the function/gradient. Abnormal
                        termination signalled.
                * -3    constraint bounds are infeasible, i.e. we have a
                        box/linear/nonlinear constraint with two bounds
                        present, the lower one being greater than the
                        upper one. Note: less obvious infeasibilities of
                        constraints do NOT trigger emergency completion;
                        you have to examine rep.bcerr/rep.lcerr/rep.nlcerr
                        to detect possibly inconsistent constraints.

                === SUCCESS CODES ===
                *  2    scaled step is no more than EpsX.
                *  5    MaxIts steps were taken.
                *  8    user requested algorithm termination via
                        minmorequesttermination(); the last accepted
                        point is returned.

                More information about the fields of this structure can
                be found in the comments on the minmoreport datatype.

-- ALGLIB --
   Copyright 01.03.2023 by Bochkanov Sergey
*************************************************************************/
void minmoresults(const minmostate &state, real_2d_array &paretofront, ae_int_t &frontsize, minmoreport &rep, const xparams _xparams = alglib::xdefault);


/*************************************************************************
Use the NBI (Normal Boundary Intersection) algorithm for multiobjective
optimization.

NBI is a simple yet powerful multiobjective optimization algorithm that
has the following attractive properties:
* it generates nearly uniformly distributed Pareto points
* it is applicable to problems with more than 2 objectives
* it naturally supports a mix of box, linear and nonlinear constraints
* it is less sensitive to bad scaling of the targets

The only drawback of the algorithm is that for more than 2 objectives it
can miss some small parts of the Pareto front that are located near its
boundaries.

INPUT PARAMETERS:
    State       -   structure which stores algorithm state
    FrontSize   -   desired Pareto front size, FrontSize>=M, where M is
                    the objectives count
    PolishSolutions -   whether an additional solution-improving phase is
                    needed or not:
                    * if False, the original NBI as formulated by Das and
                      Dennis is used. It quickly produces good solutions,
                      but these solutions can be suboptimal (usually
                      within 0.1% of the optimal values). The reason is
                      that the original NBI formulation does not account
                      for degeneracies that allow significant progress
                      for one objective with no deterioration for the
                      other objectives.
                    * if True, the original NBI is followed by an
                      additional solution-polishing phase. This solver
                      mode is several times slower than the original NBI,
                      but produces better solutions.

-- ALGLIB --
   Copyright 20.03.2023 by Bochkanov Sergey
*************************************************************************/
void minmosetalgonbi(minmostate &state, const ae_int_t frontsize, const bool polishsolutions, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This function sets boundary constraints for the MO optimizer.

Boundary constraints are inactive by default (after initial creation).
They are preserved after algorithm restart with MinMORestartFrom().

You may combine boundary constraints with general linear ones - and with
nonlinear ones! Boundary constraints are handled more efficiently than
the other types. Thus, if your problem has mixed constraints, you may
explicitly specify some of them as boundary ones and save some
time/space.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    BndL    -   lower bounds, array[N]. If some (all) variables are
                unbounded, you may specify a very small number or -INF.
    BndU    -   upper bounds, array[N]. If some (all) variables are
                unbounded, you may specify a very large number or +INF.

NOTE 1: it is possible to specify BndL[i]=BndU[i]. In this case the I-th
        variable will be "frozen" at X[i]=BndL[i]=BndU[i].

-- ALGLIB --
   Copyright 01.03.2023 by Bochkanov Sergey
*************************************************************************/
void minmosetbc(minmostate &state, const real_1d_array &bndl, const real_1d_array &bndu, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets stopping conditions for the inner iterations of the
optimizer.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    EpsX    -   >=0. The subroutine finishes its work if on the k+1-th
                iteration the condition |v|<=EpsX is fulfilled, where:
                * |.| means Euclidean norm
                * v - scaled step vector, v[i]=dx[i]/s[i]
                * dx - step vector, dx=X(k+1)-X(k)
                * s - scaling coefficients set by MinMOSetScale()
    MaxIts  -   maximum number of iterations. If MaxIts=0, the number of
                iterations is unlimited.

Passing EpsX=0 and MaxIts=0 (simultaneously) will lead to an automatic
selection of the stopping condition.

-- ALGLIB --
   Copyright 01.03.2023 by Bochkanov Sergey
*************************************************************************/
void minmosetcond(minmostate &state, const double epsx, const ae_int_t maxits, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets two-sided linear constraints AL <= A*x <= AU with a
sparse constraining matrix A. Recommended for large-scale problems.

This function overwrites linear (non-box) constraints set by previous
calls (if such calls were made).

INPUT PARAMETERS:
    State   -   structure previously allocated with minmocreate() call.
    A       -   sparse matrix with size [K,N] (exactly!). Each row of A
                represents one general linear constraint. A can be stored
                in any sparse storage format.
    AL, AU  -   lower and upper bounds, array[K];
                * AL[i]=AU[i] => equality constraint Ai*x
                * AL[i]<AU[i] => two-sided constraint AL[i]<=Ai*x<=AU[i]
                * AL[i]=-INF  => one-sided constraint Ai*x<=AU[i]
                * AU[i]=+INF  => one-sided constraint AL[i]<=Ai*x
                * AL[i]=-INF, AU[i]=+INF => constraint is ignored
    K       -   number of equality/inequality constraints, K>=0. If K=0
                is specified, A, AL, AU are ignored.

-- ALGLIB --
   Copyright 01.11.2019 by Bochkanov Sergey
*************************************************************************/
void minmosetlc2(minmostate &state, const sparsematrix &a, const real_1d_array &al, const real_1d_array &au, const ae_int_t k, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets two-sided linear constraints AL <= A*x <= AU with a
dense constraint matrix A.

NOTE: knowing that the constraint matrix is dense may help some MO
      solvers to utilize efficient dense Level 3 BLAS for the dense parts
      of the problem. If your problem has both dense and sparse
      constraints, you can use the minmosetlc2mixed() function.

INPUT PARAMETERS:
    State   -   structure previously allocated with minmocreate() call.
    A       -   linear constraints, array[K,N]. Each row of A represents
                one constraint. One-sided inequality constraints, two-
                sided inequality constraints and equality constraints are
                supported (see below)
    AL, AU  -   lower and upper bounds, array[K];
                * AL[i]=AU[i] => equality constraint Ai*x
                * AL[i]<AU[i] => two-sided constraint AL[i]<=Ai*x<=AU[i]
                * AL[i]=-INF  => one-sided constraint Ai*x<=AU[i]
                * AU[i]=+INF  => one-sided constraint AL[i]<=Ai*x
                * AL[i]=-INF, AU[i]=+INF => constraint is ignored
    K       -   number of equality/inequality constraints, K>=0; if not
                given, inferred from the sizes of A, AL, AU.

-- ALGLIB --
   Copyright 01.03.2023 by Bochkanov Sergey
*************************************************************************/
void minmosetlc2dense(minmostate &state, const real_2d_array &a, const real_1d_array &al, const real_1d_array &au, const ae_int_t k, const xparams _xparams = alglib::xdefault);
void minmosetlc2dense(minmostate &state, const real_2d_array &a, const real_1d_array &al, const real_1d_array &au, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets two-sided linear constraints AL <= A*x <= AU with a
mixed constraining matrix A including a sparse part (first SparseK rows)
and a dense part (last DenseK rows). Recommended for large-scale
problems.

This function overwrites linear (non-box) constraints set by previous
calls (if such calls were made).

This function may be useful if the constraint matrix includes a large
number of both types of rows - dense and sparse. If you have just a few
sparse rows, you may represent them in dense format without losing
performance. Similarly, if you have just a few dense rows, you can store
them in the sparse format with almost the same performance.

INPUT PARAMETERS:
    State   -   structure previously allocated with minmocreate() call.
    SparseA -   sparse matrix with size [SparseK,N] (exactly!). Each row
                of SparseA represents one general linear constraint.
                SparseA can be stored in any sparse storage format.
    SparseK -   number of sparse constraints, SparseK>=0
    DenseA  -   linear constraints, array[DenseK,N], set of dense
                constraints. Each row of DenseA represents one general
                linear constraint.
    DenseK  -   number of dense constraints, DenseK>=0
    AL, AU  -   lower and upper bounds, array[SparseK+DenseK], with the
                first SparseK elements corresponding to sparse
                constraints, and the last DenseK elements corresponding
                to dense constraints;
                * AL[i]=AU[i] => equality constraint Ai*x
                * AL[i]<AU[i] => two-sided constraint AL[i]<=Ai*x<=AU[i]
                * AL[i]=-INF  => one-sided constraint Ai*x<=AU[i]
                * AU[i]=+INF  => one-sided constraint AL[i]<=Ai*x
                * AL[i]=-INF, AU[i]=+INF => constraint is ignored

-- ALGLIB --
   Copyright 01.11.2019 by Bochkanov Sergey
*************************************************************************/
void minmosetlc2mixed(minmostate &state, const sparsematrix &sparsea, const ae_int_t ksparse, const real_2d_array &densea, const ae_int_t kdense, const real_1d_array &al, const real_1d_array &au, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets two-sided nonlinear constraints for the MinMO
optimizer.

In fact, this function sets only the constraints COUNT and their BOUNDS.
The constraints themselves (constraint functions) are passed to the
MinMOOptimize() method as callbacks.

MinMOOptimize() accepts a user-defined vector function F[] and its
Jacobian J[], where:
* the first M components of F[] and the first M rows of J[] correspond
  to the multiple objectives
* the subsequent NNLC components of F[] (and rows of J[]) correspond to
  the two-sided nonlinear constraints NL<=C(x)<=NU, where
  * NL[i]=NU[i]  => the I-th row is an equality constraint Ci(x)=NL[i]
  * NL[i]<NU[i]  => the I-th row is a two-sided constraint
                    NL[i]<=Ci(x)<=NU[i]
  * NL[i]=-INF   => the I-th row is a one-sided constraint Ci(x)<=NU[i]
  * NU[i]=+INF   => the I-th row is a one-sided constraint NL[i]<=Ci(x)
  * NL[i]=-INF, NU[i]=+INF => the constraint is ignored

NOTE: you may combine nonlinear constraints with linear/boundary ones. If
      your problem has mixed constraints, you may explicitly specify some
      of them as linear or box ones. It helps the optimizer to handle
      them more efficiently.

INPUT PARAMETERS:
    State   -   structure previously allocated with a MinMOCreate call.
    NL      -   array[NNLC], lower bounds, can contain -INF
    NU      -   array[NNLC], upper bounds, can contain +INF
    NNLC    -   constraints count, NNLC>=0

NOTE 1: nonlinear constraints are satisfied only approximately! It is
        possible that the algorithm will evaluate the function outside of
        the feasible area!

NOTE 2: the algorithm scales variables according to the scale specified
        by the MinMOSetScale() function, so it can handle problems with
        badly scaled variables (as long as we KNOW their scales).
        However, there is no way to automatically scale nonlinear
        constraints. Inappropriate scaling of nonlinear constraints may
        ruin convergence. Solving a problem with the constraint
        "1000*G0(x)=0" is NOT the same as solving it with the constraint
        "0.001*G0(x)=0".

        It means that YOU are the one who is responsible for the correct
        scaling of the nonlinear constraints Ci(x). We recommend you to
        scale nonlinear constraints in such a way that the Jacobian rows
        have approximately unit magnitude (for problems with unit scale)
        or have magnitude approximately equal to 1/S[i] (where S is a
        scale set by the MinMOSetScale() function).

-- ALGLIB --
   Copyright 01.03.2023 by Bochkanov Sergey
*************************************************************************/
void minmosetnlc2(minmostate &state, const real_1d_array &nl, const real_1d_array &nu, const ae_int_t nnlc, const xparams _xparams = alglib::xdefault);


/*************************************************************************
This function sets scaling coefficients for the MO optimizer.

ALGLIB optimizers use scaling matrices to test stopping conditions (the
step size and gradient are scaled before comparison with tolerances).
The scale of the I-th variable is a translation-invariant measure of:
a) "how large" the variable is
b) how large a step should be to make significant changes in the function

Scaling is also used by the finite difference variant of the optimizer -
the step along the I-th axis is equal to DiffStep*S[I].

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    S       -   array[N], non-zero scaling coefficients. S[i] may be
                negative, the sign doesn't matter.

-- ALGLIB --
   Copyright 01.03.2023 by Bochkanov Sergey
*************************************************************************/
void minmosetscale(minmostate &state, const real_1d_array &s, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function turns on/off reporting of the Pareto front points.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    NeedXRep-   whether iteration reports are needed or not

If NeedXRep is True, the algorithm will call the rep() callback function
(if it was provided to MinMOOptimize) every time a Pareto front point is
found.

NOTE: according to the communication protocol used by ALGLIB, the solver
      passes two parameters to the rep() callback - the current point and
      the target value at the current point. However, because we solve a
      multi-objective problem, the target parameter is not used and is
      set to zero.

-- ALGLIB --
   Copyright 01.03.2023 by Bochkanov Sergey
*************************************************************************/
void minmosetxrep(minmostate &state, const bool needxrep, const xparams _xparams = alglib::xdefault);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void  multiobjective2_jac(const real_1d_array &x, real_1d_array &fi, real_2d_array &jac, void *ptr)
{
    //
    // this callback calculates the bi-objective target
    //
    //     f0(x0,x1) = x0^2 + (x1-1)^2
    //     f1(x0,x1) = (x0-1)^2 + x1^2
    //
    // and Jacobian matrix J = [dfi/dxj]
    //
    fi[0] = x[0]*x[0]+(x[1]-1)*(x[1]-1);
    fi[1] = (x[0]-1)*(x[0]-1)+x[1]*x[1];
    jac[0][0] = 2*x[0];
    jac[0][1] = 2*(x[1]-1);
    jac[1][0] = 2*(x[0]-1);
    jac[1][1] = 2*x[1];
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of two targets
        //
        //     f0(x0,x1) = x0^2 + (x1-1)^2
        //     f1(x0,x1) = (x0-1)^2 + x1^2
        //
        // These targets are Euclidean distances to (0,1) and (1,0) respectively, thus solutions
        // to this problem occupy the straight line segment connecting these points. (Points
        // outside of the line are Pareto non-optimal because one can always decrease both distances
        // by moving closer to the line).
        //
        ae_int_t nvars = 2;
        ae_int_t nobjectives = 2;
        real_1d_array x0 = "[0,0]";
        ae_int_t frontsize = 10;
        bool polishsolutions = true;
        minmostate state;
        minmocreate(nvars, nobjectives, x0, state);

        //
        // The solver is configured to compute 10 points approximating the Pareto front,
        // and to polish solutions (i.e. use an additional optimization phase that improves
        // accuracy on degenerate problems; not actually necessary for this simple example).
        //
        minmosetalgonbi(state, frontsize, polishsolutions);

        //
        // Optimize and test results.
        //
        // The optimization is performed using analytic (user-provided) Jacobian matrix.
        // Use minmocreatef(), if you do not know analytic form of the Jacobian and want
        // ALGLIB to perform numerical differentiation.
        //
        // We requested 10 Pareto-optimal points and we expect solver to compute all points
        // (it is possible to return less if the solver was terminated)
        //
        minmoreport rep;
        real_2d_array paretofront;
        alglib::minmooptimize(state, multiobjective2_jac);
        minmoresults(state, paretofront, frontsize, rep);
        printf("%d\n", int(frontsize)); // EXPECTED: 10
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void  multiobjective2constr_jac(const real_1d_array &x, real_1d_array &fi, real_2d_array &jac, void *ptr)
{
    //
    // this callback calculates the bi-objective target
    //
    //     f0(x0,x1) = x0^2 + (x1-1)^2
    //     f1(x0,x1) = (x0-1)^2 + x1^2
    //
    // nonlinear constraint function
    //
    //     f2(x0,x1) = x0^2 + x1^2
    //
    // and Jacobian matrix J = [dfi/dxj]
    //
    fi[0] = x[0]*x[0]+(x[1]-1)*(x[1]-1);
    fi[1] = (x[0]-1)*(x[0]-1)+x[1]*x[1];
    fi[2] = x[0]*x[0]+x[1]*x[1];
    jac[0][0] = 2*x[0];
    jac[0][1] = 2*(x[1]-1);
    jac[1][0] = 2*(x[0]-1);
    jac[1][1] = 2*x[1];
    jac[2][0] = 2*x[0];
    jac[2][1] = 2*x[1];
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of two targets
        //
        //     f0(x0,x1) = x0^2 + (x1-1)^2
        //     f1(x0,x1) = (x0-1)^2 + x1^2
        //
        // subject to a nonlinear constraint
        //
        //     f2(x0,x1) = x0^2 + x1^2 >= 1
        //
        // These targets are Euclidean distances to (0,1) and (1,0) respectively, thus solutions to this
        // problem would occupy the straight line segment connecting these points. However, due to
        // the nonlinear constraint, the Pareto front has a different shape.
        //
        ae_int_t nvars = 2;
        ae_int_t nobjectives = 2;
        real_1d_array x0 = "[0,0]";
        ae_int_t frontsize = 10;
        bool polishsolutions = true;
        real_1d_array lowerbnd = "[1]";
        real_1d_array upperbnd = "[+inf]";
        minmostate state;
        minmocreate(nvars, nobjectives, x0, state);
        minmosetnlc2(state, lowerbnd, upperbnd, 1);

        //
        // The solver is configured to compute 10 points approximating the Pareto front,
        // and to polish solutions (i.e. use an additional optimization phase that improves
        // accuracy on degenerate problems; not actually necessary for this simple example).
        //
        minmosetalgonbi(state, frontsize, polishsolutions);

        //
        // Optimize and test results.
        //
        // The optimization is performed using analytic (user-provided) Jacobian matrix.
        // Use minmocreatef() if you do not know the analytic form of the Jacobian and want
        // ALGLIB to perform numerical differentiation.
        //
        // We requested 10 Pareto-optimal points and expect the solver to compute all of them
        // (fewer points may be returned if the solver was terminated early)
        //
        minmoreport rep;
        real_2d_array paretofront;
        alglib::minmooptimize(state, multiobjective2constr_jac);
        minmoresults(state, paretofront, frontsize, rep);
        printf("%d\n", int(frontsize)); // EXPECTED: 10
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

minnlcreport
minnlcstate
minnlcaddlc2
minnlcaddlc2dense
minnlcaddlc2sparsefromdense
minnlccreate
minnlccreatebuf
minnlccreatef
minnlccreatefbuf
minnlciteration
minnlcoptguardgradient
minnlcoptguardnonc1test0results
minnlcoptguardnonc1test1results
minnlcoptguardresults
minnlcoptguardsmoothness
minnlcoptimize
minnlcrequesttermination
minnlcrestartfrom
minnlcresults
minnlcresultsbuf
minnlcsetalgoaul2
minnlcsetalgoorbit
minnlcsetalgosl1qp
minnlcsetalgosl1qpbfgs
minnlcsetalgosqp
minnlcsetalgosqpbfgs
minnlcsetbc
minnlcsetcond
minnlcsetcond3
minnlcsetlc
minnlcsetlc2
minnlcsetlc2dense
minnlcsetlc2mixed
minnlcsetnlc
minnlcsetnlc2
minnlcsetnumdiff
minnlcsetscale
minnlcsetstpmax
minnlcsetxrep
minnlc_d_equality Nonlinearly constrained optimization (equality constraints)
minnlc_d_inequality Nonlinearly constrained optimization (inequality constraints)
minnlc_d_mixed Nonlinearly constrained optimization with mixed equality/inequality constraints
minnlc_d_numdiff Nonlinearly constrained optimization with numerical differentiation
minnlc_d_sparse Nonlinearly constrained optimization with sparse Jacobian
minnlc_modelbased Nonlinearly constrained optimization for expensive objectives/constraints using surrogate models
/************************************************************************* These fields store optimization report: * f objective value at the solution * iterationscount total number of inner iterations * nfev number of gradient evaluations * terminationtype termination type (see below) Scaled constraint violations are reported: * bcerr maximum violation of the box constraints * bcidx index of the most violated box constraint (or -1, if all box constraints are satisfied or there is no box constraint) * lcerr maximum violation of the linear constraints, computed as the maximum scaled distance between the final point and the constraint boundary. * lcidx index of the most violated linear constraint (or -1, if all constraints are satisfied or there are no general linear constraints) * nlcerr maximum violation of the nonlinear constraints * nlcidx index of the most violated nonlinear constraint (or -1, if all constraints are satisfied or there are no nonlinear constraints) Violations of box constraints are scaled on a per-component basis according to the scale vector s[] as specified by minnlcsetscale(). Violations of the general linear constraints are also computed using user-supplied variable scaling. Violations of nonlinear constraints are computed "as is". LAGRANGE COEFFICIENTS The SQP solver (the one activated by setalgosqp()/setalgosqpbfgs(), but not the legacy functions) sets the following fields (other solvers fill them with zeros): * lagbc[] array[N], Lagrange multipliers for box constraints. IMPORTANT: COEFFICIENTS FOR FIXED VARIABLES ARE SET TO ZERO. See below for an explanation. This parameter stores the same results independently of whether an analytic gradient is provided or numerical differentiation is used. * lagbcnz[] array[N], Lagrange multipliers for box constraints, behaves differently depending on whether an analytic gradient is provided or numerical differentiation is used: * for an analytic Jacobian, lagbcnz[] contains correct coefficients for all kinds of variables - fixed or not. 
* for a numerical Jacobian, it is the same as lagbc[], i.e. components corresponding to fixed vars are zero. See below for an explanation. * laglc[] array[Mlin], coeffs for linear constraints * lagnlc[] array[Mnlc], coeffs for nonlinear constraints A positive Lagrange coefficient means that the constraint is at its upper bound. A negative coefficient means that the constraint is at its lower bound. It is expected that at the solution the dual feasibility condition holds: grad + SUM(Ei*LagBC[i],i=0..n-1) + SUM(Ai*LagLC[i],i=0..mlin-1) + SUM(Ni*LagNLC[i],i=0..mnlc-1) ~ 0 (except for fixed variables which are handled specially) where * grad is the gradient at the solution * Ei is a vector with 1.0 at position I and 0 in other positions * Ai is the I-th row of the linear constraint matrix * Ni is the gradient of the I-th nonlinear constraint Fixed variables have two sets of Lagrange multipliers for the following reasons: * analytic gradient and numerical gradient behave differently for fixed vars. Numerical differentiation does not violate box constraints, thus gradient components corresponding to fixed vars are zero because we have no way of differentiating for these vars without violating box constraints. Contrary to that, the analytic gradient usually returns correct values even for fixed vars. * ideally, we would like the numerical gradient to be an almost perfect replacement for an analytic one. Thus, we need Lagrange multipliers which do not change when we change the gradient type. * on the other hand, we do not want to lose the possibility of having a full set of Lagrange multipliers for problems with an analytic gradient. Thus, there is a special field lagbcnz[] whose contents depend on the information available to us. TERMINATION CODES The TerminationType field contains the completion code, which can be either a FAILURE code, a SUCCESS code, or a SUCCESS code + an ADDITIONAL code. The latter option is used for more detailed reporting. 
=== FAILURE CODE === -8 internal integrity control detected infinite or NAN values in function/gradient, recovery was impossible. Abnormal termination signaled. -3 box constraints are infeasible. Note: infeasibility of non-box constraints does NOT trigger emergency completion; you have to examine bcerr/lcerr/nlcerr to detect possibly inconsistent constraints. === SUCCESS CODE === 2 relative step is no more than EpsX. 5 MaxIts steps were taken 7 stopping conditions are too stringent, further improvement is impossible, X contains the best point found so far. 8 user requested algorithm termination via minnlcrequesttermination(), last accepted point is returned === ADDITIONAL CODES === * +800 if during algorithm execution the solver encountered NAN/INF values in the target or constraints but managed to recover by reducing the trust region radius, the solver returns one of the SUCCESS codes but adds +800 to the code. Other fields of this structure are not documented and should not be used! *************************************************************************/
class minnlcreport { public: minnlcreport(); minnlcreport(const minnlcreport &rhs); minnlcreport& operator=(const minnlcreport &rhs); virtual ~minnlcreport(); double f; ae_int_t iterationscount; ae_int_t nfev; ae_int_t terminationtype; double bcerr; ae_int_t bcidx; double lcerr; ae_int_t lcidx; double nlcerr; ae_int_t nlcidx; real_1d_array lagbc; real_1d_array lagbcnz; real_1d_array laglc; real_1d_array lagnlc; ae_int_t dbgphase0its; };
/************************************************************************* This object stores nonlinear optimizer state. You should use functions provided by MinNLC subpackage to work with this object *************************************************************************/
class minnlcstate { public: minnlcstate(); minnlcstate(const minnlcstate &rhs); minnlcstate& operator=(const minnlcstate &rhs); virtual ~minnlcstate(); };
/************************************************************************* This function appends two-sided linear constraint AL <= A*x <= AU to the list of currently present sparse constraints. Constraint is passed in compressed format: as list of non-zero entries of coefficient vector A. Such approach is more efficient than dense storage for highly sparse constraint vectors. INPUT PARAMETERS: State - structure previously allocated with minnlccreate() call. IdxA - array[NNZ], indexes of non-zero elements of A: * can be unsorted * can include duplicate indexes (corresponding entries of ValA[] will be summed) ValA - array[NNZ], values of non-zero elements of A NNZ - number of non-zero coefficients in A AL, AU - lower and upper bounds; * AL=AU => equality constraint A*x * AL<AU => two-sided constraint AL<=A*x<=AU * AL=-INF => one-sided constraint A*x<=AU * AU=+INF => one-sided constraint AL<=A*x * AL=-INF, AU=+INF => constraint is ignored -- ALGLIB -- Copyright 19.07.2018 by Bochkanov Sergey *************************************************************************/
void minnlcaddlc2(minnlcstate &state, const integer_1d_array &idxa, const real_1d_array &vala, const ae_int_t nnz, const double al, const double au, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function appends a two-sided linear constraint AL <= A*x <= AU to the matrix of dense constraints. INPUT PARAMETERS: State - structure previously allocated with minnlccreate() call. A - linear constraint coefficients, array[N], right side is NOT included. AL, AU - lower and upper bounds; * AL=AU => equality constraint A*x * AL<AU => two-sided constraint AL<=A*x<=AU * AL=-INF => one-sided constraint A*x<=AU * AU=+INF => one-sided constraint AL<=A*x * AL=-INF, AU=+INF => constraint is ignored -- ALGLIB -- Copyright 15.04.2024 by Bochkanov Sergey *************************************************************************/
void minnlcaddlc2dense(minnlcstate &state, const real_1d_array &a, const double al, const double au, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function appends two-sided linear constraint AL <= A*x <= AU to the list of currently present sparse constraints. Constraint vector A is passed as a dense array which is internally sparsified by this function. INPUT PARAMETERS: State - structure previously allocated with minnlccreate() call. DA - array[N], constraint vector AL, AU - lower and upper bounds; * AL=AU => equality constraint A*x * AL<AU => two-sided constraint AL<=A*x<=AU * AL=-INF => one-sided constraint A*x<=AU * AU=+INF => one-sided constraint AL<=A*x * AL=-INF, AU=+INF => constraint is ignored -- ALGLIB -- Copyright 19.07.2018 by Bochkanov Sergey *************************************************************************/
void minnlcaddlc2sparsefromdense(minnlcstate &state, const real_1d_array &da, const double al, const double au, const xparams _xparams = alglib::xdefault);
/************************************************************************* NONLINEARLY CONSTRAINED OPTIMIZATION DESCRIPTION: The subroutine minimizes a function F(x) of N arguments subject to any combination of: * bound constraints * linear inequality constraints * linear equality constraints * nonlinear equality constraints Gi(x)=0 * nonlinear inequality constraints Hi(x)<=0 REQUIREMENTS: * the user must provide a callback calculating F(), H(), G() - either both value and gradient, or merely a value (numerical differentiation will be used) * F(), G(), H() are continuously differentiable on the feasible set and its neighborhood * starting point X0, which can be infeasible USAGE: Below is a very brief outline of the MinNLC optimizer. We strongly recommend studying the examples in the ALGLIB Reference Manual and reading the ALGLIB User Guide: https://www.alglib.net/nonlinear-programming/ 1. The user initializes the solver with a minnlccreate() or minnlccreatef() (the latter is used for numerical differentiation) call and chooses which NLC solver to use. In the current release the following solvers can be used: * sparse large-scale filter-based SQP solver, recommended for problems of any size (from several variables to thousands of variables). Activated with the minnlcsetalgosqp() function. * dense SQP-BFGS solver, recommended for small-scale problems (up to several hundreds of variables) with a very expensive target function. Requires fewer function evaluations than SQP, but has a more expensive iteration. Activated with the minnlcsetalgosqpbfgs() function. * ORBIT, a model-based derivative-free solver that uses local RBF models to optimize expensive objectives. This solver is activated with the minnlcsetalgoorbit() function. * several other solvers, including legacy ones 2. 
[optional] user activates the OptGuard integrity checker which tries to detect possible errors in the user-supplied callbacks: * discontinuity/nonsmoothness of the target/nonlinear constraints * errors in the analytic gradient provided by user This feature is essential for early prototyping stages because it helps to catch common coding and problem statement errors. OptGuard can be activated with the following functions (one per each check performed): * minnlcoptguardsmoothness() * minnlcoptguardgradient() 3. User adds boundary and/or linear and/or nonlinear constraints by means of calling one of the following functions: a) minnlcsetbc() for boundary constraints b) minnlcsetlc2() for sparse two-sided linear constraints, minnlcsetlc2dense() for dense two-sided linear constraints, minnlcsetlc2mixed() for mixed sparse/dense two-sided linear constraints * minnlcaddlc2dense() to add one dense row to the dense constraint submatrix * minnlcaddlc2() to add one sparse row to the sparse constraint submatrix * minnlcaddlc2sparsefromdense() to add one sparse row (passed as a dense array) to the sparse constraint submatrix c) minnlcsetnlc2() for nonlinear constraints You may combine (a), (b) and (c) in one optimization problem. 4. User sets the scale of the variables with the minnlcsetscale() function. It is VERY important to set the scale of the variables, because nonlinearly constrained problems are hard to solve when variables are badly scaled. Knowing variable scales helps to check stopping criteria and precondition the solver. 5. User sets stopping conditions with minnlcsetcond3() or minnlcsetcond(). If the NLC solver uses an inner/outer iteration layout, this function sets stopping conditions for INNER iterations. 6. Finally, the user calls the minnlcoptimize() function which takes the algorithm state and a pointer (delegate, etc.) to a callback function which calculates F/G/H. 7. 
User calls minnlcresults() to get the solution; additionally you can retrieve the OptGuard report with minnlcoptguardresults(), and get a detailed report about purported errors in the target function with: * minnlcoptguardnonc1test0results() * minnlcoptguardnonc1test1results() 8. Optionally the user may call minnlcrestartfrom() to solve another problem with the same N but another starting point. minnlcrestartfrom() allows reusing an already initialized structure. INPUT PARAMETERS: N - problem dimension, N>0: * if given, only the leading N elements of X are used * if not given, automatically determined from the size of X X - starting point, array[N]: * it is better to set X to a feasible point * but X can be infeasible, in which case the algorithm will try to find a feasible point first, using X as an initial approximation. OUTPUT PARAMETERS: State - structure stores algorithm state -- ALGLIB -- Copyright 06.06.2014 by Bochkanov Sergey *************************************************************************/
void minnlccreate(const ae_int_t n, const real_1d_array &x, minnlcstate &state, const xparams _xparams = alglib::xdefault); void minnlccreate(const real_1d_array &x, minnlcstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  [5]  [6]  

/************************************************************************* Buffered version of minnlccreate() which reuses already allocated memory as much as possible. -- ALGLIB -- Copyright 06.10.2024 by Bochkanov Sergey *************************************************************************/
void minnlccreatebuf(const ae_int_t n, const real_1d_array &x, minnlcstate &state, const xparams _xparams = alglib::xdefault); void minnlccreatebuf(const real_1d_array &x, minnlcstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine is a finite difference variant of MinNLCCreate(). It uses finite differences in order to differentiate the target function. The description below contains information which is specific to this function only. We recommend reading the comments on MinNLCCreate() in order to get more information about creation of the NLC optimizer. CALLBACK PARALLELISM The MINNLC optimizer supports parallel numerical differentiation ('callback parallelism'). This feature, which is present in commercial ALGLIB editions, greatly accelerates optimization with numerical differentiation of expensive target functions. Callback parallelism is usually beneficial when computing a numerical gradient requires more than several milliseconds. In this case the job of computing individual gradient components can be split between multiple threads. Even inexpensive targets can benefit from parallelism, if you have many variables. The ALGLIB Reference Manual, 'Working with commercial version' section, tells how to activate callback parallelism for your programming language. INPUT PARAMETERS: N - problem dimension, N>0: * if given, only the leading N elements of X are used * if not given, automatically determined from the size of X X - starting point, array[N]: * it is better to set X to a feasible point * but X can be infeasible, in which case the algorithm will try to find a feasible point first, using X as an initial approximation. DiffStep- differentiation step, >0. By default, a 5-point formula is used (actually, only 4 function values per variable are used because the central one has a zero coefficient due to symmetry; that's why this formula is often called a 4-point one). It can be changed with the minnlcsetnumdiff() function. OUTPUT PARAMETERS: State - structure stores algorithm state NOTES: 1. the differentiation step along the I-th axis is equal to DiffStep*S[I] where S[] is a scaling vector which can be set by a MinNLCSetScale() call. 2. 
we recommend using moderate values of the differentiation step. Too large a step will result in too large TRUNCATION errors, while too small a step will result in too large NUMERICAL errors. 1.0E-4 can be a good value to start from. 3. Numerical differentiation is very inefficient - one gradient calculation needs ~N function evaluations. This function will work for any N - either small (1...10), moderate (10...100) or large (100...). However, the performance penalty will be too severe for all but small N. We should also say that code which relies on numerical differentiation is less robust and precise. An imprecise gradient may slow down convergence, especially on highly nonlinear problems or near the solution. Thus we recommend using this function for fast prototyping on small-dimensional problems only, and implementing an analytic gradient as soon as possible. -- ALGLIB -- Copyright 06.06.2014 by Bochkanov Sergey *************************************************************************/
void minnlccreatef(const ae_int_t n, const real_1d_array &x, const double diffstep, minnlcstate &state, const xparams _xparams = alglib::xdefault); void minnlccreatef(const real_1d_array &x, const double diffstep, minnlcstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  [5]  [6]  

/************************************************************************* Buffered version of minnlccreatef() which reuses already allocated memory as much as possible. -- ALGLIB -- Copyright 06.06.2014 by Bochkanov Sergey *************************************************************************/
void minnlccreatefbuf(const ae_int_t n, const real_1d_array &x, const double diffstep, minnlcstate &state, const xparams _xparams = alglib::xdefault); void minnlccreatefbuf(const real_1d_array &x, const double diffstep, minnlcstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function provides the reverse communication interface. The reverse communication interface is not documented and is not recommended for use. See below for functions which provide a better documented API *************************************************************************/
bool minnlciteration(minnlcstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function activates/deactivates verification of the user-supplied analytic gradient/Jacobian. Upon activation of this option the OptGuard integrity checker performs numerical differentiation of your target function (constraints) at the initial point (note: future versions may also perform the check at the final point) and compares the numerical gradient/Jacobian with the analytic one provided by you. If the difference is too large, an error flag is set and the optimization session continues. After the optimization session is over, you can retrieve the report which stores both gradients/Jacobians, and specific components highlighted as suspicious by the OptGuard. The primary OptGuard report can be retrieved with minnlcoptguardresults(). IMPORTANT: gradient check is a high-overhead option which will cost you about 3*N additional function evaluations. In many cases it may cost as much as the rest of the optimization session. YOU SHOULD NOT USE IT IN THE PRODUCTION CODE UNLESS YOU WANT TO CHECK DERIVATIVES PROVIDED BY SOME THIRD PARTY. NOTE: unlike the previous incarnation of the gradient checking code, OptGuard does NOT interrupt optimization even if it discovers a bad gradient. INPUT PARAMETERS: State - structure used to store algorithm state TestStep - verification step used for numerical differentiation: * TestStep=0 turns verification off * TestStep>0 activates verification You should carefully choose TestStep. A value which is too large (so large that the function behavior is non-cubic at this scale) will lead to false alarms. Too small a step will result in rounding errors dominating the numerical derivative. You may use different steps for different parameters by means of setting the scale with minnlcsetscale(). 
=== EXPLANATION ========================================================== In order to verify the gradient, the algorithm performs the following steps: * two trial steps are made to X[i]-TestStep*S[i] and X[i]+TestStep*S[i], where X[i] is the i-th component of the initial point and S[i] is the scale of the i-th parameter * F(X) is evaluated at these trial points * we perform one more evaluation in the middle point of the interval * we build a cubic model using function values and derivatives at the trial points and we compare its prediction with the actual value in the middle point -- ALGLIB -- Copyright 15.06.2014 by Bochkanov Sergey *************************************************************************/
void minnlcoptguardgradient(minnlcstate &state, const double teststep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* Detailed results of the OptGuard integrity check for nonsmoothness test #0 Nonsmoothness (non-C1) test #0 studies function values (not gradients!) obtained during line searches and monitors the behavior of the directional derivative estimate. This test is less powerful than test #1, but it does not depend on the gradient values and thus it is more robust against artifacts introduced by numerical differentiation. Two reports are returned: * a "strongest" one, corresponding to the line search which had the highest value of the nonsmoothness indicator * a "longest" one, corresponding to the line search which had more function evaluations, and thus is more detailed In both cases the following fields are returned: * positive - is TRUE when the test flagged a suspicious point; FALSE if the test did not notice anything (in the latter case the fields below are empty). * fidx - is an index of the function (0 for the target function, 1 or higher for nonlinear constraints) which is suspected of being "non-C1" * x0[], d[] - arrays of length N which store the initial point and direction for the line search (d[] can be normalized, but does not have to be) * stp[], f[] - arrays of length CNT which store step lengths and function values at these points; f[i] is evaluated at x0+stp[i]*d. * stpidxa, stpidxb - we suspect that the function violates C1 continuity between steps #stpidxa and #stpidxb (usually we have stpidxb=stpidxa+3, with the most likely position of the violation between stpidxa+1 and stpidxa+2). ========================================================================== = SHORTLY SPEAKING: build a 2D plot of (stp,f) and look at it - you will = see where C1 continuity is violated. 
========================================================================== INPUT PARAMETERS: state - algorithm state OUTPUT PARAMETERS: strrep - C1 test #0 "strong" report lngrep - C1 test #0 "long" report -- ALGLIB -- Copyright 21.11.2018 by Bochkanov Sergey *************************************************************************/
void minnlcoptguardnonc1test0results(const minnlcstate &state, optguardnonc1test0report &strrep, optguardnonc1test0report &lngrep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Detailed results of the OptGuard integrity check for nonsmoothness test #1 Nonsmoothness (non-C1) test #1 studies individual components of the gradient computed during the line search. When a precise analytic gradient is provided this test is more powerful than test #0 which works with function values and ignores the user-provided gradient. However, test #0 becomes more powerful when numerical differentiation is employed (in such cases test #1 detects higher levels of numerical noise and becomes too conservative). This test also tells the specific components of the gradient which violate C1 continuity, which makes it more informative than #0, which just tells that continuity is violated. Two reports are returned: * a "strongest" one, corresponding to the line search which had the highest value of the nonsmoothness indicator * a "longest" one, corresponding to the line search which had more function evaluations, and thus is more detailed In both cases the following fields are returned: * positive - is TRUE when the test flagged a suspicious point; FALSE if the test did not notice anything (in the latter case the fields below are empty). * fidx - is an index of the function (0 for the target function, 1 or higher for nonlinear constraints) which is suspected of being "non-C1" * vidx - is an index of the variable in [0,N) with a nonsmooth derivative * x0[], d[] - arrays of length N which store the initial point and direction for the line search (d[] can be normalized, but does not have to be) * stp[], g[] - arrays of length CNT which store step lengths and gradient values at these points; g[i] is evaluated at x0+stp[i]*d and contains the vidx-th component of the gradient. * stpidxa, stpidxb - we suspect that the function violates C1 continuity between steps #stpidxa and #stpidxb (usually we have stpidxb=stpidxa+3, with the most likely position of the violation between stpidxa+1 and stpidxa+2). 
========================================================================== = SHORTLY SPEAKING: build a 2D plot of (stp,g) and look at it - you will = see where C1 continuity is violated. ========================================================================== INPUT PARAMETERS: state - algorithm state OUTPUT PARAMETERS: strrep - C1 test #1 "strong" report lngrep - C1 test #1 "long" report -- ALGLIB -- Copyright 21.11.2018 by Bochkanov Sergey *************************************************************************/
void minnlcoptguardnonc1test1results(minnlcstate &state, optguardnonc1test1report &strrep, optguardnonc1test1report &lngrep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Results of the OptGuard integrity check, should be called after the optimization session is over. === PRIMARY REPORT ======================================================= OptGuard performs several checks which are intended to catch common errors in the implementation of a nonlinear function/gradient: * incorrect analytic gradient * discontinuous (non-C0) target functions (constraints) * nonsmooth (non-C1) target functions (constraints) Each of these checks is activated with the appropriate function: * minnlcoptguardgradient() for gradient verification * minnlcoptguardsmoothness() for C0/C1 checks The following flags are set when these errors are suspected: * rep.badgradsuspected, and additionally: * rep.badgradfidx for the specific function (Jacobian row) suspected * rep.badgradvidx for the specific variable (Jacobian column) suspected * rep.badgradxbase, a point where the gradient/Jacobian is tested * rep.badgraduser, the user-provided gradient/Jacobian * rep.badgradnum, the reference gradient/Jacobian obtained via numerical differentiation * rep.nonc0suspected, and additionally: * rep.nonc0fidx - an index of the specific function violating C0 continuity * rep.nonc1suspected, and additionally * rep.nonc1fidx - an index of the specific function violating C1 continuity Here function index 0 means the target function, index 1 or higher denotes nonlinear constraints. === ADDITIONAL REPORTS/LOGS ============================================== Several different tests are performed to catch C0/C1 errors; you can find out which specific test signaled the error by looking at: * rep.nonc0test0positive, for non-C0 test #0 * rep.nonc1test0positive, for non-C1 test #0 * rep.nonc1test1positive, for non-C1 test #1 Additional information (including line search logs) can be obtained by means of: * minnlcoptguardnonc1test0results() * minnlcoptguardnonc1test1results() which return detailed error reports, specific points where discontinuities were found, and so on. 
========================================================================== INPUT PARAMETERS: state - algorithm state OUTPUT PARAMETERS: rep - generic OptGuard report; more detailed reports can be retrieved with other functions. NOTE: false negatives (nonsmooth problems are not identified as nonsmooth ones) are possible although unlikely. The reason is that you need to make several evaluations around nonsmoothness in order to accumulate enough information about function curvature. Say, if you start right from the nonsmooth point, optimizer simply won't get enough data to understand what is going wrong before it terminates due to abrupt changes in the derivative. It is also possible that "unlucky" step will move us to the termination too quickly. Our current approach is to have less than 0.1% false negatives in our test examples (measured with multiple restarts from random points), and to have exactly 0% false positives. -- ALGLIB -- Copyright 21.11.2018 by Bochkanov Sergey *************************************************************************/
void minnlcoptguardresults(minnlcstate &state, optguardreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* This function activates/deactivates nonsmoothness monitoring option of the OptGuard integrity checker. Smoothness monitor silently observes solution process and tries to detect ill-posed problems, i.e. ones with: a) discontinuous target function (non-C0) and/or constraints b) nonsmooth target function (non-C1) and/or constraints Smoothness monitoring does NOT interrupt optimization even if it suspects that your problem is nonsmooth. It just sets corresponding flags in the OptGuard report which can be retrieved after optimization is over. Smoothness monitoring is a moderate overhead option which often adds less than 1% to the optimizer running time. Thus, you can use it even for large scale problems. NOTE: OptGuard does NOT guarantee that it will always detect C0/C1 continuity violations. First, minor errors are hard to catch - say, a 0.0001 difference in the model values at two sides of the gap may be due to discontinuity of the model - or simply because the model has changed. Second, C1-violations are especially difficult to detect in a noninvasive way. The optimizer usually performs very short steps near the nonsmoothness, and differentiation usually introduces a lot of numerical noise. It is hard to tell whether some tiny discontinuity in the slope is due to real nonsmoothness or just due to numerical noise alone. Our top priority was to avoid false positives, so in some rare cases minor errors may go unnoticed (however, in most cases they can be spotted with a restart from a different initial point). INPUT PARAMETERS: state - algorithm state level - monitoring level: * 0 - monitoring is disabled * 1 - noninvasive low-overhead monitoring; function values and/or gradients are recorded, but OptGuard does not try to perform additional evaluations in order to get more information about suspicious locations. 
This kind of monitoring does not work well with SQP because SQP solver needs just 1-2 function evaluations per step, which is not enough for OptGuard to make any conclusions. === EXPLANATION ========================================================== One major source of headache during optimization is the possibility of coding errors in the target function/constraints (or their gradients). Such errors most often manifest themselves as discontinuity or nonsmoothness of the target/constraints. Another frequent situation is when you try to optimize something involving lots of min() and max() operations, i.e. nonsmooth target. Although not a coding error, it is nonsmoothness anyway - and smooth optimizers usually stop right after encountering nonsmoothness, well before reaching solution. OptGuard integrity checker helps you to catch such situations: it monitors function values/gradients being passed to the optimizer and tries to detect errors. Upon discovering a suspicious pair of points it raises the appropriate flag (and allows you to continue optimization). When optimization is done, you can study OptGuard result. -- ALGLIB -- Copyright 21.11.2018 by Bochkanov Sergey *************************************************************************/
void minnlcoptguardsmoothness(minnlcstate &state, const ae_int_t level, const xparams _xparams = alglib::xdefault);
void minnlcoptguardsmoothness(minnlcstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* This family of functions is used to launch iterations of nonlinear optimizer These functions accept following parameters: state - algorithm state fvec - callback which calculates function vector fi[] at given point x jac - callback which calculates function vector fi[] and Jacobian jac at given point x sjac - callback which calculates function vector fi[] and sparse Jacobian sjac at given point x rep - optional callback which is called after each iteration can be NULL ptr - optional pointer which is passed to func/grad/hess/jac/rep can be NULL CALLBACK PARALLELISM The MINNLC optimizer supports parallel numerical differentiation ('callback parallelism'). This feature, which is present in commercial ALGLIB editions, greatly accelerates optimization with numerical differentiation of expensive target functions. Callback parallelism is usually beneficial when computing a numerical gradient requires more than several milliseconds. In this case the job of computing individual gradient components can be split between multiple threads. Even inexpensive targets can benefit from parallelism, if you have many variables. ALGLIB Reference Manual, 'Working with commercial version' section, tells how to activate callback parallelism for your programming language. NOTES: 1. This function has two different implementations: one which uses exact (analytical) user-supplied Jacobian, and one which uses only function vector and numerically differentiates function in order to obtain gradient. Depending on the specific function used to create optimizer object you should choose appropriate variant of MinNLCOptimize() - one which accepts function AND Jacobian or one which accepts ONLY function. Be careful to choose variant of MinNLCOptimize() which corresponds to your optimization scheme! 
Table below lists different combinations of callback (function/gradient) passed to MinNLCOptimize() and specific function used to create optimizer.

                    |         USER PASSED TO MinNLCOptimize()
   CREATED WITH     |  function only   |  function and gradient
   -------------------------------------------------------------
   MinNLCCreateF()  |      works       |         FAILS
   MinNLCCreate()   |      FAILS       |         works

Here "FAILS" denotes inappropriate combinations of optimizer creation function and MinNLCOptimize() version. Attempts to use such a combination will lead to an exception. Either you did not pass the gradient when it WAS needed, or you passed the gradient when it was NOT needed. -- ALGLIB -- Copyright 06.06.2014 by Bochkanov Sergey *************************************************************************/
void minnlcoptimize(minnlcstate &state, void (*fvec)(const real_1d_array &x, real_1d_array &fi, void *ptr), void (*rep)(const real_1d_array &x, double func, void *ptr) = NULL, void *ptr = NULL, const xparams _xparams = alglib::xdefault);
void minnlcoptimize(minnlcstate &state, void (*jac)(const real_1d_array &x, real_1d_array &fi, real_2d_array &jac, void *ptr), void (*rep)(const real_1d_array &x, double func, void *ptr) = NULL, void *ptr = NULL, const xparams _xparams = alglib::xdefault);
void minnlcoptimize(minnlcstate &state, void (*sjac)(const real_1d_array &x, real_1d_array &fi, sparsematrix &s, void *ptr), void (*rep)(const real_1d_array &x, double func, void *ptr) = NULL, void *ptr = NULL, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  [5]  [6]  

/************************************************************************* This subroutine submits request for termination of running optimizer. It should be called from user-supplied callback when user decides that it is time to "smoothly" terminate optimization process. As a result, optimizer stops at point which was "current accepted" when termination request was submitted and returns error code 8 (successful termination). INPUT PARAMETERS: State - optimizer structure NOTE: after request for termination optimizer may perform several additional calls to user-supplied callbacks. It does NOT guarantee to stop immediately - it just guarantees that these additional calls will be discarded later. NOTE: calling this function on optimizer which is NOT running will have no effect. NOTE: multiple calls to this function are possible. First call is counted, subsequent calls are silently ignored. -- ALGLIB -- Copyright 08.10.2014 by Bochkanov Sergey *************************************************************************/
void minnlcrequesttermination(minnlcstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine restarts algorithm from new point. All optimization parameters (including constraints) are left unchanged. This function allows to solve multiple optimization problems (which must have same number of dimensions) without object reallocation penalty. INPUT PARAMETERS: State - structure previously allocated with MinNLCCreate call. X - new starting point. -- ALGLIB -- Copyright 28.11.2010 by Bochkanov Sergey *************************************************************************/
void minnlcrestartfrom(minnlcstate &state, const real_1d_array &x, const xparams _xparams = alglib::xdefault);
/************************************************************************* MinNLC results: the solution found, completion codes and additional information. If you activated OptGuard integrity checking functionality and want to get OptGuard report, it can be retrieved with: * minnlcoptguardresults() - for a primary report about (a) suspected C0/C1 continuity violations and (b) errors in the analytic gradient. * minnlcoptguardnonc1test0results() - for C1 continuity violation test #0, detailed line search log * minnlcoptguardnonc1test1results() - for C1 continuity violation test #1, detailed line search log INPUT PARAMETERS: State - algorithm state OUTPUT PARAMETERS: X - array[0..N-1], solution Rep - optimization report, contains information about completion code, constraint violation at the solution and so on. rep.f contains objective value at the solution. You should check rep.terminationtype in order to distinguish successful termination from unsuccessful one: === FAILURE CODES === * -8 internal integrity control detected infinite or NAN values in function/gradient, recovery was impossible. Abnormal termination signalled. * -3 box constraints are infeasible. Note: infeasibility of non-box constraints does NOT trigger emergency completion; you have to examine rep.bcerr/rep.lcerr/rep.nlcerr to detect possibly inconsistent constraints. === SUCCESS CODES === * 2 scaled step is no more than EpsX. * 5 MaxIts steps were taken. * 8 user requested algorithm termination via minnlcrequesttermination(), last accepted point is returned. === ADDITIONAL CODES === * +800 if during algorithm execution the solver encountered NAN/INF values in the target or constraints but managed to recover by reducing trust region radius, the solver returns one of SUCCESS codes but adds +800 to the code. Some solvers (as of ALGLIB 4.02, only SQP) return Lagrange multipliers in rep.lagbc/lagbcnz, laglc, lagnlc fields. 
More information about fields of this structure can be found in the comments on the minnlcreport datatype. -- ALGLIB -- Copyright 18.01.2024 by Bochkanov Sergey *************************************************************************/
void minnlcresults(const minnlcstate &state, real_1d_array &x, minnlcreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  [5]  [6]  

/************************************************************************* NLC results Buffered implementation of MinNLCResults() which uses pre-allocated buffer to store X[]. If buffer size is too small, it resizes buffer. It is intended to be used in the inner cycles of performance critical algorithms where array reallocation penalty is too large to be ignored. -- ALGLIB -- Copyright 28.11.2010 by Bochkanov Sergey *************************************************************************/
void minnlcresultsbuf(const minnlcstate &state, real_1d_array &x, minnlcreport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function tells MinNLC unit to use the large-scale augmented Lagrangian algorithm for nonlinearly constrained optimization. This algorithm is a significant refactoring of one described in "A Modified Barrier-Augmented Lagrangian Method for Constrained Minimization (1999)" by D.GOLDFARB, R.POLYAK, K. SCHEINBERG, I.YUZEFOVICH with the following additions: * improved sparsity support * improved handling of large-scale problems with the low rank LBFGS-based sparse preconditioner * automatic selection of the penalty parameter Rho AUL solver can be significantly faster than SQP on easy problems due to cheaper iterations, although it needs more function evaluations. On large- scale sparse problems one iteration of the AUL solver usually costs tens of times less than one iteration of the SQP solver. However, the SQP solver is more robust than the AUL. In particular, it is much better at constraint enforcement and will never escape the feasible area after constraints were successfully enforced. It also needs much less target function evaluations. INPUT PARAMETERS: State - structure which stores algorithm state MaxOuterIts-upper limit on outer iterations count: * MaxOuterIts=0 means that the solver will automatically choose an upper limit. Recommended value. * MaxOuterIts>1 means that the AUL solver will perform at most the specified number of outer iterations -- ALGLIB -- Copyright 22.09.2023 by Bochkanov Sergey *************************************************************************/
void minnlcsetalgoaul2(minnlcstate &state, const ae_int_t maxouterits, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  [5]  [6]  

/************************************************************************* This function selects ORBIT solver, a model-based derivative-free solver for minimization of expensive derivative-free functions. The ORBIT algorithm by Wild and Shoemaker (2013) is an algorithm that uses objective function values to build a smooth RBF model (f=r^3) that is minimized over a trust region in order to identify step direction. The algorithm saves and reuses function values at all previously known points. ALGLIB added to the original algorithm the following modifications: * box, linear and nonlinear constraints * improved tolerance to noise in the objective/constraints Its intended area of application is a low-accuracy minimization of expensive objectives with no gradient available. It is expected that additional overhead of building and minimizing an RBF model is negligible when compared with the objective evaluation cost. Iteration overhead grows as O(N^3), so this solver is recommended for problems with N below 100. This algorithm has the following nice properties: * no parameters to tune * no convexity requirements for target function or constraints * the initial point can be infeasible * the algorithm respects box constraints in all intermediate points (it does not even evaluate the target outside of the box constrained area) * once linear and nonlinear constraints are enforced, the algorithm will try to respect them as much as possible. 
When compared with SQP solver, ORBIT: * is much faster than the finite-difference based serial SQP at early stages of optimization, being able to achieve 0.1-0.01 relative accuracy about 4x-10x faster than SQP solver * has slower asymptotic convergence on ill-conditioned problems, sometimes being unable to reduce error in objective or constraints below 1E-5 in a reasonable amount of time * has no obvious benefits over SQP with analytic gradient or highly parallelized (more than 10 cores) finite-difference SQP NOTE: whilst technically this algorithm supports callback parallelism, in practice it can't efficiently utilize parallel resources because it issues requests for objective/constraints in an inherently serial manner. INPUT PARAMETERS: State - structure which stores algorithm state Rad0 - initial sampling radius (multiplied by per-variable scales), Rad0>=0, zero value means automatic radius selection. An ideal value is large enough to allow significant progress by making a Rad0-sized step, but not too large (so that initial linear model well approximates the objective). Recommended values: 0.1 or 1 (assuming properly chosen variable scales). The solver can tolerate inappropriately chosen Rad0, at the expense of additional function evaluations needed to adjust it. MaxNFEV - MaxNFEV>=0, with zero value meaning no limit. This parameter allows to control computational budget (measured in function evaluations). It provides somewhat finer control than MaxIts parameter of minnlcsetcond(), which controls the maximum amount of iterations performed by the algorithm, with one iteration usually needing more than one function evaluation. The solver does not stop immediately after reaching MaxNFEV evaluations, but will stop shortly after that (usually within N+1 evaluations, often within 1-2). -- ALGLIB -- Copyright 02.10.2024 by Bochkanov Sergey *************************************************************************/
void minnlcsetalgoorbit(minnlcstate &state, const double rad0, const ae_int_t maxnfev, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This function selects a legacy solver: an L1 merit function based SQP with the sparse l-BFGS update. It is recommended to use either SQP or SQP-BFGS solvers instead of this one. These solvers use filters to provide much faster and more robust convergence. -- ALGLIB -- Copyright 02.12.2019 by Bochkanov Sergey *************************************************************************/
void minnlcsetalgosl1qp(minnlcstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function selects a legacy solver: an L1 merit function based SQP with the dense BFGS update. It is recommended to use either SQP or SQP-BFGS solvers instead of this one. These solvers use filters to provide much faster and more robust convergence. -- ALGLIB -- Copyright 02.12.2019 by Bochkanov Sergey *************************************************************************/
void minnlcsetalgosl1qpbfgs(minnlcstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function selects large-scale sparse filter-based SQP solver, the most robust solver in ALGLIB, a recommended option. This algorithm is scalable to problems with tens of thousands of variables and can efficiently handle sparsity of constraints. The convergence is proved for the following case: * function and constraints are continuously differentiable (C1 class) This algorithm has the following nice properties: * no parameters to tune * no convexity requirements for target function or constraints * the initial point can be infeasible * the algorithm respects box constraints in all intermediate points (it does not even evaluate the target outside of the box constrained area) * once linear constraints are enforced, the algorithm will not violate them * no such guarantees can be provided for nonlinear constraints, but once nonlinear constraints are enforced, the algorithm will try to respect them as much as possible * numerical differentiation does not violate box constraints (although general linear and nonlinear ones can be violated during differentiation) INPUT PARAMETERS: State - structure which stores algorithm state ===== TRACING SQP SOLVER ================================================= SQP solver supports advanced tracing capabilities. You can trace algorithm output by specifying following trace symbols (case-insensitive) by means of trace_file() call: * 'SQP' - for basic trace of algorithm steps and decisions. Only short scalars (function values and deltas) are printed. N-dimensional quantities like search directions are NOT printed. It also prints OptGuard integrity checker report when nonsmoothness of target/constraints is suspected. * 'SQP.DETAILED'- for output of points being visited and search directions This symbol also implicitly defines 'SQP'. 
You can control output format by additionally specifying: * nothing to output in 6-digit exponential format * 'PREC.E15' to output in 15-digit exponential format * 'PREC.F6' to output in 6-digit fixed-point format * 'SQP.PROBING' - to let algorithm insert additional function evaluations before line search in order to build human-readable chart of the raw Lagrangian (~40 additional function evaluations are performed for each line search). This symbol also implicitly defines 'SQP' and activates OptGuard integrity checker which detects continuity and smoothness violations. An OptGuard log is printed at the end of the file. By default trace is disabled and adds no overhead to the optimization process. However, specifying any of the symbols adds some formatting and output-related overhead. Specifying 'SQP.PROBING' adds even larger overhead due to additional function evaluations being performed. You may specify multiple symbols by separating them with commas: > alglib::trace_file("SQP,SQP.PROBING,PREC.F6", "path/to/trace.log") -- ALGLIB -- Copyright 02.12.2023 by Bochkanov Sergey *************************************************************************/
void minnlcsetalgosqp(minnlcstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function selects a special solver for low-dimensional problems with expensive target function - the dense filter-based SQP-BFGS solver. This algorithm uses a dense quadratic model of the target and solves a dense QP subproblem at each step. Thus, it has difficulties scaling beyond several hundreds of variables. However, it usually needs the smallest number of target evaluations - sometimes up to 30% less than the sparse large-scale filter-based SQP. The convergence is proved for the following case: * function and constraints are continuously differentiable (C1 class) This algorithm has the following nice properties: * no parameters to tune * no convexity requirements for target function or constraints * the initial point can be infeasible * the algorithm respects box constraints in all intermediate points (it does not even evaluate the target outside of the box constrained area) * once linear constraints are enforced, the algorithm will not violate them * no such guarantees can be provided for nonlinear constraints, but once nonlinear constraints are enforced, the algorithm will try to respect them as much as possible * numerical differentiation does not violate box constraints (although general linear and nonlinear ones can be violated during differentiation) INPUT PARAMETERS: State - structure which stores algorithm state ===== TRACING SQP SOLVER ================================================= SQP solver supports advanced tracing capabilities. You can trace algorithm output by specifying following trace symbols (case-insensitive) by means of trace_file() call: * 'SQP' - for basic trace of algorithm steps and decisions. Only short scalars (function values and deltas) are printed. N-dimensional quantities like search directions are NOT printed. It also prints OptGuard integrity checker report when nonsmoothness of target/constraints is suspected. 
* 'SQP.DETAILED'- for output of points being visited and search directions This symbol also implicitly defines 'SQP'. You can control output format by additionally specifying: * nothing to output in 6-digit exponential format * 'PREC.E15' to output in 15-digit exponential format * 'PREC.F6' to output in 6-digit fixed-point format * 'SQP.PROBING' - to let algorithm insert additional function evaluations before line search in order to build human-readable chart of the raw Lagrangian (~40 additional function evaluations are performed for each line search). This symbol also implicitly defines 'SQP' and activates OptGuard integrity checker which detects continuity and smoothness violations. An OptGuard log is printed at the end of the file. By default trace is disabled and adds no overhead to the optimization process. However, specifying any of the symbols adds some formatting and output-related overhead. Specifying 'SQP.PROBING' adds even larger overhead due to additional function evaluations being performed. You may specify multiple symbols by separating them with commas: > alglib::trace_file("SQP,SQP.PROBING,PREC.F6", "path/to/trace.log") -- ALGLIB -- Copyright 02.12.2023 by Bochkanov Sergey *************************************************************************/
void minnlcsetalgosqpbfgs(minnlcstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets boundary constraints for NLC optimizer. Boundary constraints are inactive by default (after initial creation). They are preserved after algorithm restart with MinNLCRestartFrom(). You may combine boundary constraints with general linear ones - and with nonlinear ones! Boundary constraints are handled more efficiently than other types. Thus, if your problem has mixed constraints, you may explicitly specify some of them as boundary and save some time/space. INPUT PARAMETERS: State - structure stores algorithm state BndL - lower bounds, array[N]. If some (all) variables are unbounded, you may specify very small number or -INF. BndU - upper bounds, array[N]. If some (all) variables are unbounded, you may specify very large number or +INF. NOTE 1: it is possible to specify BndL[i]=BndU[i]. In this case I-th variable will be "frozen" at X[i]=BndL[i]=BndU[i]. NOTE 2: when you solve your problem with augmented Lagrangian solver, boundary constraints are satisfied only approximately! It is possible that algorithm will evaluate function outside of feasible area! -- ALGLIB -- Copyright 06.06.2014 by Bochkanov Sergey *************************************************************************/
void minnlcsetbc(minnlcstate &state, const real_1d_array &bndl, const real_1d_array &bndu, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets stopping conditions for the optimizer. This function allows to set iterations limit and step-based stopping conditions. If you want the solver to stop upon having a small change in the target, use minnlcsetcond3() function. INPUT PARAMETERS: State - structure which stores algorithm state EpsX - >=0 The subroutine finishes its work if on k+1-th iteration the condition |v|<=EpsX is fulfilled, where: * |.| means Euclidian norm * v - scaled step vector, v[i]=dx[i]/s[i] * dx - step vector, dx=X(k+1)-X(k) * s - scaling coefficients set by MinNLCSetScale() MaxIts - maximum number of iterations. If MaxIts=0, the number of iterations is unlimited. Passing EpsX=0 and MaxIts=0 (simultaneously) will lead to automatic selection of the stopping condition. -- ALGLIB -- Copyright 06.06.2014 by Bochkanov Sergey *************************************************************************/
void minnlcsetcond(minnlcstate &state, const double epsx, const ae_int_t maxits, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  [5]  [6]  

/************************************************************************* This function sets stopping conditions for the optimizer. This function allows to set three types of stopping conditions: * iterations limit * stopping upon performing a short step (depending on the specific solver being used it may stop as soon as the first short step was made, or only after performing several sequential short steps) * stopping upon having a small change in the target (depending on the specific solver being used it may stop as soon as the first step with small change in the target was made, or only after performing several sequential steps) INPUT PARAMETERS: State - structure which stores algorithm state EpsF - >=0 The optimizer will stop as soon as the following condition is met: |f_scl(k+1)-f_scl(k)| <= EpsF*max(|f_scl(k+1)|,|f_scl(k)|,1) where f_scl is the rescaled target used internally by the optimizer (ALGLIB optimizers usually apply rescaling in order to normalize target and constraints). EpsX - >=0 The subroutine finishes its work if on k+1-th iteration the condition |v|<=EpsX is fulfilled, where: * |.| means Euclidian norm * v - scaled step vector, v[i]=dx[i]/s[i] * dx - step vector, dx=X(k+1)-X(k) * s - scaling coefficients set by MinNLCSetScale() MaxIts - maximum number of iterations. If MaxIts=0, the number of iterations is unlimited. Passing EpsF=0, EpsX=0 and MaxIts=0 (simultaneously) will lead to the automatic selection of the stopping condition. -- ALGLIB -- Copyright 21.09.2023 by Bochkanov Sergey *************************************************************************/
void minnlcsetcond3(minnlcstate &state, const double epsf, const double epsx, const ae_int_t maxits, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets linear constraints for MinNLC optimizer. Linear constraints are inactive by default (after initial creation). They are preserved after algorithm restart with MinNLCRestartFrom(). You may combine linear constraints with boundary ones - and with nonlinear ones! If your problem has mixed constraints, you may explicitly specify some of them as linear. It may help optimizer to handle them more efficiently. INPUT PARAMETERS: State - structure previously allocated with MinNLCCreate call. C - linear constraints, array[K,N+1]. Each row of C represents one constraint, either equality or inequality (see below): * first N elements correspond to coefficients, * last element corresponds to the right part. All elements of C (including right part) must be finite. CT - type of constraints, array[K]: * if CT[i]>0, then I-th constraint is C[i,*]*x >= C[i,n+1] * if CT[i]=0, then I-th constraint is C[i,*]*x = C[i,n+1] * if CT[i]<0, then I-th constraint is C[i,*]*x <= C[i,n+1] K - number of equality/inequality constraints, K>=0: * if given, only leading K elements of C/CT are used * if not given, automatically determined from sizes of C/CT NOTE 1: when you solve your problem with augmented Lagrangian solver, linear constraints are satisfied only approximately! It is possible that algorithm will evaluate function outside of feasible area! -- ALGLIB -- Copyright 06.06.2014 by Bochkanov Sergey *************************************************************************/
void minnlcsetlc(minnlcstate &state, const real_2d_array &c, const integer_1d_array &ct, const ae_int_t k, const xparams _xparams = alglib::xdefault);
void minnlcsetlc(minnlcstate &state, const real_2d_array &c, const integer_1d_array &ct, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets two-sided linear constraints AL <= A*x <= AU with a sparse constraining matrix A. Recommended for large-scale problems. This function overwrites linear (non-box) constraints set by previous calls (if such calls were made). INPUT PARAMETERS: State - structure previously allocated with minnlccreate() call. A - sparse matrix with size [K,N] (exactly!). Each row of A represents one general linear constraint. A can be stored in any sparse storage format. AL, AU - lower and upper bounds, array[K]; * AL[i]=AU[i] => equality constraint Ai*x * AL[i]<AU[i] => two-sided constraint AL[i]<=Ai*x<=AU[i] * AL[i]=-INF => one-sided constraint Ai*x<=AU[i] * AU[i]=+INF => one-sided constraint AL[i]<=Ai*x * AL[i]=-INF, AU[i]=+INF => constraint is ignored K - number of equality/inequality constraints, K>=0. If K=0 is specified, A, AL, AU are ignored. -- ALGLIB -- Copyright 15.04.2024 by Bochkanov Sergey *************************************************************************/
void minnlcsetlc2(minnlcstate &state, const sparsematrix &a, const real_1d_array &al, const real_1d_array &au, const ae_int_t k, const xparams _xparams = alglib::xdefault);
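A minimal usage sketch (illustrative only; assumes a 2-variable problem in `state`), setting the two-sided constraint 0<=x0+x1<=2 and the equality x0-x1=1 via a sparse matrix:

```cpp
sparsematrix a;
sparsecreate(2, 2, a);                               // 2 constraints, 2 variables
sparseset(a, 0, 0, 1.0); sparseset(a, 0, 1,  1.0);   // row 0: x0+x1
sparseset(a, 1, 0, 1.0); sparseset(a, 1, 1, -1.0);   // row 1: x0-x1
real_1d_array al = "[0,1]";
real_1d_array au = "[2,1]";                          // al[1]=au[1] => equality
minnlcsetlc2(state, a, al, au, 2);
```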
/*************************************************************************
This function sets two-sided linear constraints AL <= A*x <= AU with a
dense constraint matrix A.

INPUT PARAMETERS:
    State   -   structure previously allocated with minnlccreate() call.
    A       -   linear constraints, array[K,N]. Each row of A represents
                one constraint. One-sided inequality constraints, two-
                sided inequality constraints and equality constraints are
                supported (see below).
    AL, AU  -   lower and upper bounds, array[K];
                * AL[i]=AU[i] => equality constraint Ai*x
                * AL[i]<AU[i] => two-sided constraint AL[i]<=Ai*x<=AU[i]
                * AL[i]=-INF  => one-sided constraint Ai*x<=AU[i]
                * AU[i]=+INF  => one-sided constraint AL[i]<=Ai*x
                * AL[i]=-INF, AU[i]=+INF => constraint is ignored
    K       -   number of equality/inequality constraints, K>=0; if not
                given, inferred from sizes of A, AL, AU.

  -- ALGLIB --
     Copyright 15.04.2024 by Bochkanov Sergey
*************************************************************************/
void minnlcsetlc2dense(minnlcstate &state, const real_2d_array &a, const real_1d_array &al, const real_1d_array &au, const ae_int_t k, const xparams _xparams = alglib::xdefault);
void minnlcsetlc2dense(minnlcstate &state, const real_2d_array &a, const real_1d_array &al, const real_1d_array &au, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets two-sided linear constraints AL <= A*x <= AU with a
mixed constraining matrix A including a sparse part (first SparseK rows)
and a dense part (last DenseK rows). Recommended for large-scale problems.

This function overwrites linear (non-box) constraints set by previous
calls (if such calls were made).

This function may be useful if the constraint matrix includes a large
number of both types of rows - dense and sparse. If you have just a few
sparse rows, you may represent them in dense format without losing
performance. Similarly, if you have just a few dense rows, you may store
them in sparse format with almost the same performance.

INPUT PARAMETERS:
    State   -   structure previously allocated with minnlccreate() call.
    SparseA -   sparse matrix with size [SparseK,N] (exactly!). Each row
                of SparseA represents one general linear constraint.
                SparseA can be stored in any sparse storage format.
    SparseK -   number of sparse constraints, SparseK>=0
    DenseA  -   linear constraints, array[DenseK,N], set of dense
                constraints. Each row of DenseA represents one general
                linear constraint.
    DenseK  -   number of dense constraints, DenseK>=0
    AL, AU  -   lower and upper bounds, array[SparseK+DenseK], with the
                former SparseK elements corresponding to sparse
                constraints, and the latter DenseK elements corresponding
                to dense constraints;
                * AL[i]=AU[i] => equality constraint Ai*x
                * AL[i]<AU[i] => two-sided constraint AL[i]<=Ai*x<=AU[i]
                * AL[i]=-INF  => one-sided constraint Ai*x<=AU[i]
                * AU[i]=+INF  => one-sided constraint AL[i]<=Ai*x
                * AL[i]=-INF, AU[i]=+INF => constraint is ignored

  -- ALGLIB --
     Copyright 15.04.2024 by Bochkanov Sergey
*************************************************************************/
void minnlcsetlc2mixed(minnlcstate &state, const sparsematrix &sparsea, const ae_int_t ksparse, const real_2d_array &densea, const ae_int_t kdense, const real_1d_array &al, const real_1d_array &au, const xparams _xparams = alglib::xdefault);
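A minimal usage sketch (illustrative only; assumes a 2-variable problem in `state`), combining one sparse row (x0+x1<=2) with one dense row (x0-x1=1):

```cpp
sparsematrix sa;
sparsecreate(1, 2, sa);                              // 1 sparse constraint
sparseset(sa, 0, 0, 1.0); sparseset(sa, 0, 1, 1.0);  // sparse row: x0+x1
real_2d_array da = "[[1,-1]]";                       // dense row: x0-x1
real_1d_array al = "[-inf,1]";                       // sparse bounds first,
real_1d_array au = "[2,1]";                          // then dense ones
minnlcsetlc2mixed(state, sa, 1, da, 1, al, au);
```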
/*************************************************************************
This function sets nonlinear constraints for the MinNLC optimizer. It
sets constraints of the form

    Ci(x)=0     for i=0..NLEC-1
    Ci(x)<=0    for i=NLEC..NLEC+NLIC-1

See MinNLCSetNLC2() for a modern function which allows greater
flexibility in the constraint specification.

  -- ALGLIB --
     Copyright 06.06.2014 by Bochkanov Sergey
*************************************************************************/
void minnlcsetnlc(minnlcstate &state, const ae_int_t nlec, const ae_int_t nlic, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  [5]  [6]  

/*************************************************************************
This function sets two-sided nonlinear constraints for the MinNLC
optimizer.

In fact, this function sets only the constraints COUNT and their BOUNDS.
Constraints themselves (constraint functions) are passed to the
MinNLCOptimize() method as callbacks.

MinNLCOptimize() method accepts a user-defined vector function F[] and
its Jacobian J[], where:
* the first element of F[] and the first row of J[] correspond to the
  target
* subsequent NNLC components of F[] (and rows of J[]) correspond to two-
  sided nonlinear constraints NL<=C(x)<=NU, where
  * NL[i]=NU[i] => I-th row is an equality constraint Ci(x)=NL[i]
  * NL[i]<NU[i] => I-th row is a two-sided constraint NL[i]<=Ci(x)<=NU[i]
  * NL[i]=-INF  => I-th row is a one-sided constraint Ci(x)<=NU[i]
  * NU[i]=+INF  => I-th row is a one-sided constraint NL[i]<=Ci(x)
  * NL[i]=-INF, NU[i]=+INF => constraint is ignored

NOTE: you may combine nonlinear constraints with linear/boundary ones. If
      your problem has mixed constraints, you may explicitly specify some
      of them as linear or box ones. It helps the optimizer to handle
      them more efficiently.

INPUT PARAMETERS:
    State   -   structure previously allocated with MinNLCCreate call.
    NL      -   array[NNLC], lower bounds, can contain -INF
    NU      -   array[NNLC], upper bounds, can contain +INF
    NNLC    -   constraints count, NNLC>=0

NOTE 1: nonlinear constraints are satisfied only approximately! It is
        possible that the algorithm will evaluate the function outside of
        the feasible area!

NOTE 2: the algorithm scales variables according to the scale specified
        by the MinNLCSetScale() function, so it can handle problems with
        badly scaled variables (as long as we KNOW their scales).

        However, there is no way to automatically scale nonlinear
        constraints. Inappropriate scaling of nonlinear constraints may
        ruin convergence. Solving a problem with the constraint
        "1000*G0(x)=0" is NOT the same as solving it with the constraint
        "0.001*G0(x)=0".

        It means that YOU are the one who is responsible for the correct
        scaling of the nonlinear constraints Ci(x). We recommend you to
        scale nonlinear constraints in such a way that the Jacobian rows
        have approximately unit magnitude (for problems with unit scale)
        or have magnitude approximately equal to 1/S[i] (where S is a
        scale set by the MinNLCSetScale() function).

  -- ALGLIB --
     Copyright 23.09.2023 by Bochkanov Sergey
*************************************************************************/
void minnlcsetnlc2(minnlcstate &state, const real_1d_array &nl, const real_1d_array &nu, const ae_int_t nnlc, const xparams _xparams = alglib::xdefault);
void minnlcsetnlc2(minnlcstate &state, const real_1d_array &nl, const real_1d_array &nu, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  [5]  [6]  

/*************************************************************************
This function sets the specific finite difference formula to be used for
numerical differentiation. It works only for optimizers created with the
minnlccreatef() function; in other cases it has no effect.

INPUT PARAMETERS:
    State       -   structure previously allocated with MinNLCCreateF
                    call.
    FormulaType -   formula type:
                    * 5 for a 5-point formula (actually, only 4 values
                      per variable are used, the ones at x+h, x+h/2,
                      x-h/2 and x-h; the central one has a zero
                      multiplier due to symmetry). The most precise and
                      the most expensive option, chosen by default.
                    * 3 for a 3-point formula, which is also known as a
                      symmetric difference quotient (the formula actually
                      uses only two function values per variable: at x+h
                      and x-h). A good compromise for medium-accuracy
                      setups.
                    * 2 for a forward (or backward, depending on variable
                      bounds) finite difference (f(x+h)-f(x))/h. This
                      formula has the lowest accuracy. However, it is 4x
                      faster than the 5-point formula and 2x faster than
                      the 3-point one because, in addition to the central
                      value f(x), it needs only one additional function
                      evaluation per variable.

  -- ALGLIB --
     Copyright 03.12.2024 by Bochkanov Sergey
*************************************************************************/
void minnlcsetnumdiff(minnlcstate &state, const ae_int_t formulatype, const xparams _xparams = alglib::xdefault);
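A minimal usage sketch (illustrative only; assumes `state` was created with `minnlccreatef()` so that numerical differentiation is active), trading some accuracy for speed by switching to the 3-point formula:

```cpp
// 3-point symmetric formula: 2 function values per variable
// instead of the 4 used by the default 5-point formula
minnlcsetnumdiff(state, 3);
```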
/*************************************************************************
This function sets scaling coefficients for the NLC optimizer.

ALGLIB optimizers use scaling matrices to test stopping conditions (step
size and gradient are scaled before comparison with tolerances). Scales
are also used by the finite difference variant of the optimizer - the
step along the I-th axis is equal to DiffStep*S[I]. Finally, variable
scales are used for preconditioning (i.e. to speed up the solver).

The scale of the I-th variable is a translation invariant measure of:
a) "how large" the variable is
b) how large the step should be to make significant changes in the
   function

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    S       -   array[N], non-zero scaling coefficients. S[i] may be
                negative, sign doesn't matter.

  -- ALGLIB --
     Copyright 06.06.2014 by Bochkanov Sergey
*************************************************************************/
void minnlcsetscale(minnlcstate &state, const real_1d_array &s, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  [5]  [6]  

/*************************************************************************
This function sets the maximum step length (after scaling of the step
vector with respect to the variable scales specified by the
minnlcsetscale() call).

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    StpMax  -   maximum step length, >=0. Set StpMax to 0.0 (default) if
                you don't want to limit the step length.

Use this subroutine when you optimize a target function which contains
exp() or other fast-growing functions, and the optimization algorithm
makes steps so large that they lead to overflow. This function allows us
to reject steps that are too large (and therefore expose us to possible
overflow) without actually calculating the function value at x+stp*d.

NOTE: different solvers employed by the MinNLC optimizer may use
      different norms for the step.

  -- ALGLIB --
     Copyright 02.04.2010 by Bochkanov Sergey
*************************************************************************/
void minnlcsetstpmax(minnlcstate &state, const double stpmax, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function turns on/off reporting.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    NeedXRep-   whether iteration reports are needed or not

If NeedXRep is True, the algorithm will call the rep() callback function
if it is provided to MinNLCOptimize().

NOTE: the algorithm passes two parameters to the rep() callback - the
      current point and the penalized function value at the current
      point. Important - the reported value is NOT the value of the
      function being minimized. It is the sum of the value of the
      function being minimized and the penalty term.

  -- ALGLIB --
     Copyright 28.11.2010 by Bochkanov Sergey
*************************************************************************/
void minnlcsetxrep(minnlcstate &state, const bool needxrep, const xparams _xparams = alglib::xdefault);
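A minimal usage sketch (illustrative only; `report_progress` is a hypothetical callback name, and `state`/`nlcfunc1_jac` are assumed to come from a setup like the examples below), enabling per-iteration reports:

```cpp
// progress-report callback; note that 'func' is the penalized value,
// not the raw target being minimized
void report_progress(const real_1d_array &x, double func, void *ptr)
{
    printf("at %s, penalized F=%.6f\n", x.tostring(3).c_str(), func);
}

// ...later, during optimizer setup:
minnlcsetxrep(state, true);
alglib::minnlcoptimize(state, nlcfunc1_jac, report_progress);
```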
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void  nlcfunc1_jac(const real_1d_array &x, real_1d_array &fi, real_2d_array &jac, void *ptr)
{
    //
    // this callback calculates
    //
    //     f0(x0,x1) = -x0+x1
    //     f1(x0,x1) = x0^2+x1^2-1
    //
    // and Jacobian matrix J = [dfi/dxj]
    //
    fi[0] = -x[0]+x[1];
    fi[1] = x[0]*x[0] + x[1]*x[1] - 1.0;
    jac[0][0] = -1.0;
    jac[0][1] = +1.0;
    jac[1][0] = 2*x[0];
    jac[1][1] = 2*x[1];
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of
        //
        //     f(x0,x1) = -x0+x1
        //
        // subject to nonlinear equality constraint
        //
        //    x0^2 + x1^2 - 1 = 0
        //
        // IMPORTANT: the   MINNLC   optimizer    supports    parallel   numerical
        //            differentiation  ('callback   parallelism').  This  feature,
        //            which  is present  in  commercial  ALGLIB  editions, greatly
        //            accelerates optimization with numerical  differentiation  of
        //            expensive target functions.
        //
        //            Callback parallelism is usually  beneficial when computing a
        //            numerical gradient requires more than several  milliseconds.
        //            This particular  example,  of  course,  is  not  suited  for
        //            callback parallelism.
        //
        //            See ALGLIB Reference Manual, 'Working with commercial version'
        //            section,  and  comments  on   minnlcoptimize()  function for
        //            more information.
        //
        real_1d_array x0 = "[1,1]";
        real_1d_array s = "[1,1]";
        double epsx = 0.000001;
        ae_int_t maxits = 0;
        minnlcstate state;

        //
        // Create optimizer object and tune its settings:
        // * epsx=0.000001  stopping condition for inner iterations
        // * s=[1,1]        all variables have unit scale
        //
        minnlccreate(2, x0, state);
        minnlcsetcond(state, epsx, maxits);
        minnlcsetscale(state, s);

        //
        // Choose  one  of  nonlinear  programming  solvers  supported  by  MINNLC
        // optimizer.
        //
        // As of ALGLIB 4.01, the default (and recommended)  option  is to  use  a
        // large-scale filter-based SQP solver, which can utilize sparsity of  the
        // problem and uses a limited-memory BFGS update in order to  be  able  to
        // deal with thousands of variables.
        //
        // Other options include:
        // * SQP-BFGS (the same filter SQP solver relying on a dense BFGS  update,
        //   not intended for anything beyond 100 variables)
        // * ORBIT solver, a derivative-free solver  for optimization of expensive
        //   functions that are smooth, but have no gradient available
        // * AUL2 solver (a large-scale augmented  Lagrangian  solver for problems
        //   with  cheap  target  functions)
        // * SL1QP and SL1QP-BFGS legacy solvers which are similar to filter-based
        //   SQP/SQP-BFGS, but use a less  robust  L1  merit  function  to  handle
        //   constraints
        //
        minnlcsetalgosqp(state);

        //
        // Set constraints:
        //
        // Since  version  4.01,  ALGLIB  supports  the  most  general  form of
        // nonlinear constraints: two-sided   constraints  NL<=C(x)<=NU,   with
        // elements being possibly infinite (means that this specific bound  is
        // ignored). It includes equality constraints,  upper/lower  inequality
        // constraints, range constraints. In particular, the constraint
        //
        //        x0^2 + x1^2 - 1 = 0
        //
        // can be specified by passing NL=[0], NU=[0] to minnlcsetnlc2().
        //
        // Constraining functions themselves are passed as part  of  a  problem
        // Jacobian (see below).
        //
        real_1d_array nl = "[0]";
        real_1d_array nu = "[0]";
        minnlcsetnlc2(state, nl, nu);

        //
        // Optimize and test results.
        //
        // Optimizer object accepts vector function and its Jacobian, with first
        // component (Jacobian row) being target function, and next components
        // (Jacobian rows) being nonlinear equality and inequality constraints.
        //
        // So, our vector function has form
        //
        //     {f0,f1} = { -x0+x1 , x0^2+x1^2-1 }
        //
        // with Jacobian
        //
        //         [  -1    +1  ]
        //     J = [            ]
        //         [ 2*x0  2*x1 ]
        //
        // with f0 being target function, f1 being constraining function. Number
        // of equality/inequality constraints is specified by minnlcsetnlc2().
        //
        minnlcreport rep;
        real_1d_array x1;
        alglib::minnlcoptimize(state, nlcfunc1_jac);
        minnlcresults(state, x1, rep);
        printf("%s\n", x1.tostring(2).c_str()); // EXPECTED: [0.70710,-0.70710]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void  nlcfunc1_jac(const real_1d_array &x, real_1d_array &fi, real_2d_array &jac, void *ptr)
{
    //
    // this callback calculates
    //
    //     f0(x0,x1) = -x0+x1
    //     f1(x0,x1) = x0^2+x1^2-1
    //
    // and Jacobian matrix J = [dfi/dxj]
    //
    fi[0] = -x[0]+x[1];
    fi[1] = x[0]*x[0] + x[1]*x[1] - 1.0;
    jac[0][0] = -1.0;
    jac[0][1] = +1.0;
    jac[1][0] = 2*x[0];
    jac[1][1] = 2*x[1];
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of
        //
        //     f(x0,x1) = -x0+x1
        //
        // subject to box constraints
        //
        //    x0>=0, x1>=0
        //
        // and a nonlinear inequality constraint
        //
        //    x0^2 + x1^2 - 1 <= 0
        //
        // IMPORTANT: the   MINNLC   optimizer    supports    parallel   numerical
        //            differentiation  ('callback   parallelism').  This  feature,
        //            which  is present  in  commercial  ALGLIB  editions, greatly
        //            accelerates optimization with numerical  differentiation  of
        //            expensive target functions.
        //
        //            Callback parallelism is usually  beneficial when computing a
        //            numerical gradient requires more than several  milliseconds.
        //            This particular  example,  of  course,  is  not  suited  for
        //            callback parallelism.
        //
        //            See ALGLIB Reference Manual, 'Working with commercial version'
        //            section,  and  comments  on   minnlcoptimize()  function for
        //            more information.
        //
        real_1d_array x0 = "[0,0]";
        real_1d_array s = "[1,1]";
        double epsx = 0.000001;
        ae_int_t maxits = 0;
        minnlcstate state;

        //
        // Create optimizer object and tune its settings:
        // * epsx=0.000001  stopping condition for inner iterations
        // * s=[1,1]        all variables have unit scale; it is important to
        //                  tell optimizer about scales of your variables - it
        //                  greatly accelerates convergence and helps to perform
        //                  some important integrity checks.
        //
        minnlccreate(2, x0, state);
        minnlcsetcond(state, epsx, maxits);
        minnlcsetscale(state, s);

        //
        // Choose  one  of  nonlinear  programming  solvers  supported  by  MINNLC
        // optimizer.
        //
        // As of ALGLIB 4.01, the default (and recommended)  option  is to  use  a
        // large-scale filter-based SQP solver, which can utilize sparsity of  the
        // problem and uses a limited-memory BFGS update in order to  be  able  to
        // deal with thousands of variables.
        //
        // Other options include:
        // * SQP-BFGS (the same filter SQP solver relying on a dense BFGS  update,
        //   not intended for anything beyond 100 variables)
        // * ORBIT solver, a derivative-free solver  for optimization of expensive
        //   functions that are smooth, but have no gradient available
        // * AUL2 solver (a large-scale augmented  Lagrangian  solver for problems
        //   with  cheap  target  functions)
        // * SL1QP and SL1QP-BFGS legacy solvers which are similar to filter-based
        //   SQP/SQP-BFGS, but use a less  robust  L1  merit  function  to  handle
        //   constraints
        //
        minnlcsetalgosqp(state);

        //
        // Set constraints:
        //
        // 1. box constraints are passed with minnlcsetbc() call. The  solver also
        //    supports linear constraints with minnlcsetlc().
        //
        // 2. nonlinear constraints are more tricky - you can not "pack" a general
        //    nonlinear  function  into  a  double  precision  array.  That's  why
        //    minnlcsetnlc2() does not accept constraints itself - only constraint
        //    bounds are passed.
        //
        //    Since  version  4.01,  ALGLIB  supports  the  most  general  form of
        //    nonlinear constraints: two-sided   constraints  NL<=C(x)<=NU,   with
        //    elements being possibly infinite (means that this specific bound  is
        //    ignored). It includes equality constraints,  upper/lower  inequality
        //    constraints, range constraints. In particular, the constraint
        //
        //        x0^2 + x1^2 - 1 <= 0
        //
        //    can be specified by passing NL=[-INF], NU=[0] to minnlcsetnlc2().
        //
        //    Constraining functions themselves are passed as part  of  a  problem
        //    Jacobian (see below).
        //
        real_1d_array bndl = "[0,0]";
        real_1d_array bndu = "[+inf,+inf]";
        real_1d_array nl = "[-inf]";
        real_1d_array nu = "[0]";
        minnlcsetbc(state, bndl, bndu);
        minnlcsetnlc2(state, nl, nu);

        //
        // Optimize and test results.
        //
        // Optimizer object accepts vector function and its Jacobian, with first
        // component (Jacobian row) being target function, and next components
        // (Jacobian rows) being nonlinear constraints.
        //
        // So, our vector function has form
        //
        //     {f0,f1} = { -x0+x1 , x0^2+x1^2-1 }
        //
        // with Jacobian
        //
        //         [  -1    +1  ]
        //     J = [            ]
        //         [ 2*x0  2*x1 ]
        //
        // with f0 being target function, f1 being constraining function. Number
        // of equality/inequality constraints is specified by minnlcsetnlc2().
        //
        minnlcreport rep;
        real_1d_array x1;
        alglib::minnlcoptimize(state, nlcfunc1_jac);
        minnlcresults(state, x1, rep);
        printf("%s\n", x1.tostring(2).c_str()); // EXPECTED: [1.0000,0.0000]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void  nlcfunc2_jac(const real_1d_array &x, real_1d_array &fi, real_2d_array &jac, void *ptr)
{
    //
    // this callback calculates
    //
    //     f0(x0,x1,x2) = x0+x1
    //     f1(x0,x1,x2) = x2-exp(x0)
    //     f2(x0,x1,x2) = x0^2+x1^2-1
    //
    // and Jacobian matrix J = [dfi/dxj]
    //
    fi[0] = x[0]+x[1];
    fi[1] = x[2]-exp(x[0]);
    fi[2] = x[0]*x[0] + x[1]*x[1] - 1.0;
    jac[0][0] = 1.0;
    jac[0][1] = 1.0;
    jac[0][2] = 0.0;
    jac[1][0] = -exp(x[0]);
    jac[1][1] = 0.0;
    jac[1][2] = 1.0;
    jac[2][0] = 2*x[0];
    jac[2][1] = 2*x[1];
    jac[2][2] = 0.0;
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of
        //
        //     f(x0,x1) = x0+x1
        //
        // subject to nonlinear inequality constraint
        //
        //    x0^2 + x1^2 - 1 <= 0
        //
        // and nonlinear equality constraint
        //
        //    x2-exp(x0) = 0
        //
        // IMPORTANT: the   MINNLC   optimizer    supports    parallel   numerical
        //            differentiation  ('callback   parallelism').  This  feature,
        //            which  is present  in  commercial  ALGLIB  editions, greatly
        //            accelerates optimization with numerical  differentiation  of
        //            expensive target functions.
        //
        //            Callback parallelism is usually  beneficial when computing a
        //            numerical gradient requires more than several  milliseconds.
        //            This particular  example,  of  course,  is  not  suited  for
        //            callback parallelism.
        //
        //            See ALGLIB Reference Manual, 'Working with commercial version'
        //            section,  and  comments  on   minnlcoptimize() function  for
        //            more information.
        //
        real_1d_array x0 = "[0,0,0]";
        real_1d_array s = "[1,1,1]";
        double epsx = 0.000001;
        ae_int_t maxits = 0;
        minnlcstate state;
        minnlcreport rep;
        real_1d_array x1;

        //
        // Create optimizer object and tune its settings:
        // * epsx=0.000001  stopping condition for inner iterations
        // * s=[1,1,1]      all variables have unit scale
        // * upper limit on step length is specified (to avoid probing locations where exp() is large)
        //
        minnlccreate(3, x0, state);
        minnlcsetcond(state, epsx, maxits);
        minnlcsetscale(state, s);
        minnlcsetstpmax(state, 10.0);

        //
        // Choose  one  of  nonlinear  programming  solvers  supported  by  MINNLC
        // optimizer.
        //
        // As of ALGLIB 4.01, the default (and recommended)  option  is to  use  a
        // large-scale filter-based SQP solver, which can utilize sparsity of  the
        // problem and uses a limited-memory BFGS update in order to  be  able  to
        // deal with thousands of variables.
        //
        // Other options include:
        // * SQP-BFGS (the same filter SQP solver relying on a dense BFGS  update,
        //   not intended for anything beyond 100 variables)
        // * ORBIT solver, a derivative-free solver  for optimization of expensive
        //   functions that are smooth, but have no gradient available
        // * AUL2 solver (a large-scale augmented  Lagrangian  solver for problems
        //   with  cheap  target  functions)
        // * SL1QP and SL1QP-BFGS legacy solvers which are similar to filter-based
        //   SQP/SQP-BFGS, but use a less  robust  L1  merit  function  to  handle
        //   constraints
        //
        minnlcsetalgosqp(state);

        //
        // Set constraints:
        //
        // Since  version  4.01,  ALGLIB  supports  the  most  general  form of
        // nonlinear constraints: two-sided   constraints  NL<=C(x)<=NU,   with
        // elements being possibly infinite (means that this specific bound  is
        // ignored). It includes equality constraints,  upper/lower  inequality
        // constraints, range constraints. In particular, a pair of constraints
        //
        //        x2-exp(x0)       = 0
        //        x0^2 + x1^2 - 1 <= 0
        //
        // can be specified by passing NL=[0,-INF], NU=[0,0] to minnlcsetnlc2().
        //
        // Constraining functions themselves are passed as part  of  a  problem
        // Jacobian (see below).
        //
        real_1d_array nl = "[0,-inf]";
        real_1d_array nu = "[0,0]";
        minnlcsetnlc2(state, nl, nu);

        //
        // Optimize and test results.
        //
        // Optimizer object accepts vector function and its Jacobian, with first
        // component (Jacobian row) being target function, and next components
        // (Jacobian rows) being nonlinear equality and inequality constraints.
        //
        // So, our vector function has form
        //
        //     {f0,f1,f2} = { x0+x1 , x2-exp(x0) , x0^2+x1^2-1 }
        //
        // with Jacobian
        //
        //         [  +1      +1       0 ]
        //     J = [-exp(x0)  0        1 ]
        //         [ 2*x0    2*x1      0 ]
        //
        // with f0 being target function, f1 being equality constraint "f1=0",
        // f2 being inequality constraint "f2<=0".
        //
        alglib::minnlcoptimize(state, nlcfunc2_jac);
        minnlcresults(state, x1, rep);
        printf("%s\n", x1.tostring(2).c_str()); // EXPECTED: [-0.70710,-0.70710,0.49306]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void  nlcfunc2_fvec(const real_1d_array &x, real_1d_array &fi, void *ptr)
{
    //
    // this callback calculates
    //
    //     f0(x0,x1,x2) = x0+x1
    //     f1(x0,x1,x2) = x2-exp(x0)
    //     f2(x0,x1,x2) = x0^2+x1^2-1
    //
    fi[0] = x[0]+x[1];
    fi[1] = x[2]-exp(x[0]);
    fi[2] = x[0]*x[0] + x[1]*x[1] - 1.0;
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of
        //
        //     f(x0,x1) = x0+x1
        //
        // subject to box constraints
        //
        //    0<=x0<+inf, -inf<x1<+inf, -inf<x2<+inf
        //
        // nonlinear inequality constraint
        //
        //    x0^2 + x1^2 - 1 <= 0
        //
        // and nonlinear equality constraint
        //
        //    x2-exp(x0) = 0
        //
        // using numerical differentiation.
        //
        // IMPORTANT: the   MINNLC   optimizer    supports    parallel   numerical
        //            differentiation  ('callback   parallelism').  This  feature,
        //            which  is present  in  commercial  ALGLIB  editions, greatly
        //            accelerates optimization with numerical  differentiation  of
        //            expensive target functions.
        //
        //            Callback parallelism is usually  beneficial when computing a
        //            numerical gradient requires more than several  milliseconds.
        //            This particular  example,  of  course,  is  not  suited  for
        //            callback parallelism.
        //
        //            See ALGLIB Reference Manual, 'Working with commercial version'
        //            section,  and  comments  on   minnlcoptimize()  function  for
        //            more information.
        //
        real_1d_array x0 = "[0,0,0]";
        real_1d_array s = "[1,1,1]";
        double epsx = 0.000001;
        double diffstep = 0.000001;
        ae_int_t maxits = 0;
        minnlcstate state;
        minnlcreport rep;
        real_1d_array x1;

        //
        // Create optimizer object and tune its settings:
        // * epsx=0.000001      stopping condition for inner iterations
        // * diffstep=0.000001  numerical differentiation step (times variable scales)
        // * s=[1,1,1]          all variables have unit scale
        //
        minnlccreatef(3, x0, diffstep, state);
        minnlcsetcond(state, epsx, maxits);
        minnlcsetscale(state, s);
        minnlcsetstpmax(state, 10.0);

        //
        // Choose  one  of  nonlinear  programming  solvers  supported  by  MINNLC
        // optimizer.
        //
        // As of ALGLIB 4.01, the default (and recommended)  option  is to  use  a
        // large-scale filter-based SQP solver, which can utilize sparsity of  the
        // problem and uses a limited-memory BFGS update in order to  be  able  to
        // deal with thousands of variables.
        //
        // Alternatively, ORBIT solver can be used. This solver uses RBF surrogate
        // models to minimize expensive objectives with  no  gradient  information
        // available.
        //
        minnlcsetalgosqp(state);

        //
        // Set box constraints. ALGLIB  respects  box  constraints  and  does  not
        // evaluate  target  outside  of  a  box-constrained  area,  even   during
        // numerical differentiation. The finite difference  formula  is  modified
        // according to the current box constraints, if necessary.
        //
        real_1d_array bndl = "[0,-inf,-inf]";
        real_1d_array bndu = "[+inf,+inf,+inf]";
        minnlcsetbc(state, bndl, bndu);

        //
        // Set nonlinear constraints:
        //
        // Since  version  4.01,  ALGLIB  supports  the  most  general  form of
        // nonlinear constraints: two-sided  constraints  NL<=C(x)<=NU,   whose
        // elements may be infinite (an infinite element  means  that  specific
        // bound is ignored). This form includes equality  constraints,  upper/
        // lower inequality constraints, and range constraints. In particular,
        // a pair of constraints
        //
        //        x2-exp(x0)       = 0
        //        x0^2 + x1^2 - 1 <= 0
        //
        // can be specified by passing NL=[0,-INF], NU=[0,0] to minnlcsetnlc2().
        //
        // Constraining functions themselves are passed as part  of  a  problem
        // function vector (see below).
        //
        real_1d_array nl = "[0,-inf]";
        real_1d_array nu = "[0,0]";
        minnlcsetnlc2(state, nl, nu);

        //
        // Optimize and test results.
        //
        // The optimizer object accepts a vector function but not its Jacobian;
        // numerical differentiation is used  to  compute  Jacobian  values. The
        // first component of the function vector is the  target  function,  and
        // the next components are nonlinear constraints.
        //
        // So, our vector function has the form
        //
        //     {f0,f1,f2} = { x0+x1 , x2-exp(x0) , x0^2+x1^2-1 }
        //
        // with f0 being target function, f1 being equality constraint "f1=0",
        // f2 being inequality constraint "f2<=0".
        //
        alglib::minnlcoptimize(state, nlcfunc2_fvec);
        minnlcresults(state, x1, rep);
        printf("%s\n", x1.tostring(2).c_str()); // EXPECTED: [0,-1,1]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void  nlcfunc2_sjac(const real_1d_array &x, real_1d_array &fi, sparsematrix &sjac, void *ptr)
{
    //
    // this callback calculates
    //
    //     f0(x0,x1,x2) = x0+x1
    //     f1(x0,x1,x2) = x2-exp(x0)
    //     f2(x0,x1,x2) = x0^2+x1^2-1
    //
    // and Jacobian matrix J = [dfi/dxj].
    //
    // This callback returns the Jacobian as a sparse CRS-based matrix. This format is
    // intended for large-scale problems: it makes it possible to solve otherwise
    // intractable tasks with hundreds of thousands of variables. It also works for our
    // toy problem with just three variables, though.
    //
    // First, we calculate the function vector fi[].
    //
    fi[0] = x[0]+x[1];
    fi[1] = x[2]-exp(x[0]);
    fi[2] = x[0]*x[0] + x[1]*x[1] - 1.0;
    
    //
    // After that we initialize the sparse Jacobian. On entry to this function sjac is
    // a sparse CRS matrix in a special initial state with N columns but no rows (such
    // matrices can be created with the sparsecreatecrsempty() function).
    //
    // Such matrices can be used only for sequential addition of rows and nonzero
    // elements. You should add all rows that are expected (one for the objective and
    // one for each nonlinear constraint). Insufficient or excessive rows will be
    // treated as an error.
    // Row elements must be added from left to right, i.e. column indexes must monotonically
    // increase.
    //
    // NOTE: you should NOT reinitialize sjac with sparsecreate() or any other function. It
    //       is important that you append rows/cols to the matrix, but do not create a new
    //       instance of the matrix object. Doing so may cause hard-to-detect errors in
    //       the present or future ALGLIB versions.
    //
    sparseappendemptyrow(sjac);
    sparseappendelement(sjac, 0, 1.0);
    sparseappendelement(sjac, 1, 1.0);
    sparseappendemptyrow(sjac);
    sparseappendelement(sjac, 0, -exp(x[0]));
    sparseappendelement(sjac, 2, 1.0);
    sparseappendemptyrow(sjac);
    sparseappendelement(sjac, 0, 2.0*x[0]);
    sparseappendelement(sjac, 1, 2.0*x[1]);
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of
        //
        //     f(x0,x1) = x0+x1
        //
        // subject to nonlinear inequality constraint
        //
        //    x0^2 + x1^2 - 1 <= 0
        //
        // and nonlinear equality constraint
        //
        //    x2-exp(x0) = 0
        //
        // with their Jacobian being a sparse matrix.
        //
        // IMPORTANT: the   MINNLC   optimizer    supports    parallel   numerical
        //            differentiation  ('callback   parallelism').  This  feature,
        //            which  is present  in  commercial  ALGLIB  editions, greatly
        //            accelerates  optimization  with  numerical differentiation
        //            of expensive target functions.
        //
        //            Callback parallelism is usually  beneficial when computing a
        //            numerical gradient requires more than several  milliseconds.
        //            This particular  example,  of  course,  is  not  suited  for
        //            callback parallelism.
        //
        //            See ALGLIB Reference Manual, 'Working with commercial version'
        //            section,  and  comments  on   minnlcoptimize() function  for
        //            more information.
        //
        real_1d_array x0 = "[0,0,0]";
        real_1d_array s = "[1,1,1]";
        double epsx = 0.000001;
        ae_int_t maxits = 0;
        minnlcstate state;
        minnlcreport rep;
        real_1d_array x1;

        //
        // Create optimizer object and tune its settings:
        // * epsx=0.000001  stopping condition for inner iterations
        // * s=[1,1,1]      all variables have unit scale
        // * upper limit on step length is specified (to avoid probing locations where exp() is large)
        //
        minnlccreate(3, x0, state);
        minnlcsetcond(state, epsx, maxits);
        minnlcsetscale(state, s);
        minnlcsetstpmax(state, 10.0);

        //
        // Choose  one  of  nonlinear  programming  solvers  supported  by  MINNLC
        // optimizer.
        //
        // As of ALGLIB 4.02, the only solver which is fully  sparse-capable  is a
        // large-scale filter-based SQP solver, which can utilize sparsity of  the
        // problem and uses a limited-memory BFGS update in order to  be  able  to
        // deal with thousands of variables.
        //
        minnlcsetalgosqp(state);

        //
        // Set constraints:
        //
        // Since  version  4.01,  ALGLIB  supports  the  most  general  form of
        // nonlinear constraints: two-sided  constraints  NL<=C(x)<=NU,   whose
        // elements may be infinite (an infinite element  means  that  specific
        // bound is ignored). This form includes equality  constraints,  upper/
        // lower inequality constraints, and range constraints. In particular,
        // a pair of constraints
        //
        //        x2-exp(x0)       = 0
        //        x0^2 + x1^2 - 1 <= 0
        //
        // can be specified by passing NL=[0,-INF], NU=[0,0] to minnlcsetnlc2().
        //
        // Constraining functions themselves are passed as part  of  a  problem
        // Jacobian (see below).
        //
        real_1d_array nl = "[0,-inf]";
        real_1d_array nu = "[0,0]";
        minnlcsetnlc2(state, nl, nu);

        //
        // Optimize and test results.
        //
        // The optimizer object accepts a vector function and its Jacobian, with
        // the first component (Jacobian row) being the target function, and the
        // next components (Jacobian rows) being  nonlinear  equality  and
        // inequality constraints.
        //
        // So, our vector function has the form
        //
        //     {f0,f1,f2} = { x0+x1 , x2-exp(x0) , x0^2+x1^2-1 }
        //
        // with Jacobian
        //
        //         [  +1      +1       0 ]
        //     J = [-exp(x0)  0        1 ]
        //         [ 2*x0    2*x1      0 ]
        //
        // with f0 being target function, f1 being equality constraint "f1=0",
        // f2 being inequality constraint "f2<=0". The Jacobian is stored as a
        // sparse matrix. See comments on the callback for  more  information
        // about working with sparse Jacobians.
        //
        alglib::minnlcoptimize(state, nlcfunc2_sjac);
        minnlcresults(state, x1, rep);
        printf("%s\n", x1.tostring(2).c_str()); // EXPECTED: [-0.70710,-0.70710,0.49306]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void  nlcfunc2_fvec(const real_1d_array &x, real_1d_array &fi, void *ptr)
{
    //
    // this callback calculates
    //
    //     f0(x0,x1,x2) = x0+x1
    //     f1(x0,x1,x2) = x2-exp(x0)
    //     f2(x0,x1,x2) = x0^2+x1^2-1
    //
    fi[0] = x[0]+x[1];
    fi[1] = x[2]-exp(x[0]);
    fi[2] = x[0]*x[0] + x[1]*x[1] - 1.0;
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of
        //
        //     f(x0,x1) = x0+x1
        //
        // subject to nonlinear constraints
        //
        //    x0^2 + x1^2 - 1 <= 0
        //    x2-exp(x0)       = 0
        //
        // using surrogate RBF models.
        //
        // This optimization mode is intended for expensive objectives/constraints
        // that  lack   derivative   information  (e.g.  obtained  from  numerical
        // simulation or by observing some physical  system).  It  provides  rapid
        // convergence to medium-quality approximate solutions (error in objective
        // being as low as 1% or 0.1%).
        //
        real_1d_array x0 = "[0,0,0]";
        real_1d_array s = "[1,1,1]";
        double epsx = 0.000001;
        ae_int_t maxits = 0;
        minnlcstate state;
        minnlcreport rep;
        real_1d_array x1;

        //
        // Create optimizer object and tune its settings:
        // * epsx=0.000001  stopping condition for inner iterations
        // * s=[1,1,1]      all variables have unit scale
        // * set ORBIT solver
        //
        minnlccreate(3, x0, state);
        minnlcsetcond(state, epsx, maxits);
        minnlcsetscale(state, s);
        minnlcsetalgoorbit(state, 0, 0);

        //
        // Set constraints:
        //
        // Since  version  4.01,  ALGLIB  supports  the  most  general  form of
        // nonlinear constraints: two-sided  constraints  NL<=C(x)<=NU,   whose
        // elements may be infinite (an infinite element  means  that  specific
        // bound is ignored). This form includes equality  constraints,  upper/
        // lower inequality constraints, and range constraints. In particular,
        // a pair of constraints
        //
        //        x2-exp(x0)       = 0
        //        x0^2 + x1^2 - 1 <= 0
        //
        // can be specified by passing NL=[0,-INF], NU=[0,0] to minnlcsetnlc2().
        //
        // Constraining functions themselves are passed  as  part  of  a  function
        // vector (see below).
        //
        real_1d_array nl = "[0,-inf]";
        real_1d_array nu = "[0,0]";
        minnlcsetnlc2(state, nl, nu);

        //
        // Optimize and test results.
        //
        // The optimizer object accepts a  vector  function  (no  derivative
        // information is needed), with the first component being the objective
        // and the next 2 components being nonlinear constraints.
        //
        // So, our vector function has the form
        //
        //     {f0,f1,f2} = { x0+x1 , x2-exp(x0) , x0^2+x1^2-1 }
        //
        // with f0 being objective function, f1 being equality constraint "f1=0",
        // f2 being inequality constraint "f2<=0".
        //
        alglib::minnlcoptimize(state, nlcfunc2_fvec);
        minnlcresults(state, x1, rep);
        printf("%s\n", x1.tostring(2).c_str()); // EXPECTED: [-0.70710,-0.70710,0.49306]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

minnsreport
minnsstate
minnscreate
minnscreatef
minnsiteration
minnsoptimize
minnsrequesttermination
minnsrestartfrom
minnsresults
minnsresultsbuf
minnssetalgoags
minnssetbc
minnssetcond
minnssetlc
minnssetnlc
minnssetscale
minnssetxrep
minns_d_bc Nonsmooth box constrained optimization
minns_d_diff Nonsmooth unconstrained optimization with numerical differentiation
minns_d_nlc Nonsmooth nonlinearly constrained optimization
minns_d_unconstrained Nonsmooth unconstrained optimization
/*************************************************************************
This structure stores optimization report:
* IterationsCount     total number of inner iterations
* NFEV                number of gradient evaluations
* TerminationType     termination type (see below)
* CErr                maximum violation of all types of constraints
* LCErr               maximum violation of linear constraints
* NLCErr              maximum violation of nonlinear constraints

TERMINATION CODES

TerminationType field contains completion code, which can be:
  -8    internal integrity control detected infinite or NAN values in
        function/gradient. Abnormal termination signalled.
  -3    box constraints are inconsistent
  -1    inconsistent parameters were passed:
        * penalty parameter for minnssetalgoags() is zero, but we have
          nonlinear constraints set by minnssetnlc()
   2    sampling radius decreased below epsx
   5    MaxIts steps were taken
   7    stopping conditions are too stringent, further improvement is
        impossible, X contains best point found so far.
   8    User requested termination via MinNSRequestTermination()

Other fields of this structure are not documented and should not be used!
*************************************************************************/
class minnsreport
{
public:
    minnsreport();
    minnsreport(const minnsreport &rhs);
    minnsreport& operator=(const minnsreport &rhs);
    virtual ~minnsreport();
    ae_int_t iterationscount;
    ae_int_t nfev;
    double cerr;
    double lcerr;
    double nlcerr;
    ae_int_t terminationtype;
    ae_int_t varidx;
    ae_int_t funcidx;
};
/*************************************************************************
This object stores nonlinear optimizer state. You should use functions
provided by the MinNS subpackage to work with this object.
*************************************************************************/
class minnsstate
{
public:
    minnsstate();
    minnsstate(const minnsstate &rhs);
    minnsstate& operator=(const minnsstate &rhs);
    virtual ~minnsstate();
};
/*************************************************************************
                  NONSMOOTH NONCONVEX OPTIMIZATION
        SUBJECT TO BOX/LINEAR/NONLINEAR-NONSMOOTH CONSTRAINTS

DESCRIPTION:

The subroutine minimizes function F(x) of N arguments subject to any
combination of:
* bound constraints
* linear inequality constraints
* linear equality constraints
* nonlinear equality constraints Gi(x)=0
* nonlinear inequality constraints Hi(x)<=0

IMPORTANT: see MinNSSetAlgoAGS for important information on performance
           restrictions of the AGS solver.

REQUIREMENTS:
* starting point X0 must be feasible or not too far away from the
  feasible set
* F(), G(), H() are continuous, locally Lipschitz and continuously (but
  not necessarily twice) differentiable in an open dense subset of R^N.
  Functions F(), G() and H() may be nonsmooth and non-convex. Informally
  speaking, it means that functions are composed of large differentiable
  "patches", with nonsmoothness present only at the boundaries between
  these "patches". Most real-life nonsmooth functions satisfy these
  requirements: anything which involves a finite number of abs(), min()
  and max() is very likely to pass the test. For example, it is possible
  to optimize any of the following:
  * f=abs(x0)+2*abs(x1)
  * f=max(x0,x1)
  * f=sin(max(x0,x1)+abs(x2))
* for nonlinearly constrained problems: F() must be bounded from below
  without nonlinear constraints (this requirement is due to the fact
  that, contrary to box and linear constraints, nonlinear ones require
  special handling).
* user must provide function value and gradient for F(), H(), G() at all
  points where function/gradient can be calculated. If the optimizer
  requires a value exactly at the boundary between "patches" (say, at x=0
  for f=abs(x)), where the gradient is not defined, the user may resolve
  the tie arbitrarily (in our case - return +1 or -1 at its discretion).
* the NS solver supports numerical differentiation, i.e. it may
  differentiate your function for you, but it results in a 2N increase of
  function evaluations. Not recommended unless you solve really small
  problems. See minnscreatef() for more information on this
  functionality.

USAGE:

1. User initializes algorithm state with MinNSCreate() call and chooses
   what NLC solver to use. There is some solver which is used by default,
   with default settings, but you should NOT rely on the default choice.
   It may change in future releases of ALGLIB without notice, and no one
   can guarantee that the new solver will be able to solve your problem
   with default settings.

   On the other hand, if you choose the solver explicitly, you can be
   pretty sure that it will work with new ALGLIB releases.

   In the current release the following solvers can be used:
   * AGS solver (activated with MinNSSetAlgoAGS() function)

2. User adds boundary and/or linear and/or nonlinear constraints by
   means of calling one of the following functions:
   a) MinNSSetBC() for boundary constraints
   b) MinNSSetLC() for linear constraints
   c) MinNSSetNLC() for nonlinear constraints
   You may combine (a), (b) and (c) in one optimization problem.

3. User sets scale of the variables with MinNSSetScale() function. It is
   VERY important to set the scale of the variables, because nonlinearly
   constrained problems are hard to solve when variables are badly
   scaled.

4. User sets stopping conditions with MinNSSetCond().

5. Finally, user calls MinNSOptimize() function which takes algorithm
   state and pointer (delegate, etc) to callback function which
   calculates F/G/H.

6. User calls MinNSResults() to get the solution.

7. Optionally user may call MinNSRestartFrom() to solve another problem
   with same N but another starting point. MinNSRestartFrom() allows to
   reuse an already initialized structure.

INPUT PARAMETERS:
    N       -   problem dimension, N>0:
                * if given, only leading N elements of X are used
                * if not given, automatically determined from size of X
    X       -   starting point, array[N]:
                * it is better to set X to a feasible point
                * but X can be infeasible, in which case algorithm will
                  try to find a feasible point first, using X as initial
                  approximation.

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

NOTE: minnscreatef() function may be used if you do not have analytic
      gradient. This function creates a solver which uses numerical
      differentiation with user-specified step.

  -- ALGLIB --
     Copyright 18.05.2015 by Bochkanov Sergey
*************************************************************************/
void minnscreate(const ae_int_t n, const real_1d_array &x, minnsstate &state, const xparams _xparams = alglib::xdefault);
void minnscreate(const real_1d_array &x, minnsstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  

/*************************************************************************
Version of minnscreate() which uses numerical differentiation, i.e. you
do not have to calculate derivatives yourself. However, this version
needs 2N times more function evaluations.

A 2-point differentiation formula is used, because the more precise
4-point formula is unstable when used on non-smooth functions.

INPUT PARAMETERS:
    N       -   problem dimension, N>0:
                * if given, only leading N elements of X are used
                * if not given, automatically determined from size of X
    X       -   starting point, array[N]:
                * it is better to set X to a feasible point
                * but X can be infeasible, in which case algorithm will
                  try to find a feasible point first, using X as initial
                  approximation.
    DiffStep-   differentiation step, DiffStep>0. Algorithm performs
                numerical differentiation with step for I-th variable
                being equal to DiffStep*S[I] (here S[] is a scale vector,
                set by minnssetscale() function).
                Do not use too small steps, because it may lead to
                catastrophic cancellation during intermediate
                calculations.

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

  -- ALGLIB --
     Copyright 18.05.2015 by Bochkanov Sergey
*************************************************************************/
void minnscreatef(const ae_int_t n, const real_1d_array &x, const double diffstep, minnsstate &state, const xparams _xparams = alglib::xdefault);
void minnscreatef(const real_1d_array &x, const double diffstep, minnsstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This function provides a reverse communication interface.

The reverse communication interface is not documented and is not
recommended for use. See below for functions which provide a better
documented API.
*************************************************************************/
bool minnsiteration(minnsstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This family of functions is used to launch iterations of the nonlinear
optimizer.

These functions accept the following parameters:
    state   -   algorithm state
    fvec    -   callback which calculates function vector fi[] at given
                point x
    jac     -   callback which calculates function vector fi[] and
                Jacobian jac at given point x
    rep     -   optional callback which is called after each iteration,
                can be NULL
    ptr     -   optional pointer which is passed to func/grad/hess/jac/rep,
                can be NULL

NOTES:

1. This function has two different implementations: one which uses exact
   (analytical) user-supplied Jacobian, and one which uses only the
   function vector and numerically differentiates the function in order
   to obtain the gradient.

   Depending on the specific function used to create the optimizer
   object, you should choose the appropriate variant of minnsoptimize() -
   one which accepts function AND Jacobian, or one which accepts ONLY
   function.

   Be careful to choose the variant of minnsoptimize() which corresponds
   to your optimization scheme! The table below lists different
   combinations of callback (function/gradient) passed to minnsoptimize()
   and the specific function used to create the optimizer.

                     |         USER PASSED TO minnsoptimize()
   CREATED WITH      |  function only   |  function and gradient
   ------------------------------------------------------------
   minnscreatef()    |      works       |         FAILS
   minnscreate()     |      FAILS       |         works

   Here "FAILS" denotes inappropriate combinations of optimizer creation
   function and minnsoptimize() version. Attempts to use such a
   combination will lead to an exception. Either you did not pass the
   gradient when it WAS needed, or you passed the gradient when it was
   NOT needed.

  -- ALGLIB --
     Copyright 18.05.2015 by Bochkanov Sergey
*************************************************************************/
void minnsoptimize(minnsstate &state, void (*fvec)(const real_1d_array &x, real_1d_array &fi, void *ptr), void (*rep)(const real_1d_array &x, double func, void *ptr) = NULL, void *ptr = NULL, const xparams _xparams = alglib::xdefault);
void minnsoptimize(minnsstate &state, void (*jac)(const real_1d_array &x, real_1d_array &fi, real_2d_array &jac, void *ptr), void (*rep)(const real_1d_array &x, double func, void *ptr) = NULL, void *ptr = NULL, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  

/*************************************************************************
This subroutine submits a request for termination of the running
optimizer. It should be called from a user-supplied callback when the
user decides that it is time to "smoothly" terminate the optimization
process. As a result, the optimizer stops at the point which was "current
accepted" when the termination request was submitted, and returns error
code 8 (successful termination).

INPUT PARAMETERS:
    State   -   optimizer structure

NOTE: after the request for termination the optimizer may perform several
      additional calls to user-supplied callbacks. It does NOT guarantee
      to stop immediately - it just guarantees that these additional
      calls will be discarded later.

NOTE: calling this function on an optimizer which is NOT running will
      have no effect.

NOTE: multiple calls to this function are possible. The first call is
      counted, subsequent calls are silently ignored.

  -- ALGLIB --
     Copyright 18.05.2015 by Bochkanov Sergey
*************************************************************************/
void minnsrequesttermination(minnsstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This subroutine restarts the algorithm from a new point. All optimization
parameters (including constraints) are left unchanged.

This function allows solving multiple optimization problems (which must
have the same number of dimensions) without the object reallocation
penalty.

INPUT PARAMETERS:
    State   -   structure previously allocated with minnscreate() call.
    X       -   new starting point.

  -- ALGLIB --
     Copyright 18.05.2015 by Bochkanov Sergey
*************************************************************************/
void minnsrestartfrom(minnsstate &state, const real_1d_array &x, const xparams _xparams = alglib::xdefault);
/*************************************************************************
MinNS results

INPUT PARAMETERS:
    State   -   algorithm state

OUTPUT PARAMETERS:
    X       -   array[0..N-1], solution
    Rep     -   optimization report. You should check Rep.TerminationType
                in order to distinguish successful termination from an
                unsuccessful one:
                * -8   internal integrity control detected infinite or
                       NAN values in function/gradient. Abnormal
                       termination signalled.
                * -3   box constraints are inconsistent
                * -1   inconsistent parameters were passed:
                       * penalty parameter for minnssetalgoags() is zero,
                         but we have nonlinear constraints set by
                         minnssetnlc()
                *  2   sampling radius decreased below epsx
                *  7   stopping conditions are too stringent, further
                       improvement is impossible, X contains best point
                       found so far.
                *  8   User requested termination via
                       minnsrequesttermination()

  -- ALGLIB --
     Copyright 18.05.2015 by Bochkanov Sergey
*************************************************************************/
void minnsresults(const minnsstate &state, real_1d_array &x, minnsreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  

/*************************************************************************
Buffered implementation of minnsresults() which uses a pre-allocated
buffer to store X[]. If the buffer size is too small, it resizes the
buffer. It is intended to be used in the inner cycles of performance
critical algorithms where the array reallocation penalty is too large to
be ignored.

  -- ALGLIB --
     Copyright 18.05.2015 by Bochkanov Sergey
*************************************************************************/
void minnsresultsbuf(const minnsstate &state, real_1d_array &x, minnsreport &rep, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function tells the MinNS unit to use the AGS (adaptive gradient
sampling) algorithm for nonsmooth constrained optimization. This
algorithm is a slight modification of the one described in "An Adaptive
Gradient Sampling Algorithm for Nonsmooth Optimization" by Frank E.
Curtis and Xiaocun Que.

This optimizer has the following benefits and drawbacks:
+ robustness; it can be used with nonsmooth and nonconvex functions.
+ relatively easy tuning; most of the metaparameters are easy to select.
- it has the convergence rate of steepest descent, slower than CG/LBFGS.
- each iteration involves evaluation of ~2N gradient values and solution
  of a 2Nx2N quadratic programming problem, which limits applicability
  of the algorithm to small-scale problems (up to 50-100 variables).

IMPORTANT: this algorithm has convergence guarantees, i.e. it will
           steadily move towards some stationary point of the function.

           However, "stationary point" does not always mean "solution".
           Nonsmooth problems often have "flat spots", i.e. areas where
           the function does not change at all. Such "flat spots" are
           stationary points by definition, and the algorithm may be
           caught there.

           Nonsmooth CONVEX tasks are not prone to this problem. Say, if
           your function has the form f()=MAX(f0,f1,...), and f_i are
           convex, then f() is convex too, and you have guaranteed
           convergence to the solution.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    Radius  -   initial sampling radius, >=0. Internally multiplied by
                the vector of per-variable scales specified by
                minnssetscale().
                You should select a relatively large sampling radius,
                roughly proportional to the scaled length of the first
                steps of the algorithm. Something close to 0.1 in
                magnitude should be good for most problems.
                The AGS solver can automatically decrease the radius, so
                a too large radius is not a problem (assuming that you
                won't choose so large a radius that the algorithm samples
                the function at points too far away, where the gradient
                value is irrelevant).
                A too small radius won't cause the algorithm to fail, but
                it may slow the algorithm down (it may have to perform
                too short steps).
    Penalty -   penalty coefficient for nonlinear constraints:
                * for a problem with nonlinear constraints it should be
                  some problem-specific positive value, large enough that
                  the penalty term changes the shape of the function.
                  Starting from some problem-specific value the penalty
                  coefficient becomes large enough to exactly enforce
                  nonlinear constraints; larger values do not improve
                  precision. Increasing it too much may slow down
                  convergence, so you should choose it carefully.
                * can be zero for problems WITHOUT nonlinear constraints
                  (i.e. for unconstrained ones, or ones with just box or
                  linear constraints)
                * if you specify a zero value for a problem with at least
                  one nonlinear constraint, the algorithm will terminate
                  with error code -1.

ALGORITHM OUTLINE

The very basic outline of the unconstrained AGS algorithm is given
below:

0. If the sampling radius is below EpsX, or we performed more than
   MaxIts iterations - STOP.
1. Sample O(N) gradient values at random locations around the current
   point; informally speaking, this sample is an implicit piecewise
   linear model of the function, although the algorithm formulation does
   not mention that explicitly.
2. Solve a quadratic programming problem in order to find a descent
   direction.
3. If the QP solver tells us that we are near the solution, decrease the
   sampling radius and move to (0).
4. Perform backtracking line search.
5. After moving to the new point, goto (0).

Constraint handling details:
* box constraints are handled exactly by the algorithm
* linear/nonlinear constraints are handled by adding an L1 penalty.
  Because our solver can handle nonsmoothness, we can use the L1 penalty
  function, which is an exact one (i.e. the exact solution is returned
  under such a penalty).
* the penalty coefficient for linear constraints is chosen
  automatically; however, the penalty coefficient for nonlinear
  constraints must be specified by the user.

===== TRACING AGS SOLVER =================================================

The AGS solver supports advanced tracing capabilities. You can trace the
algorithm output by specifying the following trace symbols
(case-insensitive) by means of a trace_file() call:
* 'AGS'          -  for basic trace of algorithm steps and decisions.
                    Only short scalars (function values and deltas) are
                    printed. N-dimensional quantities like search
                    directions are NOT printed.
* 'AGS.DETAILED' -  for output of points being visited and search
                    directions. This symbol also implicitly defines
                    'AGS'. You can control the output format by
                    additionally specifying:
                    * nothing     to output in 6-digit exponential format
                    * 'PREC.E15'  to output in 15-digit exponential
                                  format
                    * 'PREC.F6'   to output in 6-digit fixed-point format
* 'AGS.DETAILED.SAMPLE' - for output of points being visited, search
                    directions and the gradient sample. May take a LOT of
                    space, do not use it on problems with more than
                    several tens of variables. This symbol also
                    implicitly defines 'AGS' and 'AGS.DETAILED'.

By default trace is disabled and adds no overhead to the optimization
process. However, specifying any of the symbols adds some formatting and
output-related overhead.

You may specify multiple symbols by separating them with commas:
> alglib::trace_file("AGS,PREC.F6", "path/to/trace.log")

  -- ALGLIB --
     Copyright 18.05.2015 by Bochkanov Sergey
*************************************************************************/
void minnssetalgoags(minnsstate &state, const double radius, const double penalty, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  

/*************************************************************************
This function sets boundary constraints.

Boundary constraints are inactive by default (after initial creation). They are preserved after algorithm restart with minnsrestartfrom().

INPUT PARAMETERS:
    State   -   structure stores algorithm state
    BndL    -   lower bounds, array[N]. If some (all) variables are unbounded, you may specify a very small number or -INF.
    BndU    -   upper bounds, array[N]. If some (all) variables are unbounded, you may specify a very large number or +INF.

NOTE 1: it is possible to specify BndL[i]=BndU[i]. In this case the I-th variable will be "frozen" at X[i]=BndL[i]=BndU[i].

NOTE 2: the AGS solver has the following useful properties:
* bound constraints are always satisfied exactly
* the function is evaluated only INSIDE the area specified by the bound constraints, even when numerical differentiation is used (the algorithm adjusts nodes according to the boundary constraints)

  -- ALGLIB --
     Copyright 18.05.2015 by Bochkanov Sergey
*************************************************************************/
void minnssetbc(minnsstate &state, const real_1d_array &bndl, const real_1d_array &bndu, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This function sets stopping conditions for the optimizer's iterations.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    EpsX    -   >=0; the AGS solver finishes its work if on the k+1-th iteration the sampling radius decreases below EpsX.
    MaxIts  -   maximum number of iterations. If MaxIts=0, the number of iterations is unlimited.

Passing EpsX=0 and MaxIts=0 (simultaneously) will lead to automatic stopping criterion selection. We do not recommend relying on the default choice in production code.

  -- ALGLIB --
     Copyright 18.05.2015 by Bochkanov Sergey
*************************************************************************/
void minnssetcond(minnsstate &state, const double epsx, const ae_int_t maxits, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  

/*************************************************************************
This function sets linear constraints.

Linear constraints are inactive by default (after initial creation). They are preserved after algorithm restart with minnsrestartfrom().

INPUT PARAMETERS:
    State   -   structure previously allocated with minnscreate() call.
    C       -   linear constraints, array[K,N+1]. Each row of C represents one constraint, either equality or inequality (see below):
                * first N elements correspond to coefficients,
                * last element corresponds to the right part.
                All elements of C (including the right part) must be finite.
    CT      -   type of constraints, array[K]:
                * if CT[i]>0, then I-th constraint is C[i,*]*x >= C[i,n+1]
                * if CT[i]=0, then I-th constraint is C[i,*]*x  = C[i,n+1]
                * if CT[i]<0, then I-th constraint is C[i,*]*x <= C[i,n+1]
    K       -   number of equality/inequality constraints, K>=0:
                * if given, only the leading K elements of C/CT are used
                * if not given, automatically determined from the sizes of C/CT

NOTE: linear (non-bound) constraints are satisfied only approximately:
* there always exists some minor violation (about the current sampling radius in magnitude during optimization, about EpsX in the solution) due to the use of a penalty method to handle constraints.
* numerical differentiation, if used, may lead to function evaluations outside of the feasible area, because the algorithm does NOT change the numerical differentiation formula according to linear constraints.

If you want constraints to be satisfied exactly, try to reformulate your problem in such a manner that all constraints become boundary ones (this kind of constraint is always satisfied exactly, both in the final solution and in all intermediate points).

  -- ALGLIB --
     Copyright 18.05.2015 by Bochkanov Sergey
*************************************************************************/
void minnssetlc(minnsstate &state, const real_2d_array &c, const integer_1d_array &ct, const ae_int_t k, const xparams _xparams = alglib::xdefault);
void minnssetlc(minnsstate &state, const real_2d_array &c, const integer_1d_array &ct, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets nonlinear constraints.

In fact, this function sets the NUMBER of nonlinear constraints. The constraints themselves (constraint functions) are passed to the minnsoptimize() method. This method requires a user-defined vector function F[] and its Jacobian J[], where:
* the first component of F[] and the first row of the Jacobian J[] correspond to the function being minimized
* the next NLEC components of F[] (and rows of J) correspond to nonlinear equality constraints G_i(x)=0
* the next NLIC components of F[] (and rows of J) correspond to nonlinear inequality constraints H_i(x)<=0

NOTE: you may combine nonlinear constraints with linear/boundary ones. If your problem has mixed constraints, you may explicitly specify some of them as linear ones. It may help the optimizer to handle them more efficiently.

INPUT PARAMETERS:
    State   -   structure previously allocated with minnscreate() call.
    NLEC    -   number of Non-Linear Equality Constraints (NLEC), >=0
    NLIC    -   number of Non-Linear Inequality Constraints (NLIC), >=0

NOTE 1: nonlinear constraints are satisfied only approximately! It is possible that the algorithm will evaluate the function outside of the feasible area!

NOTE 2: the algorithm scales variables according to the scale specified by the minnssetscale() function, so it can handle problems with badly scaled variables (as long as we KNOW their scales). However, there is no way to automatically scale nonlinear constraints Gi(x) and Hi(x). Inappropriate scaling of Gi/Hi may ruin convergence. Solving a problem with constraint "1000*G0(x)=0" is NOT the same as solving it with constraint "0.001*G0(x)=0". It means that YOU are the one who is responsible for the correct scaling of nonlinear constraints Gi(x) and Hi(x).
We recommend you to scale nonlinear constraints in such a way that the I-th component of dG/dX (or dH/dx) has approximately unit magnitude (for problems with unit scale) or has magnitude approximately equal to 1/S[i] (where S is a scale set by the minnssetscale() function).

NOTE 3: nonlinear constraints are always hard to handle, no matter what algorithm you try to use. Even basic box/linear constraints modify the function curvature by adding valleys and ridges. However, nonlinear constraints add valleys which are very hard to follow due to their "curved" nature. It means that optimization with a single nonlinear constraint may be significantly slower than optimization with multiple linear ones. This is a normal situation, and we recommend you to carefully choose the Rho parameter of minnssetalgoags(), because a too large value may slow down convergence.

  -- ALGLIB --
     Copyright 18.05.2015 by Bochkanov Sergey
*************************************************************************/
void minnssetnlc(minnsstate &state, const ae_int_t nlec, const ae_int_t nlic, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This function sets scaling coefficients for the NLC optimizer.

ALGLIB optimizers use scaling matrices to test stopping conditions (the step size and gradient are scaled before comparison with tolerances). The scale of the I-th variable is a translation invariant measure of:
a) "how large" the variable is
b) how large the step should be to make significant changes in the function

Scaling is also used by the finite difference variant of the optimizer - the step along the I-th axis is equal to DiffStep*S[I].

INPUT PARAMETERS:
    State   -   structure stores algorithm state
    S       -   array[N], non-zero scaling coefficients. S[i] may be negative, sign doesn't matter.

  -- ALGLIB --
     Copyright 18.05.2015 by Bochkanov Sergey
*************************************************************************/
void minnssetscale(minnsstate &state, const real_1d_array &s, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  

/*************************************************************************
This function turns on/off reporting.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    NeedXRep-   whether iteration reports are needed or not

If NeedXRep is True, the algorithm will call the rep() callback function if it is provided to minnsoptimize().

  -- ALGLIB --
     Copyright 28.11.2010 by Bochkanov Sergey
*************************************************************************/
void minnssetxrep(minnsstate &state, const bool needxrep, const xparams _xparams = alglib::xdefault);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void  nsfunc1_jac(const real_1d_array &x, real_1d_array &fi, real_2d_array &jac, void *ptr)
{
    //
    // this callback calculates
    //
    //     f0(x0,x1) = 2*|x0|+|x1|
    //
    // and Jacobian matrix J = [df0/dx0 df0/dx1]
    //
    fi[0] = 2*fabs(double(x[0]))+fabs(double(x[1]));
    jac[0][0] = 2*alglib::sign(x[0]);
    jac[0][1] = alglib::sign(x[1]);
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of
        //
        //     f(x0,x1) = 2*|x0|+|x1|
        //
        // subject to box constraints
        //
        //        1 <= x0 < +INF
        //     -INF <= x1 < +INF
        //
        // using nonsmooth nonlinear optimizer.
        //
        real_1d_array x0 = "[1,1]";
        real_1d_array s = "[1,1]";
        real_1d_array bndl = "[1,-inf]";
        real_1d_array bndu = "[+inf,+inf]";
        double epsx = 0.00001;
        double radius = 0.1;
        double rho = 0.0;
        ae_int_t maxits = 0;
        minnsstate state;
        minnsreport rep;
        real_1d_array x1;

        //
        // Create optimizer object, choose AGS algorithm and tune its settings:
        // * radius=0.1     good initial value; will be automatically decreased later.
        // * rho=0.0        penalty coefficient for nonlinear constraints; can be zero
        //                  because we do not have such constraints
        // * epsx=0.00001   stopping conditions
        // * s=[1,1]        all variables have unit scale
        //
        minnscreate(2, x0, state);
        minnssetalgoags(state, radius, rho);
        minnssetcond(state, epsx, maxits);
        minnssetscale(state, s);

        //
        // Set box constraints.
        //
        // General linear constraints are set in similar way (see comments on
        // minnssetlc() function for more information).
        //
        // You may combine box, linear and nonlinear constraints in one optimization
        // problem.
        //
        minnssetbc(state, bndl, bndu);

        //
        // Optimize and test results.
        //
        // Optimizer object accepts vector function and its Jacobian, with first
        // component (Jacobian row) being target function, and next components
        // (Jacobian rows) being nonlinear equality and inequality constraints
        // (box/linear ones are passed separately by means of minnssetbc() and
        // minnssetlc() calls).
        //
        // If you do not have nonlinear constraints (exactly our situation), then
        // you will have one-component function vector and 1xN Jacobian matrix.
        //
        // So, our vector function has form
        //
        //     {f0} = { 2*|x0|+|x1| }
        //
        // with Jacobian
        //
        //         [                       ]
        //     J = [ 2*sign(x0)   sign(x1) ]
        //         [                       ]
        //
        // NOTE: nonsmooth optimizer requires considerably more function
        //       evaluations than smooth solver - about 2N times more. Using
        //       numerical differentiation introduces an additional
        //       (multiplicative) 2N slowdown.
        //
        //       It means that if a smooth optimizer WITH user-supplied gradient
        //       needs 100 function evaluations to solve a 50-dimensional problem,
        //       then the AGS solver with user-supplied gradient will need about
        //       10,000 function evaluations, and with numerical gradient about
        //       1,000,000 function evaluations will be performed.
        //
        // NOTE: AGS solver used by us can handle nonsmooth and nonconvex
        //       optimization problems. It has convergence guarantees, i.e. it will
        //       converge to stationary point of the function after running for some
        //       time.
        //
        //       However, it is important to remember that "stationary point" is not
        //       equal to "solution". If your problem is convex, everything is OK.
        //       But nonconvex optimization problems may have "flat spots" - large
        //       areas where gradient is exactly zero, but function value is far away
        //       from optimal. Such areas are stationary points too, and optimizer
        //       may be trapped here.
        //
        //       "Flat spots" are nonsmooth equivalent of the saddle points, but with
        //       orders of magnitude worse properties - they may be quite large and
        //       hard to avoid. All nonsmooth optimizers are prone to this kind of the
        //       problem, because it is impossible to automatically distinguish "flat
        //       spot" from true solution.
        //
        //       This note is here to warn you that you should be very careful when
        //       you solve nonsmooth optimization problems. Visual inspection of
        //       results is essential.
        //
        //
        alglib::minnsoptimize(state, nsfunc1_jac);
        minnsresults(state, x1, rep);
        printf("%s\n", x1.tostring(2).c_str()); // EXPECTED: [1.0000,0.0000]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void  nsfunc1_fvec(const real_1d_array &x, real_1d_array &fi, void *ptr)
{
    //
    // this callback calculates
    //
    //     f0(x0,x1) = 2*|x0|+|x1|
    //
    fi[0] = 2*fabs(double(x[0]))+fabs(double(x[1]));
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of
        //
        //     f(x0,x1) = 2*|x0|+|x1|
        //
        // using nonsmooth nonlinear optimizer with numerical
        // differentiation provided by ALGLIB.
        //
        // NOTE: nonsmooth optimizer requires considerably more function
        //       evaluations than smooth solver - about 2N times more. Using
        //       numerical differentiation introduces an additional
        //       (multiplicative) 2N slowdown.
        //
        //       It means that if a smooth optimizer WITH user-supplied gradient
        //       needs 100 function evaluations to solve a 50-dimensional problem,
        //       then the AGS solver with user-supplied gradient will need about
        //       10,000 function evaluations, and with numerical gradient about
        //       1,000,000 function evaluations will be performed.
        //
        real_1d_array x0 = "[1,1]";
        real_1d_array s = "[1,1]";
        double epsx = 0.00001;
        double diffstep = 0.000001;
        double radius = 0.1;
        double rho = 0.0;
        ae_int_t maxits = 0;
        minnsstate state;
        minnsreport rep;
        real_1d_array x1;

        //
        // Create optimizer object, choose AGS algorithm and tune its settings:
        // * radius=0.1     good initial value; will be automatically decreased later.
        // * rho=0.0        penalty coefficient for nonlinear constraints; can be zero
        //                  because we do not have such constraints
        // * epsx=0.00001   stopping conditions
        // * s=[1,1]        all variables have unit scale
        //
        minnscreatef(2, x0, diffstep, state);
        minnssetalgoags(state, radius, rho);
        minnssetcond(state, epsx, maxits);
        minnssetscale(state, s);

        //
        // Optimize and test results.
        //
        // Optimizer object accepts vector function, with first component
        // being target function, and next components being nonlinear equality
        // and inequality constraints (box/linear ones are passed separately
        // by means of minnssetbc() and minnssetlc() calls).
        //
        // If you do not have nonlinear constraints (exactly our situation), then
        // you will have one-component function vector.
        //
        // So, our vector function has form
        //
        //     {f0} = { 2*|x0|+|x1| }
        //
        alglib::minnsoptimize(state, nsfunc1_fvec);
        minnsresults(state, x1, rep);
        printf("%s\n", x1.tostring(2).c_str()); // EXPECTED: [0.0000,0.0000]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void  nsfunc2_jac(const real_1d_array &x, real_1d_array &fi, real_2d_array &jac, void *ptr)
{
    //
    // this callback calculates function vector
    //
    //     f0(x0,x1) = 2*|x0|+|x1|
    //     f1(x0,x1) = x0-1
    //     f2(x0,x1) = -x1-1
    //
    // and Jacobian matrix J
    //
    //         [ df0/dx0   df0/dx1 ]
    //     J = [ df1/dx0   df1/dx1 ]
    //         [ df2/dx0   df2/dx1 ]
    //
    fi[0] = 2*fabs(double(x[0]))+fabs(double(x[1]));
    jac[0][0] = 2*alglib::sign(x[0]);
    jac[0][1] = alglib::sign(x[1]);
    fi[1] = x[0]-1;
    jac[1][0] = 1;
    jac[1][1] = 0;
    fi[2] = -x[1]-1;
    jac[2][0] = 0;
    jac[2][1] = -1;
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of
        //
        //     f(x0,x1) = 2*|x0|+|x1|
        //
        // subject to combination of equality and inequality constraints
        //
        //      x0  =  1
        //      x1 >= -1
        //
        // using nonsmooth nonlinear optimizer. Although these constraints
        // are linear, we treat them as general nonlinear ones in order to
        // demonstrate nonlinearly constrained optimization setup.
        //
        real_1d_array x0 = "[1,1]";
        real_1d_array s = "[1,1]";
        double epsx = 0.00001;
        double radius = 0.1;
        double rho = 50.0;
        ae_int_t maxits = 0;
        minnsstate state;
        minnsreport rep;
        real_1d_array x1;

        //
        // Create optimizer object, choose AGS algorithm and tune its settings:
        // * radius=0.1     good initial value; will be automatically decreased later.
        // * rho=50.0       penalty coefficient for nonlinear constraints. It is your
        //                  responsibility to choose good one - large enough that it
        //                  enforces constraints, but small enough in order to avoid
        //                  extreme slowdown due to ill-conditioning.
        // * epsx=0.00001   stopping conditions
        // * s=[1,1]        all variables have unit scale
        //
        minnscreate(2, x0, state);
        minnssetalgoags(state, radius, rho);
        minnssetcond(state, epsx, maxits);
        minnssetscale(state, s);

        //
        // Set general nonlinear constraints.
        //
        // This part is more tricky than working with box/linear constraints - you
        // can not "pack" general nonlinear function into double precision array.
        // That's why minnssetnlc() does not accept constraints itself - only
        // constraint COUNTS are passed: first parameter is number of equality
        // constraints, second one is number of inequality constraints.
        //
        // As for constraining functions - these functions are passed as part
        // of problem Jacobian (see below).
        //
        // NOTE: MinNS optimizer supports arbitrary combination of boundary, general
        //       linear and general nonlinear constraints. This example does not
        //       show how to work with general linear constraints, but you can
        //       easily find it in documentation on minnssetlc() function.
        //
        minnssetnlc(state, 1, 1);

        //
        // Optimize and test results.
        //
        // Optimizer object accepts vector function and its Jacobian, with first
        // component (Jacobian row) being target function, and next components
        // (Jacobian rows) being nonlinear equality and inequality constraints
        // (box/linear ones are passed separately by means of minnssetbc() and
        // minnssetlc() calls).
        //
        // Nonlinear equality constraints have form Gi(x)=0, inequality ones
        // have form Hi(x)<=0, so we may have to "normalize" constraints prior
        // to passing them to optimizer (right side is zero, constraints are
        // sorted, multiplied by -1 when needed).
        //
        // So, our vector function has form
        //
        //     {f0,f1,f2} = { 2*|x0|+|x1|,  x0-1, -x1-1 }
        //
        // with Jacobian
        //
        //         [ 2*sign(x0)   sign(x1) ]
        //     J = [     1           0     ]
        //         [     0          -1     ]
        //
        // which means that we have optimization problem
        //
        //     min{f0} subject to f1=0, f2<=0
        //
        // which is essentially same as
        //
        //     min { 2*|x0|+|x1| } subject to x0=1, x1>=-1
        //
        // NOTE: AGS solver used by us can handle nonsmooth and nonconvex
        //       optimization problems. It has convergence guarantees, i.e. it will
        //       converge to stationary point of the function after running for some
        //       time.
        //
        //       However, it is important to remember that "stationary point" is not
        //       equal to "solution". If your problem is convex, everything is OK.
        //       But nonconvex optimization problems may have "flat spots" - large
        //       areas where gradient is exactly zero, but function value is far away
        //       from optimal. Such areas are stationary points too, and optimizer
        //       may be trapped here.
        //
        //       "Flat spots" are nonsmooth equivalent of the saddle points, but with
        //       orders of magnitude worse properties - they may be quite large and
        //       hard to avoid. All nonsmooth optimizers are prone to this kind of the
        //       problem, because it is impossible to automatically distinguish "flat
        //       spot" from true solution.
        //
        //       This note is here to warn you that you should be very careful when
        //       you solve nonsmooth optimization problems. Visual inspection of
        //       results is essential.
        //
        alglib::minnsoptimize(state, nsfunc2_jac);
        minnsresults(state, x1, rep);
        printf("%s\n", x1.tostring(2).c_str()); // EXPECTED: [1.0000,0.0000]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void  nsfunc1_jac(const real_1d_array &x, real_1d_array &fi, real_2d_array &jac, void *ptr)
{
    //
    // this callback calculates
    //
    //     f0(x0,x1) = 2*|x0|+|x1|
    //
    // and Jacobian matrix J = [df0/dx0 df0/dx1]
    //
    fi[0] = 2*fabs(double(x[0]))+fabs(double(x[1]));
    jac[0][0] = 2*alglib::sign(x[0]);
    jac[0][1] = alglib::sign(x[1]);
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of
        //
        //     f(x0,x1) = 2*|x0|+|x1|
        //
        // using nonsmooth nonlinear optimizer.
        //
        real_1d_array x0 = "[1,1]";
        real_1d_array s = "[1,1]";
        double epsx = 0.00001;
        double radius = 0.1;
        double rho = 0.0;
        ae_int_t maxits = 0;
        minnsstate state;
        minnsreport rep;
        real_1d_array x1;

        //
        // Create optimizer object, choose AGS algorithm and tune its settings:
        // * radius=0.1     good initial value; will be automatically decreased later.
        // * rho=0.0        penalty coefficient for nonlinear constraints; can be zero
        //                  because we do not have such constraints
        // * epsx=0.00001   stopping conditions
        // * s=[1,1]        all variables have unit scale
        //
        minnscreate(2, x0, state);
        minnssetalgoags(state, radius, rho);
        minnssetcond(state, epsx, maxits);
        minnssetscale(state, s);

        //
        // Optimize and test results.
        //
        // Optimizer object accepts vector function and its Jacobian, with first
        // component (Jacobian row) being target function, and next components
        // (Jacobian rows) being nonlinear equality and inequality constraints
        // (box/linear ones are passed separately by means of minnssetbc() and
        // minnssetlc() calls).
        //
        // If you do not have nonlinear constraints (exactly our situation), then
        // you will have one-component function vector and 1xN Jacobian matrix.
        //
        // So, our vector function has form
        //
        //     {f0} = { 2*|x0|+|x1| }
        //
        // with Jacobian
        //
        //         [                       ]
        //     J = [ 2*sign(x0)   sign(x1) ]
        //         [                       ]
        //
        // NOTE: nonsmooth optimizer requires considerably more function
        //       evaluations than smooth solver - about 2N times more. Using
        //       numerical differentiation introduces an additional
        //       (multiplicative) 2N slowdown.
        //
        //       It means that if a smooth optimizer WITH user-supplied gradient
        //       needs 100 function evaluations to solve a 50-dimensional problem,
        //       then the AGS solver with user-supplied gradient will need about
        //       10,000 function evaluations, and with numerical gradient about
        //       1,000,000 function evaluations will be performed.
        //
        // NOTE: AGS solver used by us can handle nonsmooth and nonconvex
        //       optimization problems. It has convergence guarantees, i.e. it will
        //       converge to stationary point of the function after running for some
        //       time.
        //
        //       However, it is important to remember that "stationary point" is not
        //       equal to "solution". If your problem is convex, everything is OK.
        //       But nonconvex optimization problems may have "flat spots" - large
        //       areas where gradient is exactly zero, but function value is far away
        //       from optimal. Such areas are stationary points too, and optimizer
        //       may be trapped here.
        //
        //       "Flat spots" are nonsmooth equivalent of the saddle points, but with
        //       orders of magnitude worse properties - they may be quite large and
        //       hard to avoid. All nonsmooth optimizers are prone to this kind of the
        //       problem, because it is impossible to automatically distinguish "flat
        //       spot" from true solution.
        //
        //       This note is here to warn you that you should be very careful when
        //       you solve nonsmooth optimization problems. Visual inspection of
        //       results is essential.
        //
        alglib::minnsoptimize(state, nsfunc1_jac);
        minnsresults(state, x1, rep);
        printf("%s\n", x1.tostring(2).c_str()); // EXPECTED: [0.0000,0.0000]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

minqpreport
minqpstate
minqpaddlc2
minqpaddlc2dense
minqpaddlc2sparsefromdense
minqpaddpowccorthogonal
minqpaddpowccprimitive
minqpaddqc2
minqpaddqc2dense
minqpaddqc2list
minqpaddsoccorthogonal
minqpaddsoccprimitive
minqpclearcc
minqpclearqc
minqpcreate
minqpexport
minqpimport
minqpoptimize
minqpresults
minqpresultsbuf
minqpsetalgodenseaul
minqpsetalgodensegenipm
minqpsetalgodenseipm
minqpsetalgoquickqp
minqpsetalgosparseecqp
minqpsetalgosparsegenipm
minqpsetalgosparseipm
minqpsetbc
minqpsetbcall
minqpsetbci
minqpsetlc
minqpsetlc2
minqpsetlc2dense
minqpsetlc2mixed
minqpsetlcmixed
minqpsetlcmixedlegacy
minqpsetlcsparse
minqpsetlinearterm
minqpsetorigin
minqpsetquadraticterm
minqpsetquadratictermsparse
minqpsetscale
minqpsetscaleautodiag
minqpsetstartingpoint
minqp_conic L1-penalized quadratic programming using conic constraints
minqp_d_bc1 Box constrained dense quadratic programming
minqp_d_lc1 Linearly constrained dense quadratic programming
minqp_d_nonconvex Nonconvex quadratic programming
minqp_d_u1 Unconstrained dense quadratic programming
minqp_d_u2 Unconstrained sparse quadratic programming
/*************************************************************************
This structure stores the optimization report:
* InnerIterationsCount      number of inner iterations
* OuterIterationsCount      number of outer iterations
* NCholesky                 number of Cholesky decompositions
* NMV                       number of matrix-vector products (only products calculated as part of the iterative process are counted)
* TerminationType           completion code (see below)
* F                         for positive TerminationType, stores the quadratic model value at the solution
* LagBC                     Lagrange multipliers for box constraints, array[N]
* LagLC                     Lagrange multipliers for linear constraints, array[MSparse+MDense]
* LagQC                     Lagrange multipliers for quadratic constraints

=== COMPLETION CODES =====================================================

Completion codes:
* -9    failure of the automatic scale evaluation: one of the diagonal elements of the quadratic term is non-positive. Specify variable scales manually!
* -5    inappropriate solver was used:
        * QuickQP solver for a problem with general linear constraints (dense/sparse)
        * QuickQP/DENSE-AUL/DENSE-IPM/SPARSE-IPM for a problem with quadratic/conic constraints
        * ECQP for a problem with inequality or nonlinear equality constraints
* -4    the problem is highly likely to be unbounded; either one of the solvers found an unconstrained direction of negative curvature, or the objective simply decreased too much (by more than 1E50).
* -3    inconsistent constraints (or, maybe, a feasible point is too hard to find). If you are sure that the constraints are feasible, try to restart the optimizer with a better initial approximation.
* -2    the IPM solver has difficulty finding a primal/dual feasible point. It is likely that the problem is either infeasible or unbounded, but it is difficult to determine the exact reason for termination. X contains the best point found so far.
* 1..4  successful completion
*  5    MaxIts steps were taken
*  7    stopping conditions are too stringent, further improvement is impossible, X contains the best point found so far.
=== LAGRANGE MULTIPLIERS =================================================

Some optimizers report values of Lagrange multipliers on successful completion (positive completion code):
* dense and sparse IPM/GENIPM return very precise Lagrange multipliers as determined during the solution process.
* DENSE-AUL-QP returns approximate Lagrange multipliers (which are very close to the "true" Lagrange multipliers except for overconstrained or degenerate problems)

Three arrays of multipliers are returned:
* LagBC is array[N] which is loaded with multipliers from box constraints; LagBC[i]>0 means that the I-th constraint is at the upper bound, LagBC[i]<0 means that the I-th constraint is at the lower bound, LagBC[i]=0 means that the I-th box constraint is inactive.
* LagLC is array[MSparse+MDense] which is loaded with multipliers from general linear constraints (the first MSparse elements correspond to the sparse part of the constraint matrix, the last MDense are for the dense constraints, as specified by the user). LagLC[i]>0 means that the I-th constraint is at the upper bound, LagLC[i]<0 means that the I-th constraint is at the lower bound, LagLC[i]=0 means that the I-th linear constraint is inactive.
* LagQC is array[MQC] which stores multipliers for quadratic constraints. LagQC[i]>0 means that the I-th constraint is at the upper bound, LagQC[i]<0 means that the I-th constraint is at the lower bound, LagQC[i]=0 means that the I-th quadratic constraint is inactive.

On failure (or when the optimizer does not support Lagrange multipliers) these arrays are zero-filled.

It is expected that at the solution the dual feasibility condition holds:

    C + H*(Xs-X0) + SUM(Ei*LagBC[i], i=0..n-1) + SUM(Ai*LagLC[i], i=0..m-1) + ... ~ 0

where
* C is a linear term
* H is a quadratic term
* Xs is a solution, and X0 is an origin term (zero by default)
* Ei is a vector with 1.0 at position I and 0 in other positions
* Ai is the I-th row of the linear constraint matrix

NOTE: methods from the IPM family may also return meaningful Lagrange multipliers on completion with code -2 (infeasibility or unboundedness detected).
*************************************************************************/
class minqpreport
{
public:
    minqpreport();
    minqpreport(const minqpreport &rhs);
    minqpreport& operator=(const minqpreport &rhs);
    virtual ~minqpreport();
    ae_int_t      inneriterationscount;
    ae_int_t      outeriterationscount;
    ae_int_t      nmv;
    ae_int_t      ncholesky;
    ae_int_t      terminationtype;
    double        f;
    real_1d_array lagbc;
    real_1d_array laglc;
    real_1d_array lagqc;
};
/*************************************************************************
This object stores QP optimizer state.

You should use functions provided by the MinQP subpackage to work with
this object.
*************************************************************************/
class minqpstate
{
public:
    minqpstate();
    minqpstate(const minqpstate &rhs);
    minqpstate& operator=(const minqpstate &rhs);
    virtual ~minqpstate();
};
/*************************************************************************
This function appends the two-sided linear constraint AL <= A*x <= AU to
the list of currently present sparse constraints.

The constraint is passed in compressed format: as a list of non-zero
entries of the coefficient vector A. Such an approach is more efficient
than dense storage for highly sparse constraint vectors.

INPUT PARAMETERS:
    State   -   structure previously allocated with minqpcreate() call.
    IdxA    -   array[NNZ], indexes of non-zero elements of A:
                * can be unsorted
                * can include duplicate indexes (corresponding entries of
                  ValA[] will be summed)
    ValA    -   array[NNZ], values of non-zero elements of A
    NNZ     -   number of non-zero coefficients in A
    AL, AU  -   lower and upper bounds;
                * AL=AU    => equality constraint A*x=AL
                * AL<AU    => two-sided constraint AL<=A*x<=AU
                * AL=-INF  => one-sided constraint A*x<=AU
                * AU=+INF  => one-sided constraint AL<=A*x
                * AL=-INF, AU=+INF => constraint is ignored

  -- ALGLIB --
     Copyright 19.07.2018 by Bochkanov Sergey
*************************************************************************/
void minqpaddlc2(minqpstate &state, const integer_1d_array &idxa, const real_1d_array &vala, const ae_int_t nnz, const double al, const double au, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function appends the two-sided linear constraint AL <= A*x <= AU to
the matrix of currently present dense constraints.

INPUT PARAMETERS:
    State   -   structure previously allocated with minqpcreate() call.
    A       -   linear constraint coefficients, array[N]; the right side
                is NOT included.
    AL, AU  -   lower and upper bounds;
                * AL=AU    => equality constraint A*x=AL
                * AL<AU    => two-sided constraint AL<=A*x<=AU
                * AL=-INF  => one-sided constraint A*x<=AU
                * AU=+INF  => one-sided constraint AL<=A*x
                * AL=-INF, AU=+INF => constraint is ignored

  -- ALGLIB --
     Copyright 19.07.2018 by Bochkanov Sergey
*************************************************************************/
void minqpaddlc2dense(minqpstate &state, const real_1d_array &a, const double al, const double au, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function appends the two-sided linear constraint AL <= A*x <= AU to
the list of currently present sparse constraints.

The constraint vector A is passed as a dense array which is internally
sparsified by this function.

INPUT PARAMETERS:
    State   -   structure previously allocated with minqpcreate() call.
    DA      -   array[N], constraint vector
    AL, AU  -   lower and upper bounds;
                * AL=AU    => equality constraint A*x=AL
                * AL<AU    => two-sided constraint AL<=A*x<=AU
                * AL=-INF  => one-sided constraint A*x<=AU
                * AU=+INF  => one-sided constraint AL<=A*x
                * AL=-INF, AU=+INF => constraint is ignored

  -- ALGLIB --
     Copyright 19.07.2018 by Bochkanov Sergey
*************************************************************************/
void minqpaddlc2sparsefromdense(minqpstate &state, const real_1d_array &da, const double al, const double au, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function appends an axis-orthogonal power cone constraint of the form

         (          k-kp-1        )      k-1
    sqrt ( theta^2 + SUM  y[i]^2  )  <=  MUL  y[i]^alpha[i]
         (           i=0          )     i=k-kp

where
* y[i] = a[i]*x[idx[i]]+c[i], y[i]>=0
* 0<alpha[i]<1, with SUM(alpha[i])<=1
* 1<=kp<=k, with kp=k meaning that we have MUL(y[i]^alpha[i])>=|theta|

Alternatively, if the ApplyOrigin parameter is True, x[i] is replaced by
x[i]-origin[i] (applies to all variables).

Unlike many other conic solvers, ALGLIB provides a flexible conic API that
allows alpha[] to sum up to any positive value less than or equal to 1
(e.g. it is possible to formulate |x|<z^0.33 without using slack vars).
Furthermore, ALGLIB allows conic constraints to overlap, i.e. it allows a
variable to be a part of multiple conic constraints, or to appear multiple
times in the same constraint.

INPUT PARAMETERS:
    State   -   structure previously allocated with minqpcreate() call.
    Idx     -   array[K] (or larger, only leading K elements are used)
                storing variable indexes. Indexes can be unsorted and/or
                non-distinct.
    A       -   array[K] (or larger, only leading K elements are used),
                variable multipliers. Can contain zero values.
    C       -   array[K] (or larger, only leading K elements are used),
                variable shifts.
    K       -   cone dimensionality, K>=1. It is possible to have K>N.
    Theta   -   additional constant term, can be zero
    AlphaV  -   array[KPow], power coefficients:
                * 0<AlphaV[i]<=1
                * 0<SUM(AlphaV[])<=1
    KPow    -   1<=KPow<=K, with KPow=K being correctly handled.

RESULT:
    constraint index in a conic constraints list, starting from 0

NOTE: power cone constraints are always convex, so having them preserves
      convexity of the QP problem.

NOTE: A starting point that is strictly feasible with respect to both box
      and conic constraints greatly helps the solver to power up; however,
      it will work even without such a point, albeit at somewhat lower
      performance.

NOTE: a power cone with alpha<1 is sensitive to numerical errors near the
      origin.

      Suppose, for example, that we have a constraint of the form
      |y|<=x^alpha with alpha=0.25 and that x is zero at the solution.
      Furthermore, suppose that we perturbed x by as little as eps=1E-8.
      Because of alpha=0.25, the constraint is perturbed by as much as
      eps^(1/4)=0.01! Such great sensitivity is explained by the
      non-differentiability of x^alpha for alpha<1 near x=0. It does not
      prevent the conic solver from converging to the precise solution, it
      merely makes constraints extremely sensitive to small errors, e.g.
      ones introduced during the presolve/postsolve, as discussed below.

      The conic solver sometimes has to insert slack variables, most often
      because of constraints referring to the same right-hand side
      variable twice. For example, for
      sqrt(x0^2+x1^2)<=(y0-1)^0.5*(y0+1)^0.5 it will automatically add
      t=y0+1 (a linear equality constraint) and will rewrite the
      constraint as sqrt(x0^2+x1^2)<=(y0-1)^0.5*t^0.5. The solver will
      have no difficulty solving the problem, even if the optimal t is
      close to zero. However, the equality y0=t-1 will be satisfied only
      approximately, and even a tiny error will be greatly magnified when
      evaluating the constraint violation.

      Thus, one has to be very careful when evaluating constraint
      violation errors for power cones. Having a high error does not
      necessarily mean that the solver has failed.

  -- ALGLIB --
     Copyright 09.09.2024 by Bochkanov Sergey
*************************************************************************/
ae_int_t minqpaddpowccorthogonal(minqpstate &state, const integer_1d_array &idx, const real_1d_array &a, const real_1d_array &c, const ae_int_t k, const double theta, const real_1d_array &alphav, const ae_int_t kpow, const bool applyorigin, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This function appends a primitive power cone constraint of the form

    sqrt( x[range0]^2 + x[range0+1]^2 + ... + x[range1-1]^2 ) <= x[axisidx]^alpha

or, written in another form,

    ( sqrt( x[range0]^2 + x[range0+1]^2 + ... + x[range1-1]^2 ) )^(1/alpha) <= x[axisidx]

where 0<alpha<=1, with 'primitive' meaning that there are no per-variable
scales and that variables under the square root have sequential indexes. A
more general form of power cone constraints can be specified with
minqpaddpowccorthogonal().

Alternatively, if the ApplyOrigin parameter is True, x[i] is replaced by
x[i]-origin[i] (applies to all variables).

Unlike many other conic solvers, ALGLIB provides a flexible conic API that
allows alpha to be any positive value less than or equal to 1 (e.g. it is
possible to formulate |x|<z^0.33 without using slack vars that are fixed
at 1.0). Furthermore, ALGLIB allows conic constraints to overlap, i.e. it
allows a variable to be a part of multiple conic constraints.

INPUT PARAMETERS:
    State   -   structure previously allocated with minqpcreate() call.
    Range0,
    Range1  -   0<=range0<=range1<=N, variable range for the LHS:
                * squared variables x[range0]...x[range1-1] are summed up
                  under the square root
                * range0=range1 means that the constraint is interpreted
                  as x[AxisIdx]>=0
    AxisIdx -   RHS variable index:
                * 0<=AxisIdx<N
                * either AxisIdx<Range0 or AxisIdx>=Range1
    Alpha   -   power parameter, 0<alpha<=1

RESULT:
    constraint index in a conic constraints list, starting from 0

NOTE: power cone constraints are always convex, so having them preserves
      convexity of the QP problem.

NOTE: A starting point that is strictly feasible with respect to both box
      and conic constraints greatly helps the solver to power up; however,
      it will work even without such a point, albeit at somewhat lower
      performance.

NOTE: a power cone with alpha<1 is sensitive to numerical errors near the
      origin.

      Suppose, for example, that we have a constraint of the form
      |y|<=x^alpha with alpha=0.25 and that x is zero at the solution.
      Furthermore, suppose that we perturbed x by as little as eps=1E-8.
      Because of alpha=0.25, the constraint is perturbed by as much as
      eps^(1/4)=0.01! Such great sensitivity is explained by the
      non-differentiability of x^alpha for alpha<1 near x=0. It does not
      prevent the conic solver from converging to the precise solution, it
      merely makes constraints extremely sensitive to small errors, e.g.
      ones introduced during the presolve/postsolve, as discussed below.

      The conic solver sometimes has to insert slack variables, most often
      because of constraints referring to the same right-hand side
      variable twice. For example, for
      sqrt(x0^2+x1^2)<=(y0-1)^0.5*(y0+1)^0.5 it will automatically add
      t=y0+1 (a linear equality constraint) and will rewrite the
      constraint as sqrt(x0^2+x1^2)<=(y0-1)^0.5*t^0.5. The solver will
      have no difficulty solving the problem, even if the optimal t is
      close to zero. However, the equality y0=t-1 will be satisfied only
      approximately, and even a tiny error will be greatly magnified when
      evaluating the constraint violation.

      Thus, one has to be very careful when evaluating constraint
      violation errors for power cones. Having a high error does not
      necessarily mean that the solver has failed.

  -- ALGLIB --
     Copyright 19.11.2024 by Bochkanov Sergey
*************************************************************************/
ae_int_t minqpaddpowccprimitive(minqpstate &state, const ae_int_t range0, const ae_int_t range1, const ae_int_t axisidx, const double alpha, const bool applyorigin, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This function appends a two-sided quadratic constraint of the form

    CL <= b'x + 0.5*x'*Q*x <= CU

or (depending on the ApplyOrigin parameter)

    CL <= b'(x-origin) + 0.5*(x-origin)'*Q*(x-origin) <= CU

to the set of currently present constraints. The linear term is given by a
dense array, the quadratic term is given by a sparse array.

Here CL can be finite or -INF (absence of a constraint), CU can be finite
or +INF (absence of a constraint), CL<=CU, with CL=CU denoting an equality
constraint. Q is an arbitrary (including indefinite) symmetric matrix.

The function has O(max(N,NNZ)) memory and time requirements because a
dense array is used to store the linear term and because most sparse
matrix storage formats supported by ALGLIB need at least O(N) memory even
for an empty quadratic constraint matrix. Use minqpaddqc2list() if you
have to add many constraints with much less than N non-zero elements.

IMPORTANT: ALGLIB supports arbitrary quadratic constraints, including
           nonconvex ones. However, only convex constraints (combined
           with a convex objective) result in guaranteed convergence to
           the global minimizer. In all other cases, only local
           convergence to a local minimum is guaranteed.

           A convex constraint is a constraint of the following form:
           b'*(x-origin) + 0.5*(x-origin)'*Q*(x-origin) <= CU, with Q
           being a semidefinite matrix. All other modifications are
           nonconvex:
           * -x0^2<=1 is nonconvex
           *  x0^2>=1 is nonconvex (despite Q=1 being positive definite)
           *  x0^2 =1 is nonconvex

           The latter case is notable because it effectively converts a
           QP problem into a mixed integer QP program. A smooth interior
           point solver cannot efficiently handle such programs,
           converging to a randomly chosen x0 (either +1 or -1) and
           keeping its value fixed during the optimization. It is also
           notable that larger equality constraints (e.g. x0^2+x1^2=1)
           are much less difficult to handle because they form large
           connected regions within the parameter space.

INPUT PARAMETERS:
    State   -   structure previously allocated with minqpcreate() call.
    Q       -   symmetric matrix Q in a sparse matrix storage format:
                * if IsUpper=True, then the upper triangle is given, and
                  the lower triangle is ignored
                * if IsUpper=False, then the lower triangle is given, and
                  the upper triangle is ignored
                * any sparse matrix storage format present in ALGLIB is
                  supported
                * the matrix must be exactly NxN
    IsUpper -   whether the upper or lower triangle of Q is used
    B       -   array[N], linear term
    CL, CU  -   lower and upper bounds:
                * CL can be finite or -INF (absence of a bound)
                * CU can be finite or +INF (absence of a bound)
                * CL<=CU, with CL=CU meaning an equality constraint
                * CL=-INF, CU=+INF => constraint is ignored
    ApplyOrigin - whether the origin (as specified by minqpsetorigin) is
                applied to the constraint or not. If no origin was
                specified, this parameter has no effect.

RESULT:
    constraint index, starting from 0

  -- ALGLIB --
     Copyright 19.07.2024 by Bochkanov Sergey
*************************************************************************/
ae_int_t minqpaddqc2(minqpstate &state, const sparsematrix &q, const bool isupper, const real_1d_array &b, const double cl, const double cu, const bool applyorigin, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function appends a two-sided quadratic constraint of the form

    CL <= b'x + 0.5*x'*Q*x <= CU

or (depending on the ApplyOrigin parameter)

    CL <= b'(x-origin) + 0.5*(x-origin)'*Q*(x-origin) <= CU

to the set of currently present constraints. The linear and quadratic
terms are given by dense arrays.

Here CL can be finite or -INF (absence of a constraint), CU can be finite
or +INF (absence of a constraint), CL<=CU, with CL=CU denoting an equality
constraint. Q is an arbitrary (including indefinite) symmetric matrix.

This function trades the convenience of dense arrays for efficiency.
Because dense NxN storage is used, merely calling this function has O(N^2)
complexity, no matter how sparse Q is. Use minqpaddqc2() or
minqpaddqc2list() if you have a sparse Q and/or many constraints to
handle.

IMPORTANT: ALGLIB supports arbitrary quadratic constraints, including
           nonconvex ones. However, only convex constraints (combined
           with a convex objective) result in guaranteed convergence to
           the global minimizer. In all other cases, only local
           convergence to a local minimum is guaranteed.

           A convex constraint is a constraint of the following form:
           b'*(x-origin) + 0.5*(x-origin)'*Q*(x-origin) <= CU, with Q
           being a semidefinite matrix. All other modifications are
           nonconvex:
           * -x0^2<=1 is nonconvex
           *  x0^2>=1 is nonconvex (despite Q=1 being positive definite)
           *  x0^2 =1 is nonconvex

           The latter case is notable because it effectively converts a
           QP problem into a mixed integer QP program. A smooth interior
           point solver cannot efficiently handle such programs,
           converging to a randomly chosen x0 (either +1 or -1) and
           keeping its value fixed during the optimization. It is also
           notable that larger equality constraints (e.g. x0^2+x1^2=1)
           are much less difficult to handle because they form large
           connected regions within the parameter space.

INPUT PARAMETERS:
    State   -   structure previously allocated with minqpcreate() call.
    Q       -   array[N,N], symmetric matrix Q:
                * if IsUpper=True, then the upper triangle is given, and
                  the lower triangle is ignored
                * if IsUpper=False, then the lower triangle is given, and
                  the upper triangle is ignored
                * if more than N rows/cols are present, only the leading N
                  elements are used
    IsUpper -   whether the upper or lower triangle of Q is used
    B       -   array[N], linear term
    CL, CU  -   lower and upper bounds:
                * CL can be finite or -INF (absence of a bound)
                * CU can be finite or +INF (absence of a bound)
                * CL<=CU, with CL=CU meaning an equality constraint
                * CL=-INF, CU=+INF => constraint is ignored
    ApplyOrigin - whether the origin (as specified by minqpsetorigin) is
                applied to the constraint or not. If no origin was
                specified, this parameter has no effect.

RESULT:
    constraint index, starting from 0

  -- ALGLIB --
     Copyright 19.06.2024 by Bochkanov Sergey
*************************************************************************/
ae_int_t minqpaddqc2dense(minqpstate &state, const real_2d_array &q, const bool isupper, const real_1d_array &b, const double cl, const double cu, const bool applyorigin, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function appends a two-sided quadratic constraint of the form

    CL <= b'x + 0.5*x'*Q*x <= CU

or (depending on the ApplyOrigin parameter)

    CL <= b'(x-origin) + 0.5*(x-origin)'*Q*(x-origin) <= CU

to the set of currently present constraints. Both linear and quadratic
terms are given as lists of non-zero entries.

Here CL can be finite or -INF (absence of a constraint), CU can be finite
or +INF (absence of a constraint), CL<=CU, with CL=CU denoting an equality
constraint. Q is an arbitrary (including indefinite) symmetric matrix.

The function needs O(NNZ) memory for temporaries and O(NNZ*logNNZ) time,
where NNZ is the total number of non-zeros in both lists. For small
constraints it can be orders of magnitude faster than minqpaddqc2() with
its O(max(N,NNZ)) temporary memory or minqpaddqc2dense() with its O(N^2)
temporaries. Thus, it is recommended if you have many small constraints.

NOTE: in the end, all quadratic constraints are stored in the same
      memory-efficient compressed format. However, you have to allocate
      an NxN temporary dense matrix when you pass a constraint using
      minqpaddqc2dense(). Similarly, data structures used as a part of
      the API provided by minqpaddqc2() have O(N) temporary memory
      requirements.

IMPORTANT: ALGLIB supports arbitrary quadratic constraints, including
           nonconvex ones. However, only convex constraints (combined
           with a convex objective) result in guaranteed convergence to
           the global minimizer. In all other cases, only local
           convergence to a local minimum is guaranteed.

           A convex constraint is a constraint of the following form:
           b'*(x-origin) + 0.5*(x-origin)'*Q*(x-origin) <= CU, with Q
           being a semidefinite matrix. All other modifications are
           nonconvex:
           * -x0^2<=1 is nonconvex
           *  x0^2>=1 is nonconvex (despite Q=1 being positive definite)
           *  x0^2 =1 is nonconvex

           The latter case is notable because it effectively converts a
           QP problem into a mixed integer QP program. A smooth interior
           point solver cannot efficiently handle such programs,
           converging to a randomly chosen x0 (either +1 or -1) and
           keeping its value fixed during the optimization. It is also
           notable that larger equality constraints (e.g. x0^2+x1^2=1)
           are much less difficult to handle because they form large
           connected regions within the parameter space.

INPUT PARAMETERS:
    State   -   structure previously allocated with minqpcreate() call.
    QRIdx   -   array[QNNZ], row indexes of QNNZ non-zero elements of a
                symmetric matrix Q
    QCIdx   -   array[QNNZ], col indexes of QNNZ non-zero elements of a
                symmetric matrix Q
    QVals   -   array[QNNZ], values of QNNZ non-zero elements of a
                symmetric matrix Q
    QNNZ    -   number of non-zero elements in Q, QNNZ>=0
    IsUpper -   whether the upper or lower triangle of Q is used:
                * if IsUpper=True, then only elements with
                  QRIdx[I]<=QCIdx[I] are used and the rest is ignored
                * if IsUpper=False, then only elements with
                  QRIdx[I]>=QCIdx[I] are used and the rest is ignored
    BIdx    -   array[BNNZ], indexes of BNNZ non-zero elements of the
                linear term
    BVals   -   array[BNNZ], values of BNNZ non-zero elements of the
                linear term
    BNNZ    -   number of non-zero elements in B, BNNZ>=0
    CL, CU  -   lower and upper bounds:
                * CL can be finite or -INF (absence of a bound)
                * CU can be finite or +INF (absence of a bound)
                * CL<=CU, with CL=CU meaning an equality constraint
                * CL=-INF, CU=+INF => constraint is ignored
    ApplyOrigin - whether the origin (as specified by minqpsetorigin) is
                applied to the constraint or not. If no origin was
                specified, this parameter has no effect.

RESULT:
    constraint index, starting from 0

  -- ALGLIB --
     Copyright 19.07.2024 by Bochkanov Sergey
*************************************************************************/
ae_int_t minqpaddqc2list(minqpstate &state, const integer_1d_array &qridx, const integer_1d_array &qcidx, const real_1d_array &qvals, const ae_int_t qnnz, const bool isupper, const integer_1d_array &bidx, const real_1d_array &bvals, const ae_int_t bnnz, const double cl, const double cu, const bool applyorigin, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function appends an axis-orthogonal second-order conic constraint of
the form

    sqrt( SUM(i=0..k-2) (a[i]*x[idx[i]]+c[i])^2 + theta^2 ) <= a[k-1]*x[idx[k-1]]+c[k-1]

Alternatively, if the ApplyOrigin parameter is True, x[i] is replaced by
x[i]-origin[i] (applies to all variables).

Unlike many other conic solvers, ALGLIB provides a flexible conic API that
allows a[] to have zero elements at arbitrary positions (e.g., |x|<=const
can be handled just as easily as |x|<=y). Furthermore, ALGLIB allows conic
constraints to overlap, i.e. it allows a variable to be a part of multiple
conic constraints, or to appear multiple times in the same constraint.

NOTE: second-order conic constraints are always convex, so having them
      preserves convexity of the QP problem.

NOTE: A starting point that is strictly feasible with respect to both box
      and conic constraints greatly helps the solver to power up; however,
      it will work even without such a point, albeit at somewhat lower
      performance.

INPUT PARAMETERS:
    State   -   structure previously allocated with minqpcreate() call.
    Idx     -   array[K] (or larger, only leading K elements are used)
                storing variable indexes. Indexes can be unsorted and/or
                non-distinct.
    A       -   array[K] (or larger, only leading K elements are used),
                variable multipliers. Can contain zero values.
    C       -   array[K] (or larger, only leading K elements are used),
                variable shifts.
    K       -   cone dimensionality, K>=1. It is possible to have K>N.
    Theta   -   additional constant term, can be zero

RESULT:
    constraint index in a conic constraints list, starting from 0

  -- ALGLIB --
     Copyright 09.09.2024 by Bochkanov Sergey
*************************************************************************/
ae_int_t minqpaddsoccorthogonal(minqpstate &state, const integer_1d_array &idx, const real_1d_array &a, const real_1d_array &c, const ae_int_t k, const double theta, const bool applyorigin, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This function appends a primitive second-order conic constraint of the
form

    sqrt( x[range0]^2 + x[range0+1]^2 + ... + x[range1-1]^2 ) <= x[axisidx]

with 'primitive' meaning that there are no per-variable scales and that
variables under the square root have sequential indexes. A more general
form of conic constraints can be specified with minqpaddsoccorthogonal().

Alternatively, if the ApplyOrigin parameter is True, x[i] is replaced by
x[i]-origin[i] (applies to all variables).

Unlike many other conic solvers, ALGLIB allows conic constraints to
overlap, i.e. it allows a variable to be a part of multiple conic
constraints.

NOTE: second-order conic constraints are always convex, so having them
      preserves convexity of the QP problem.

NOTE: A starting point that is strictly feasible with respect to both box
      and conic constraints greatly helps the solver to power up; however,
      it will work even without such a point, albeit at somewhat lower
      performance.

INPUT PARAMETERS:
    State   -   structure previously allocated with minqpcreate() call.
    Range0,
    Range1  -   0<=range0<=range1<=N, variable range for the LHS:
                * squared variables x[range0]...x[range1-1] are summed up
                  under the square root
                * range0=range1 means that the constraint is interpreted
                  as x[AxisIdx]>=0
    AxisIdx -   RHS variable index:
                * 0<=AxisIdx<N
                * either AxisIdx<Range0 or AxisIdx>=Range1

RESULT:
    constraint index in a conic constraints list, starting from 0

  -- ALGLIB --
     Copyright 09.09.2024 by Bochkanov Sergey
*************************************************************************/
ae_int_t minqpaddsoccprimitive(minqpstate &state, const ae_int_t range0, const ae_int_t range1, const ae_int_t axisidx, const bool applyorigin, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This function clears the list of conic constraints. Other constraints are
not modified.

INPUT PARAMETERS:
    State   -   structure previously allocated with minqpcreate() call.

  -- ALGLIB --
     Copyright 19.06.2024 by Bochkanov Sergey
*************************************************************************/
void minqpclearcc(minqpstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function clears the list of quadratic constraints. Other constraints
are not modified.

INPUT PARAMETERS:
    State   -   structure previously allocated with minqpcreate() call.

  -- ALGLIB --
     Copyright 19.06.2024 by Bochkanov Sergey
*************************************************************************/
void minqpclearqc(minqpstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
                    CONSTRAINED QUADRATIC PROGRAMMING

The subroutine creates a QP optimizer. After initial creation, it contains
a default optimization problem with zero quadratic and linear terms and no
constraints.

In order to actually solve something you should:

specify the objective:
* set the linear term with minqpsetlinearterm()
* set the quadratic term with minqpsetquadraticterm() or
  minqpsetquadratictermsparse()

specify constraints:
* set variable bounds with minqpsetbc() or minqpsetbcall()
* specify the linear constraint matrix with one of the following
  functions:
  * modern API:
    * minqpsetlc2() for sparse two-sided constraints AL <= A*x <= AU
    * minqpsetlc2dense() for dense two-sided constraints AL <= A*x <= AU
    * minqpsetlc2mixed() for mixed two-sided constraints AL <= A*x <= AU
    * minqpaddlc2dense() to add one dense row to the dense constraint
      submatrix
    * minqpaddlc2() to add one sparse row to the sparse constraint
      submatrix
    * minqpaddlc2sparsefromdense() to add one sparse row (passed as a
      dense array) to the sparse constraint submatrix
  * legacy API:
    * minqpsetlc() for dense one-sided equality/inequality constraints
    * minqpsetlcsparse() for sparse one-sided equality/inequality
      constraints
    * minqpsetlcmixed() for mixed dense/sparse one-sided
      equality/inequality constraints
* add two-sided quadratic constraint(s) of the form
  CL <= b'x+0.5*x'Qx <= CU with one of the following functions:
  * minqpaddqc2() for a quadratic constraint given by a sparse matrix
    structure; has O(max(N,NNZ)) memory and running time requirements
  * minqpaddqc2dense() for a quadratic constraint given by a dense
    matrix; has O(N^2) memory and running time requirements
  * minqpaddqc2list() for a sparse quadratic constraint given by a list
    of non-zero entries; has O(NNZ) memory and O(NNZ*logNNZ) running
    time requirements, ideal for constraints with much less than N
    non-zero elements
* add second-order cone constraints with:
  * minqpaddsoccprimitive() for a primitive second-order cone constraint
  * minqpaddsoccorthogonal() for an axis-orthogonal second-order cone
    constraint
* add power cone constraints with:
  * minqpaddpowccprimitive() for a primitive power cone constraint
  * minqpaddpowccorthogonal() for an axis-orthogonal power cone
    constraint

configure and run the QP solver:
* choose an appropriate QP solver and set it and its stopping criteria by
  means of a minqpsetalgo??????() function
* call minqpoptimize() to run the solver and minqpresults() to get the
  solution vector and additional information

The following solvers are recommended for convex and semidefinite
problems with box and linear constraints:
* QuickQP for dense problems with box-only constraints (or no constraints
  at all)
* DENSE-IPM-QP for convex or semidefinite problems with a medium (up to
  several thousands) variable count, a dense/sparse quadratic term and
  any number (up to many thousands) of dense/sparse general linear
  constraints
* SPARSE-IPM-QP for convex or semidefinite problems with a large (many
  thousands) variable count, a sparse quadratic term AND linear
  constraints
* SPARSE-ECQP for convex problems having only linear equality
  constraints. This specialized solver can be orders of magnitude faster
  than IPM.

If your problem happens to be nonconvex or has nonlinear constraints,
then you can use:
* the DENSE-GENIPM or SPARSE-GENIPM solver which supports
  convex/nonconvex QP problems with box, linear, quadratic
  equality/inequality and conic constraints
* QuickQP for small dense nonconvex problems with box-only constraints

INPUT PARAMETERS:
    N       -   problem size

OUTPUT PARAMETERS:
    State   -   optimizer with zero quadratic/linear terms and no
                constraints

  -- ALGLIB --
     Copyright 11.01.2011 by Bochkanov Sergey
*************************************************************************/
void minqpcreate(const ae_int_t n, minqpstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  [5]  [6]  

/*************************************************************************
Exports the current QP problem stored in the solver into a QPXProblem
instance. This instance can be serialized into an ALGLIB-specific format
and unserialized from several widely acknowledged formats.

INPUT PARAMETERS:
    State   -   algorithm state

OUTPUT PARAMETERS:
    P       -   QPXProblem instance storing the current objective and
                constraints.

  -- ALGLIB --
     Copyright 25.08.2024 by Bochkanov Sergey
*************************************************************************/
void minqpexport(minqpstate &state, qpxproblem &p, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Imports a QP problem, as defined by a QPXProblem instance, creating a QP
solver with the objective/constraints/scales/origin set to those stored
in the instance.

INPUT PARAMETERS:
    P       -   QPXProblem instance storing the current objective and
                constraints.

OUTPUT PARAMETERS:
    State   -   newly created solver

  -- ALGLIB --
     Copyright 25.08.2024 by Bochkanov Sergey
*************************************************************************/
void minqpimport(qpxproblem &p, minqpstate &s, const xparams _xparams = alglib::xdefault);
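A sketch of an export/import round trip: the problem held by one solver instance is snapshotted into a QPXProblem and used to seed a second, independently configured solver. The objective/constraint setters elided below are documented elsewhere in this manual:

```cpp
#include "optimization.h"
using namespace alglib;

int main()
{
    minqpstate state;
    minqpcreate(2, state);      // 2-variable problem, zero terms so far
    // ... set objective, constraints, scales on 'state' here ...

    qpxproblem p;
    minqpexport(state, p);      // snapshot current objective and constraints

    minqpstate state2;
    minqpimport(p, state2);     // new solver preloaded with the same problem
    return 0;
}
```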
/************************************************************************* This function solves the quadratic programming problem. Prior to calling this function you should choose a solver by means of one of the following functions: * minqpsetalgoquickqp() - for QuickQP solver * minqpsetalgodenseaul() - for Dense-AUL-QP solver * minqpsetalgodenseipm() - for convex Dense-IPM-QP solver * minqpsetalgosparseipm() - for convex Sparse-IPM-QP solver * minqpsetalgodensegenipm() - for convex/nonconvex Dense-IPM-QP solver with conic constraints * minqpsetalgosparsegenipm()- for convex/nonconvex Sparse-IPM-QP solver with conic constraints These functions also allow you to control stopping criteria of the solver. If you did not set a solver, the MinQP subpackage will automatically select one for your problem and run it with default stopping criteria. However, it is better to explicitly set the solver and its stopping criteria. INPUT PARAMETERS: State - algorithm state You should use the MinQPResults() function to access results after calls to this function. -- ALGLIB -- Copyright 2011-2024 by Bochkanov Sergey. Special thanks to Elvira Illarionova for important suggestions on the linearly constrained QP algorithm. *************************************************************************/
void minqpoptimize(minqpstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  [5]  [6]  

/************************************************************************* QP solver results INPUT PARAMETERS: State - algorithm state OUTPUT PARAMETERS: X - array[0..N-1], solution (on failure - the best point found so far). Rep - optimization report, contains: * completion code in Rep.TerminationType (positive values denote some kind of success, negative - failures) * Lagrange multipliers - for QP solvers which support them * other statistics See comments on minqpreport structure for more information Following completion codes are returned in Rep.TerminationType: * -9 failure of the automatic scale evaluation: one of the diagonal elements of the quadratic term is non-positive. Specify variable scales manually! * -5 inappropriate solver was used: * QuickQP solver for problem with general linear constraints * QuickQP/DENSE-AUL/DENSE-IPM/SPARSE-IPM for a problem with quadratic/conic constraints * -4 the function is unbounded from below even under constraints, no meaningful minimum can be found. * -3 inconsistent constraints (or, maybe, feasible point is too hard to find). * -2 IPM solver has difficulty finding primal/dual feasible point. It is likely that the problem is either infeasible or unbounded, but it is difficult to determine exact reason for termination. X contains best point found so far. * >0 success * 7 stopping conditions are too stringent, further improvement is impossible, X contains best point found so far. -- ALGLIB -- Copyright 11.01.2011 by Bochkanov Sergey *************************************************************************/
void minqpresults(const minqpstate &state, real_1d_array &x, minqpreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  [5]  [6]  

/************************************************************************* QP results Buffered implementation of MinQPResults() which uses pre-allocated buffer to store X[]. If buffer size is too small, it resizes buffer. It is intended to be used in the inner cycles of performance critical algorithms where array reallocation penalty is too large to be ignored. -- ALGLIB -- Copyright 11.01.2011 by Bochkanov Sergey *************************************************************************/
void minqpresultsbuf(const minqpstate &state, real_1d_array &x, minqpreport &rep, const xparams _xparams = alglib::xdefault);
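The buffered variant is useful when the same solver is driven repeatedly; a sketch (assuming 'state' was already set up with minqpcreate() and a solver was selected):

```cpp
#include "optimization.h"
using namespace alglib;

// Solve a sequence of related QPs, reusing output buffers between calls.
void solve_sequence(minqpstate &state, int count)
{
    real_1d_array x;    // grown once by minqpresultsbuf(), then reused
    minqpreport  rep;
    for(int k=0; k<count; k++)
    {
        // ... adjust linear term / bounds for the k-th subproblem here ...
        minqpoptimize(state);
        minqpresultsbuf(state, x, rep);  // no reallocation once x is large enough
    }
}
```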
/************************************************************************* This function tells QP solver to use DENSE-AUL algorithm and sets stopping criteria for the algorithm. This algorithm is intended for non-convex problems with moderate (up to several thousands) variable count and arbitrary number of constraints which are either (a) effectively convexified under constraints or (b) have unique solution even with nonconvex target. IMPORTANT: when DENSE-IPM solver is applicable, its performance is usually much better than that of DENSE-AUL. We recommend you to use DENSE-AUL only when other solvers cannot be used. ALGORITHM FEATURES: * supports box and dense/sparse general linear equality/inequality constraints * convergence is theoretically proved for positive-definite (convex) QP problems. Semidefinite and non-convex problems can be solved as long as they are bounded from below under constraints, although without theoretical guarantees. ALGORITHM OUTLINE: * this algorithm is an augmented Lagrangian method with dense preconditioner (hence its name). * it performs several outer iterations in order to refine values of the Lagrange multipliers. A single outer iteration is a solution of some unconstrained optimization problem: first it performs dense Cholesky factorization of the Hessian in order to build preconditioner (adaptive regularization is applied to enforce positive definiteness), and then it uses L-BFGS optimizer to solve optimization problem. * typically you need about 5-10 outer iterations to converge to solution ALGORITHM LIMITATIONS: * because dense Cholesky driver is used, this algorithm has O(N^2) memory requirements and O(OuterIterations*N^3) minimum running time. From the practical point of view, this limits its applicability to several thousands of variables. On the other hand, the variable count is the most limiting factor, and the dependence on the constraint count is much weaker. 
Assuming that the constraint matrix is sparse, it may handle tens of thousands of general linear constraints. INPUT PARAMETERS: State - structure which stores algorithm state EpsX - >=0, stopping criteria for inner optimizer. Inner iterations are stopped when step length (with variable scaling being applied) is less than EpsX. See minqpsetscale() for more information on variable scaling. Rho - penalty coefficient, Rho>0: * large enough that algorithm converges with desired precision. * not TOO large to prevent ill-conditioning * recommended values are 100, 1000 or 10000 ItsCnt - number of outer iterations: * recommended values: 10-15 (although in most cases it converges within 5 iterations, you may need a few more to be sure). * ItsCnt=0 means that a small number of outer iterations is automatically chosen (10 iterations in current version). * ItsCnt=1 means that AUL algorithm performs just like the usual penalty method. * ItsCnt>1 means that AUL algorithm performs the specified number of outer iterations IT IS VERY IMPORTANT TO CALL minqpsetscale() WHEN YOU USE THIS ALGORITHM BECAUSE ITS CONVERGENCE PROPERTIES AND STOPPING CRITERIA ARE SCALE-DEPENDENT! NOTE: Passing EpsX=0 will lead to automatic step length selection (specific step length chosen may change in the future versions of ALGLIB, so it is better to specify step length explicitly). -- ALGLIB -- Copyright 20.08.2016 by Bochkanov Sergey *************************************************************************/
void minqpsetalgodenseaul(minqpstate &state, const double epsx, const double rho, const ae_int_t itscnt, const xparams _xparams = alglib::xdefault);
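A sketch of a DENSE-AUL configuration with the scale-setting call the comment above insists on; Rho=1000 and ItsCnt=0 follow the recommendations in the parameter descriptions, while the objective/constraint setters are elided:

```cpp
#include "optimization.h"
using namespace alglib;

int main()
{
    minqpstate state;
    minqpcreate(3, state);
    // ... set quadratic/linear terms and constraints here ...

    real_1d_array s = "[1,1,1]";
    minqpsetscale(state, s);   // REQUIRED: AUL stopping criteria are scale-dependent

    // EpsX=1.0e-9, Rho=1000 (recommended range 100..10000), ItsCnt=0 lets
    // ALGLIB choose the number of outer iterations (currently 10).
    minqpsetalgodenseaul(state, 1.0e-9, 1.0e+3, 0);
    minqpoptimize(state);
    return 0;
}
```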
/************************************************************************* This function tells QP solver to use DENSE-GENIPM QP algorithm and sets stopping criteria for the algorithm. This algorithm is intended for convex/nonconvex box/linearly/conically constrained QP problems with moderate (up to several thousands) variable count and arbitrary number of constraints. Use SPARSE-GENIPM if your problem is sparse. The algorithm is a generalization of DENSE-IPM solver, capable of handling more general constraints as well as nonconvexity of the target. In the latter case, a local solution is found. IMPORTANT: the commercial edition of ALGLIB can parallelize this function. See the ALGLIB Reference Manual for more information on how to activate parallelism support. ALGORITHM FEATURES: * supports box, linear equality/inequality and conic constraints * for convex problems returns the global (and the only) solution * can handle non-convex problems (only a locally optimal solution is returned in this case) ALGORITHM LIMITATIONS: * because a dense Cholesky driver is used, for N-dimensional problem with M dense constraints this algorithm has O(N^2+N*M) memory requirements and O(N^3+M*N^2) running time. Having sparse constraints with Z nonzeros per row relaxes storage and running time down to O(N^2+M*Z) and O(N^3+M*Z^2). From the practical point of view, this limits its applicability to several thousands of variables. On the other hand, the variable count is the most limiting factor, and the dependence on the constraint count is much weaker. Assuming that the constraint matrix is sparse, it may handle tens of thousands of general linear constraints. INPUT PARAMETERS: State - structure which stores algorithm state Eps - >=0, stopping criteria. The algorithm stops when primal and dual infeasibilities as well as complementarity gap are less than Eps. 
IT IS VERY IMPORTANT TO CALL minqpsetscale() WHEN YOU USE THIS ALGORITHM BECAUSE ITS CONVERGENCE PROPERTIES AND STOPPING CRITERIA ARE SCALE-DEPENDENT! NOTE: Passing Eps=0 will lead to automatic selection of small epsilon. ===== TRACING GENIPM SOLVER ============================================== GENIPM solver supports advanced tracing capabilities. You can log algorithm output by specifying following trace symbols (case-insensitive) by means of trace_file() call: * 'GENIPM' - for basic trace of algorithm steps and decisions. Only short scalars (function values and deltas) are printed. N-dimensional quantities like search directions are NOT printed. * 'GENIPM.DETAILED'- for output of points being visited and search directions This symbol also implicitly defines 'GENIPM'. You can control output format by additionally specifying: * nothing to output in 6-digit exponential format * 'PREC.E15' to output in 15-digit exponential format * 'PREC.F6' to output in 6-digit fixed-point format By default trace is disabled and adds no overhead to the optimization process. However, specifying any of the symbols adds some formatting and output-related overhead. You may specify multiple symbols by separating them with commas: > > alglib::trace_file("GENIPM,PREC.F6", "path/to/trace.log") > -- ALGLIB -- Copyright 01.05.2024 by Bochkanov Sergey *************************************************************************/
void minqpsetalgodensegenipm(minqpstate &state, const double eps, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This function tells QP solver to use DENSE-IPM QP algorithm and sets stopping criteria for the algorithm. This algorithm is intended for convex and semidefinite QP (but not QCQP or conic) problems with moderate (up to several thousands) variable count and arbitrary number of linear constraints. Quadratic and conic constraints are supported by another solver (DENSE-GENIPM). IMPORTANT: the commercial edition of ALGLIB can parallelize this function. See the ALGLIB Reference Manual for more information on how to activate parallelism support. IMPORTANT: this algorithm is likely to fail on nonconvex problems, furthermore, sometimes it fails without notice. If you try to run DENSE-IPM on a problem with indefinite matrix (a matrix having at least one negative eigenvalue) then depending on the circumstances it may either (a) stall at some arbitrary point, or (b) throw an exception due to the failure of the Cholesky decomposition. Use GENIPM algorithm if your problem is nonconvex or has a potential of becoming nonconvex. The GENIPM solver can also handle problems with quadratic and conic constraints. ALGORITHM FEATURES: * supports box and dense/sparse general linear equality/inequality constraints ALGORITHM OUTLINE: * this algorithm is our implementation of interior point method as formulated by R.J.Vanderbei, with minor modifications to the algorithm (damped Newton directions are extensively used) * like all interior point methods, this algorithm tends to converge in roughly the same number of iterations (between 15 and 50) independently from the problem dimensionality ALGORITHM LIMITATIONS: * because a dense Cholesky driver is used, for N-dimensional problem with M dense constraints this algorithm has O(N^2+N*M) memory requirements and O(N^3+M*N^2) running time. 
Having sparse constraints with Z nonzeros per row relaxes storage and running time down to O(N^2+M*Z) and O(N^3+M*Z^2). From the practical point of view, this limits its applicability to several thousands of variables. On the other hand, the variable count is the most limiting factor, and the dependence on the constraint count is much weaker. Assuming that the constraint matrix is sparse, it may handle tens of thousands of general linear constraints. INPUT PARAMETERS: State - structure which stores algorithm state Eps - >=0, stopping criteria. The algorithm stops when primal and dual infeasibilities as well as complementarity gap are less than Eps. IT IS VERY IMPORTANT TO CALL minqpsetscale() WHEN YOU USE THIS ALGORITHM BECAUSE ITS CONVERGENCE PROPERTIES AND STOPPING CRITERIA ARE SCALE-DEPENDENT! NOTE: Passing Eps=0 will lead to automatic selection of small epsilon. ===== TRACING IPM SOLVER ================================================= IPM solver supports advanced tracing capabilities. You can trace algorithm output by specifying following trace symbols (case-insensitive) by means of trace_file() call: * 'IPM' - for basic trace of algorithm steps and decisions. Only short scalars (function values and deltas) are printed. N-dimensional quantities like search directions are NOT printed. * 'IPM.DETAILED'- for output of points being visited and search directions This symbol also implicitly defines 'IPM'. You can control output format by additionally specifying: * nothing to output in 6-digit exponential format * 'PREC.E15' to output in 15-digit exponential format * 'PREC.F6' to output in 6-digit fixed-point format By default trace is disabled and adds no overhead to the optimization process. However, specifying any of the symbols adds some formatting and output-related overhead. 
You may specify multiple symbols by separating them with commas: > > alglib::trace_file("IPM,PREC.F6", "path/to/trace.log") > -- ALGLIB -- Copyright 01.11.2019 by Bochkanov Sergey *************************************************************************/
void minqpsetalgodenseipm(minqpstate &state, const double eps, const xparams _xparams = alglib::xdefault);
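A convex DENSE-IPM sketch with one general linear constraint set via minqpsetlc() (documented elsewhere in this manual; each row of C holds the constraint coefficients followed by the right-hand side, and CT[i]>0 denotes a ">=" constraint):

```cpp
#include "optimization.h"
using namespace alglib;

int main()
{
    // Convex problem: f(x) = 0.5*(x0^2 + x1^2), subject to x0 + x1 >= 2.
    minqpstate state;
    minqpcreate(2, state);
    real_2d_array a = "[[1,0],[0,1]]";
    minqpsetquadraticterm(state, a);

    real_2d_array    c  = "[[1,1,2]]";   // coefficients + right-hand side
    integer_1d_array ct = "[1]";         // ">=" constraint
    minqpsetlc(state, c, ct);

    real_1d_array s = "[1,1]";
    minqpsetscale(state, s);             // IPM criteria are scale-dependent
    minqpsetalgodenseipm(state, 1.0e-9);
    minqpoptimize(state);

    real_1d_array x;
    minqpreport  rep;
    minqpresults(state, x, rep);         // by symmetry the solution is (1,1)
    return 0;
}
```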
/************************************************************************* This function tells solver to use QuickQP algorithm: special extra-fast algorithm for problems with box-only constraints. It may solve non-convex problems as long as they are bounded from below under constraints. ALGORITHM FEATURES: * several times faster than DENSE-IPM when running on box-only problem * utilizes accelerated methods for activation of constraints. * supports dense and sparse QP problems * supports ONLY box constraints; general linear constraints are NOT supported by this solver * can solve all types of problems (convex, semidefinite, nonconvex) as long as they are bounded from below under constraints. Say, it is possible to solve "min{-x^2} subject to -1<=x<=+1". In convex/semidefinite case global minimum is returned, in nonconvex case - algorithm returns one of the local minima. ALGORITHM OUTLINE: * algorithm performs two kinds of iterations: constrained CG iterations and constrained Newton iterations * initially it performs small number of constrained CG iterations, which can efficiently activate/deactivate multiple constraints * after CG phase algorithm tries to calculate Cholesky decomposition and to perform several constrained Newton steps. If Cholesky decomposition failed (matrix is indefinite even under constraints), we perform more CG iterations until we converge to such set of constraints that system matrix becomes positive definite. Constrained Newton steps greatly increase convergence speed and precision. * algorithm interleaves CG and Newton iterations which allows it to handle indefinite matrices (CG phase) and quickly converge after final set of constraints is found (Newton phase). Combination of CG and Newton phases is called "outer iteration". 
* it is possible to turn off Newton phase (beneficial for semidefinite problems - Cholesky decomposition will fail too often) ALGORITHM LIMITATIONS: * algorithm does not support general linear constraints; only box ones are supported * Cholesky decomposition for sparse problems is performed with Skyline Cholesky solver, which is intended for low-profile matrices. No profile-reducing reordering of variables is performed in this version of ALGLIB. * problems with near-zero negative eigenvalues (or exactly zero ones) may experience about 2-3x performance penalty. The reason is that Cholesky decomposition can not be performed until we identify directions of zero and negative curvature and activate corresponding boundary constraints - but we need a lot of trial and error because these directions are hard to notice in the matrix spectrum. In this case you may turn off Newton phase of algorithm. Large negative eigenvalues are not an issue, so highly non-convex problems can be solved very efficiently. INPUT PARAMETERS: State - structure which stores algorithm state EpsG - >=0 The subroutine finishes its work if the condition |v|<EpsG is satisfied, where: * |.| means Euclidean norm * v - scaled constrained gradient vector, v[i]=g[i]*s[i] * g - gradient * s - scaling coefficients set by MinQPSetScale() EpsF - >=0 The subroutine finishes its work if exploratory steepest descent step on k+1-th iteration satisfies the following condition: |F(k+1)-F(k)|<=EpsF*max{|F(k)|,|F(k+1)|,1} EpsX - >=0 The subroutine finishes its work if exploratory steepest descent step on k+1-th iteration satisfies the condition |v|<=EpsX, where: * |.| means Euclidean norm * v - scaled step vector, v[i]=dx[i]/s[i] * dx - step vector, dx=X(k+1)-X(k) * s - scaling coefficients set by MinQPSetScale() MaxOuterIts-maximum number of OUTER iterations. One outer iteration includes some amount of CG iterations (from 5 to ~N) and one or several (usually small amount) Newton steps. 
Thus, one outer iteration has high cost, but can greatly reduce function value. Use 0 if you do not want to limit number of outer iterations. UseNewton- use Newton phase or not: * Newton phase improves performance of positive definite dense problems (about 2 times improvement can be observed) * can result in some performance penalty on semidefinite or slightly negative definite problems - each Newton phase will bring no improvement (Cholesky failure), but still will require computational time. * if in doubt, you can turn off this phase - the optimizer will retain most of its high speed. IT IS VERY IMPORTANT TO CALL MinQPSetScale() WHEN YOU USE THIS ALGORITHM BECAUSE ITS STOPPING CRITERIA ARE SCALE-DEPENDENT! Passing EpsG=0, EpsF=0 and EpsX=0 and MaxOuterIts=0 (simultaneously) will lead to automatic stopping criterion selection (presently it is small step length, but it may change in the future versions of ALGLIB). -- ALGLIB -- Copyright 22.05.2014 by Bochkanov Sergey *************************************************************************/
void minqpsetalgoquickqp(minqpstate &state, const double epsg, const double epsf, const double epsx, const ae_int_t maxouterits, const bool usenewton, const xparams _xparams = alglib::xdefault);
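The nonconvex example quoted in the comment above, "min{-x^2} subject to -1<=x<=+1", can be sketched as follows; with f(x)=0.5*x'*A*x, A=[-2] gives f(x)=-x^2, and QuickQP returns one of the two local minima, x=-1 or x=+1:

```cpp
#include "optimization.h"
using namespace alglib;

int main()
{
    minqpstate state;
    minqpcreate(1, state);
    real_2d_array a = "[[-2]]";            // f(x) = -x^2, bounded only by the box
    minqpsetquadraticterm(state, a);
    real_1d_array bndl = "[-1]", bndu = "[+1]";
    minqpsetbc(state, bndl, bndu);
    real_1d_array s = "[1]";
    minqpsetscale(state, s);               // stopping criteria are scale-dependent
    // zero epsilons + MaxOuterIts=0 => automatic stopping criteria; Newton phase on
    minqpsetalgoquickqp(state, 0.0, 0.0, 0.0, 0, true);
    minqpoptimize(state);
    real_1d_array x;
    minqpreport  rep;
    minqpresults(state, x, rep);           // one of the local minima, x = -1 or +1
    return 0;
}
```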
/************************************************************************* This function tells QP solver to use an ECQP algorithm. This algorithm is intended for sparse convex problems with only linear equality constraints. It can handle millions of variables and constraints, assuming that the problem is sufficiently sparse. However, it can NOT deal with nonlinear equality constraints or inequality constraints of any type (including box ones), nor can it deal with nonconvex problems. When applicable, it outperforms SPARSE-IPM by tens of times. It is a regularized direct linear algebra solver that performs several rounds of iterative refinement in order to improve a solution. Thus, due to its direct nature, it does not need stopping criteria and performs much faster than interior point methods. IMPORTANT: the commercial edition of ALGLIB can parallelize this function. Specific speed-up due to parallelism heavily depends on a sparsity pattern of quadratic term and constraints. See the ALGLIB Reference Manual for more information on how to activate parallelism support. IMPORTANT: internally this solver performs large and sparse (N+M)x(N+M) triangular factorization. So it expects both quadratic term and constraints to be highly sparse. However, its running time is influenced by BOTH fill factor and sparsity pattern. Generally we expect that no more than a few nonzero elements per row are present. However different sparsity patterns may result in completely different running times even given same fill factor. INPUT PARAMETERS: State - structure which stores algorithm state Eps - >=0, stopping criteria. The algorithm stops when primal and dual infeasibilities are less than Eps. IT IS VERY IMPORTANT TO CALL minqpsetscale() WHEN YOU USE THIS ALGORITHM BECAUSE ITS CONVERGENCE PROPERTIES AND STOPPING CRITERIA ARE SCALE-DEPENDENT! NOTE: Passing Eps=0 will lead to automatic selection of small epsilon. 
-- ALGLIB -- Copyright 01.07.2024 by Bochkanov Sergey *************************************************************************/
void minqpsetalgosparseecqp(minqpstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function tells QP solver to use SPARSE-GENIPM QP algorithm and sets stopping criteria for the algorithm. This algorithm is intended for convex/nonconvex box/linearly/conically/ quadratically constrained QP problems with sparse quadratic term and constraints. It can handle millions of variables and constraints, assuming that the problem is sufficiently sparse. If your problem is small (several thousand variables at most) and dense, consider using DENSE-GENIPM as a more efficient alternative. The algorithm is a generalization of the SPARSE-IPM solver, capable of handling more general constraints as well as nonconvexity of the target. In the latter case, a local solution is found. IMPORTANT: the commercial edition of ALGLIB can parallelize this function. Specific speed-up due to parallelism heavily depends on a sparsity pattern of quadratic term and constraints. See the ALGLIB Reference Manual for more information on how to activate parallelism support. IMPORTANT: internally this solver performs large and sparse (N+M)x(N+M) triangular factorization. So it expects both quadratic term and constraints to be highly sparse. However, its running time is influenced by BOTH fill factor and sparsity pattern. Generally we expect that no more than a few nonzero elements per row are present. However different sparsity patterns may result in completely different running times even given same fill factor. In many cases this algorithm outperforms DENSE-IPM by an order of magnitude. However, in some cases you may get better results with DENSE-IPM even when solving sparse task. 
ALGORITHM FEATURES: * supports box, linear equality/inequality constraints * for convex problems returns the global (and the only) solution * can handle non-convex problems (only a locally optimal solution is returned in this case) * specializes on large-scale sparse problems ALGORITHM LIMITATIONS: * this algorithm may handle moderate number of dense constraints, usually no more than a thousand dense ones without losing its efficiency. INPUT PARAMETERS: State - structure which stores algorithm state Eps - >=0, stopping criteria. The algorithm stops when primal and dual infeasibilities as well as complementarity gap are less than Eps. IT IS VERY IMPORTANT TO CALL minqpsetscale() WHEN YOU USE THIS ALGORITHM BECAUSE ITS CONVERGENCE PROPERTIES AND STOPPING CRITERIA ARE SCALE-DEPENDENT! NOTE: Passing Eps=0 will lead to automatic selection of small epsilon. ===== TRACING GENIPM SOLVER ============================================== GENIPM solver supports advanced tracing capabilities. You can log algorithm output by specifying following trace symbols (case-insensitive) by means of trace_file() call: * 'GENIPM' - for basic trace of algorithm steps and decisions. Only short scalars (function values and deltas) are printed. N-dimensional quantities like search directions are NOT printed. * 'GENIPM.DETAILED'- for output of points being visited and search directions This symbol also implicitly defines 'GENIPM'. You can control output format by additionally specifying: * nothing to output in 6-digit exponential format * 'PREC.E15' to output in 15-digit exponential format * 'PREC.F6' to output in 6-digit fixed-point format By default trace is disabled and adds no overhead to the optimization process. However, specifying any of the symbols adds some formatting and output-related overhead. 
You may specify multiple symbols by separating them with commas: > > alglib::trace_file("GENIPM,PREC.F6", "path/to/trace.log") > -- ALGLIB -- Copyright 01.05.2024 by Bochkanov Sergey *************************************************************************/
void minqpsetalgosparsegenipm(minqpstate &state, const double eps, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function tells QP solver to use SPARSE-IPM QP algorithm and sets stopping criteria for the algorithm. This algorithm is intended for convex and semidefinite QP (but not QCQP or conic) problems with large variable and constraint count and sparse quadratic term and sparse linear constraints. It was successfully used for problems with millions of variables and constraints. Quadratic and conic constraints are supported by another solver (SPARSE-GENIPM). It is possible to have some limited set of dense linear constraints - they will be handled separately by the dense BLAS - but the more dense constraints you have, the more time the solver needs. IMPORTANT: the commercial edition of ALGLIB can parallelize this function. Specific speed-up due to parallelism heavily depends on a sparsity pattern of quadratic term and constraints. See the ALGLIB Reference Manual for more information on how to activate parallelism support. IMPORTANT: internally this solver performs large and sparse (N+M)x(N+M) triangular factorization. So it expects both quadratic term and constraints to be highly sparse. However, its running time is influenced by BOTH fill factor and sparsity pattern. Generally we expect that no more than a few nonzero elements per row are present. However different sparsity patterns may result in completely different running times even given same fill factor. In many cases this algorithm outperforms DENSE-IPM by an order of magnitude. However, in some cases you may get better results with DENSE-IPM even when solving sparse task. IMPORTANT: this algorithm won't work for nonconvex problems. If you try to run SPARSE-IPM on a problem with indefinite quadratic term (a matrix having at least one negative eigenvalue) then depending on the circumstances it may either (a) stall at some arbitrary point, or (b) throw an exception due to the failure of the Cholesky decomposition. 
Use GENIPM algorithm if your problem is nonconvex or has a potential of becoming nonconvex. The GENIPM solver can also handle problems with quadratic and conic constraints. ALGORITHM FEATURES: * supports box and dense/sparse general linear equality/inequality constraints * specializes on large-scale sparse problems ALGORITHM OUTLINE: * this algorithm is our implementation of interior point method as formulated by R.J.Vanderbei, with minor modifications to the algorithm (damped Newton directions are extensively used) * like all interior point methods, this algorithm tends to converge in roughly the same number of iterations (between 15 and 50) independently from the problem dimensionality ALGORITHM LIMITATIONS: * this algorithm may handle moderate number of dense constraints, usually no more than a thousand dense ones without losing its efficiency. INPUT PARAMETERS: State - structure which stores algorithm state Eps - >=0, stopping criteria. The algorithm stops when primal and dual infeasibilities as well as complementarity gap are less than Eps. IT IS VERY IMPORTANT TO CALL minqpsetscale() WHEN YOU USE THIS ALGORITHM BECAUSE ITS CONVERGENCE PROPERTIES AND STOPPING CRITERIA ARE SCALE-DEPENDENT! NOTE: Passing Eps=0 will lead to automatic selection of small epsilon. ===== TRACING IPM SOLVER ================================================= IPM solver supports advanced tracing capabilities. You can trace algorithm output by specifying following trace symbols (case-insensitive) by means of trace_file() call: * 'IPM' - for basic trace of algorithm steps and decisions. Only short scalars (function values and deltas) are printed. N-dimensional quantities like search directions are NOT printed. * 'IPM.DETAILED'- for output of points being visited and search directions This symbol also implicitly defines 'IPM'. 
You can control output format by additionally specifying: * nothing to output in 6-digit exponential format * 'PREC.E15' to output in 15-digit exponential format * 'PREC.F6' to output in 6-digit fixed-point format By default trace is disabled and adds no overhead to the optimization process. However, specifying any of the symbols adds some formatting and output-related overhead. You may specify multiple symbols by separating them with commas: > > alglib::trace_file("IPM,PREC.F6", "path/to/trace.log") > -- ALGLIB -- Copyright 01.11.2019 by Bochkanov Sergey *************************************************************************/
void minqpsetalgosparseipm(minqpstate &state, const double eps, const xparams _xparams = alglib::xdefault);
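A SPARSE-IPM sketch with a sparse quadratic term, using the sparsematrix API and minqpsetquadratictermsparse() (both documented elsewhere in this manual); the diagonal A=2*I used here is purely illustrative:

```cpp
#include "optimization.h"
using namespace alglib;

int main()
{
    // Sparse convex problem: f(x) = sum_i (x_i^2 - x_i), i.e. A = 2*I, b_i = -1.
    const int n = 1000;
    minqpstate state;
    minqpcreate(n, state);

    sparsematrix a;
    sparsecreate(n, n, n, a);                    // n x n matrix, ~n nonzeros expected
    for(int i=0; i<n; i++)
        sparseset(a, i, i, 2.0);
    minqpsetquadratictermsparse(state, a, true); // 'true' = upper triangle is given

    real_1d_array b, s;
    b.setlength(n);
    s.setlength(n);
    for(int i=0; i<n; i++)
    {
        b[i] = -1.0;
        s[i] = 1.0;
    }
    minqpsetlinearterm(state, b);
    minqpsetscale(state, s);                     // IPM criteria are scale-dependent
    minqpsetalgosparseipm(state, 1.0e-9);
    minqpoptimize(state);
    return 0;
}
```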
/************************************************************************* This function sets box constraints for QP solver. Box constraints are inactive by default (after initial creation). After being set, they are preserved until explicitly overwritten with another minqpsetbc() or minqpsetbcall() call, or partially overwritten with minqpsetbci() call. Following types of constraints are supported: DESCRIPTION CONSTRAINT HOW TO SPECIFY fixed variable x[i]=Bnd[i] BndL[i]=BndU[i] lower bound BndL[i]<=x[i] BndU[i]=+INF upper bound x[i]<=BndU[i] BndL[i]=-INF range BndL[i]<=x[i]<=BndU[i] ... free variable - BndL[I]=-INF, BndU[I]=+INF INPUT PARAMETERS: State - structure which stores algorithm state BndL - lower bounds, array[N]. If some (all) variables are unbounded, you may specify very small number or -INF (latter is recommended because it will allow the solver to use a better algorithm). BndU - upper bounds, array[N]. If some (all) variables are unbounded, you may specify very large number or +INF (latter is recommended because it will allow the solver to use a better algorithm). NOTE: infinite values can be specified by means of Double.PositiveInfinity and Double.NegativeInfinity (in C#) and alglib::fp_posinf and alglib::fp_neginf (in C++). NOTE: you may replace infinities by very small/very large values, but it is not recommended because large numbers may introduce large numerical errors in the algorithm. NOTE: if constraints for all variables are same you may use minqpsetbcall() which allows you to specify constraints without using arrays. NOTE: BndL>BndU will result in QP problem being recognized as infeasible. -- ALGLIB -- Copyright 11.01.2011 by Bochkanov Sergey *************************************************************************/
void minqpsetbc(minqpstate &state, const real_1d_array &bndl, const real_1d_array &bndu, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This function sets box constraints for QP solver (all variables at once,
same constraints for all variables)

Box constraints are inactive by default (after initial creation). After
being set, they are preserved until explicitly overwritten with another
minqpsetbc() or minqpsetbcall() call, or partially overwritten with a
minqpsetbci() call.

The following types of constraints are supported:

    DESCRIPTION         CONSTRAINT          HOW TO SPECIFY
    fixed variable      x[i]=Bnd            BndL=BndU
    lower bound         BndL<=x[i]          BndU=+INF
    upper bound         x[i]<=BndU          BndL=-INF
    range               BndL<=x[i]<=BndU    ...
    free variable       -                   BndL=-INF, BndU=+INF

INPUT PARAMETERS:
    State   -   structure stores algorithm state
    BndL    -   lower bound, same for all variables
    BndU    -   upper bound, same for all variables

NOTE: infinite values can be specified by means of Double.PositiveInfinity
      and Double.NegativeInfinity (in C#) and alglib::fp_posinf and
      alglib::fp_neginf (in C++).

NOTE: you may replace infinities by very small/very large values, but it
      is not recommended because large numbers may introduce large
      numerical errors in the algorithm.

NOTE: BndL>BndU will result in the QP problem being recognized as
      infeasible.

  -- ALGLIB --
     Copyright 11.01.2011 by Bochkanov Sergey
*************************************************************************/
void minqpsetbcall(minqpstate &state, const double bndl, const double bndu, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets box constraints for the I-th variable (other variables
are not modified).

The following types of constraints are supported:

    DESCRIPTION         CONSTRAINT          HOW TO SPECIFY
    fixed variable      x[i]=Bnd            BndL=BndU
    lower bound         BndL<=x[i]          BndU=+INF
    upper bound         x[i]<=BndU          BndL=-INF
    range               BndL<=x[i]<=BndU    ...
    free variable       -                   BndL=-INF, BndU=+INF

INPUT PARAMETERS:
    State   -   structure stores algorithm state
    BndL    -   lower bound
    BndU    -   upper bound

NOTE: infinite values can be specified by means of Double.PositiveInfinity
      and Double.NegativeInfinity (in C#) and alglib::fp_posinf and
      alglib::fp_neginf (in C++).

NOTE: you may replace infinities by very small/very large values, but it
      is not recommended because large numbers may introduce large
      numerical errors in the algorithm.

NOTE: BndL>BndU will result in the QP problem being recognized as
      infeasible.

  -- ALGLIB --
     Copyright 11.01.2011 by Bochkanov Sergey
*************************************************************************/
void minqpsetbci(minqpstate &state, const ae_int_t i, const double bndl, const double bndu, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets dense linear constraints for QP optimizer.

This function overrides results of previous calls to minqpsetlc(),
minqpsetlcsparse() and minqpsetlcmixed(). After a call to this function
all non-box constraints are dropped, and you have only those constraints
which were specified in the present call.

If you want to specify mixed (with dense and sparse terms) linear
constraints, you should call minqpsetlcmixed().

INPUT PARAMETERS:
    State   -   structure previously allocated with MinQPCreate call.
    C       -   linear constraints, array[K,N+1]. Each row of C represents
                one constraint, either equality or inequality (see below):
                * first N elements correspond to coefficients,
                * last element corresponds to the right part.
                All elements of C (including right part) must be finite.
    CT      -   type of constraints, array[K]:
                * if CT[i]>0, then I-th constraint is C[i,*]*x >= C[i,n+1]
                * if CT[i]=0, then I-th constraint is C[i,*]*x  = C[i,n+1]
                * if CT[i]<0, then I-th constraint is C[i,*]*x <= C[i,n+1]
    K       -   number of equality/inequality constraints, K>=0:
                * if given, only leading K elements of C/CT are used
                * if not given, automatically determined from sizes of
                  C/CT

NOTE 1: linear (non-bound) constraints are satisfied only approximately -
        there always exists some violation due to numerical errors and
        algorithmic limitations.

  -- ALGLIB --
     Copyright 19.06.2012 by Bochkanov Sergey
*************************************************************************/
void minqpsetlc(minqpstate &state, const real_2d_array &c, const integer_1d_array &ct, const ae_int_t k, const xparams _xparams = alglib::xdefault); void minqpsetlc(minqpstate &state, const real_2d_array &c, const integer_1d_array &ct, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This function sets two-sided linear constraints AL <= A*x <= AU with a
sparse constraining matrix A. Recommended for large-scale problems.

This function overwrites linear (non-box) constraints set by previous
calls (if such calls were made).

INPUT PARAMETERS:
    State   -   structure previously allocated with minqpcreate() call.
    A       -   sparse matrix with size [K,N] (exactly!). Each row of A
                represents one general linear constraint. A can be stored
                in any sparse storage format.
    AL, AU  -   lower and upper bounds, array[K]:
                * AL[i]=AU[i] => equality constraint Ai*x
                * AL[i]<AU[i] => two-sided constraint AL[i]<=Ai*x<=AU[i]
                * AL[i]=-INF  => one-sided constraint Ai*x<=AU[i]
                * AU[i]=+INF  => one-sided constraint AL[i]<=Ai*x
                * AL[i]=-INF, AU[i]=+INF => constraint is ignored
    K       -   number of equality/inequality constraints, K>=0. If K=0
                is specified, A, AL, AU are ignored.

  -- ALGLIB --
     Copyright 01.11.2019 by Bochkanov Sergey
*************************************************************************/
void minqpsetlc2(minqpstate &state, const sparsematrix &a, const real_1d_array &al, const real_1d_array &au, const ae_int_t k, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets two-sided linear constraints AL <= A*x <= AU with a
dense constraint matrix A.

NOTE: knowing that the constraint matrix is dense helps some QP solvers
      (especially the modern IPM method) utilize efficient dense Level 3
      BLAS for the dense parts of the problem. If your problem has both
      dense and sparse constraints, you can use the minqpsetlc2mixed()
      function, which applies dense algebra to dense terms and sparse
      linear algebra to sparse terms.

INPUT PARAMETERS:
    State   -   structure previously allocated with minqpcreate() call.
    A       -   linear constraints, array[K,N]. Each row of A represents
                one constraint. One-sided inequality constraints, two-
                sided inequality constraints and equality constraints are
                supported (see below).
    AL, AU  -   lower and upper bounds, array[K]:
                * AL[i]=AU[i] => equality constraint Ai*x
                * AL[i]<AU[i] => two-sided constraint AL[i]<=Ai*x<=AU[i]
                * AL[i]=-INF  => one-sided constraint Ai*x<=AU[i]
                * AU[i]=+INF  => one-sided constraint AL[i]<=Ai*x
                * AL[i]=-INF, AU[i]=+INF => constraint is ignored
    K       -   number of equality/inequality constraints, K>=0; if not
                given, inferred from sizes of A, AL, AU.

  -- ALGLIB --
     Copyright 01.11.2019 by Bochkanov Sergey
*************************************************************************/
void minqpsetlc2dense(minqpstate &state, const real_2d_array &a, const real_1d_array &al, const real_1d_array &au, const ae_int_t k, const xparams _xparams = alglib::xdefault); void minqpsetlc2dense(minqpstate &state, const real_2d_array &a, const real_1d_array &al, const real_1d_array &au, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets two-sided linear constraints AL <= A*x <= AU with a
mixed constraining matrix A consisting of a sparse part (first SparseK
rows) and a dense part (last DenseK rows). Recommended for large-scale
problems.

This function overwrites linear (non-box) constraints set by previous
calls (if such calls were made).

This function may be useful if the constraint matrix includes a large
number of both types of rows - dense and sparse. If you have just a few
sparse rows, you may represent them in dense format without losing
performance. Similarly, if you have just a few dense rows, you may store
them in sparse format with almost the same performance.

INPUT PARAMETERS:
    State   -   structure previously allocated with minqpcreate() call.
    SparseA -   sparse matrix with size [K,N] (exactly!). Each row of A
                represents one general linear constraint. A can be stored
                in any sparse storage format.
    SparseK -   number of sparse constraints, SparseK>=0
    DenseA  -   linear constraints, array[K,N], set of dense constraints.
                Each row of A represents one general linear constraint.
    DenseK  -   number of dense constraints, DenseK>=0
    AL, AU  -   lower and upper bounds, array[SparseK+DenseK], with the
                first SparseK elements corresponding to sparse constraints
                and the last DenseK elements corresponding to dense
                constraints:
                * AL[i]=AU[i] => equality constraint Ai*x
                * AL[i]<AU[i] => two-sided constraint AL[i]<=Ai*x<=AU[i]
                * AL[i]=-INF  => one-sided constraint Ai*x<=AU[i]
                * AU[i]=+INF  => one-sided constraint AL[i]<=Ai*x
                * AL[i]=-INF, AU[i]=+INF => constraint is ignored
    K       -   number of equality/inequality constraints, K>=0. If K=0
                is specified, A, AL, AU are ignored.

  -- ALGLIB --
     Copyright 01.11.2019 by Bochkanov Sergey
*************************************************************************/
void minqpsetlc2mixed(minqpstate &state, const sparsematrix &sparsea, const ae_int_t ksparse, const real_2d_array &densea, const ae_int_t kdense, const real_1d_array &al, const real_1d_array &au, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets mixed linear constraints, which include a set of dense
rows and a set of sparse rows.

This function overrides results of previous calls to minqpsetlc(),
minqpsetlcsparse() and minqpsetlcmixed().

This function may be useful if the constraint matrix includes a large
number of both types of rows - dense and sparse. If you have just a few
sparse rows, you may represent them in dense format without losing
performance. Similarly, if you have just a few dense rows, you may store
them in sparse format with almost the same performance.

INPUT PARAMETERS:
    State   -   structure previously allocated with MinQPCreate call.
    SparseC -   linear constraints, sparse matrix with dimensions EXACTLY
                EQUAL TO [SparseK,N+1]. Each row of C represents one
                constraint, either equality or inequality (see below):
                * first N elements correspond to coefficients,
                * last element corresponds to the right part.
                All elements of C (including right part) must be finite.
    SparseCT-   type of sparse constraints, array[K]:
                * if SparseCT[i]>0, then I-th constraint is
                  SparseC[i,*]*x >= SparseC[i,n+1]
                * if SparseCT[i]=0, then I-th constraint is
                  SparseC[i,*]*x  = SparseC[i,n+1]
                * if SparseCT[i]<0, then I-th constraint is
                  SparseC[i,*]*x <= SparseC[i,n+1]
    SparseK -   number of sparse equality/inequality constraints, K>=0
    DenseC  -   dense linear constraints, array[K,N+1]. Each row of DenseC
                represents one constraint, either equality or inequality
                (see below):
                * first N elements correspond to coefficients,
                * last element corresponds to the right part.
                All elements of DenseC (including right part) must be
                finite.
    DenseCT -   type of constraints, array[K]:
                * if DenseCT[i]>0, then I-th constraint is
                  DenseC[i,*]*x >= DenseC[i,n+1]
                * if DenseCT[i]=0, then I-th constraint is
                  DenseC[i,*]*x  = DenseC[i,n+1]
                * if DenseCT[i]<0, then I-th constraint is
                  DenseC[i,*]*x <= DenseC[i,n+1]
    DenseK  -   number of equality/inequality constraints, DenseK>=0

NOTE 1: linear (non-box) constraints are satisfied only approximately -
        there always exists some violation due to numerical errors and
        algorithmic limitations.

NOTE 2: due to backward compatibility reasons SparseC can be larger than
        [SparseK,N+1]. In this case only the leading [SparseK,N+1]
        submatrix will be used. However, the rest of ALGLIB has stricter
        requirements on the input size, so we recommend that you pass a
        sparse term whose size exactly matches the algorithm's
        expectations.

  -- ALGLIB --
     Copyright 22.08.2016 by Bochkanov Sergey
*************************************************************************/
void minqpsetlcmixed(minqpstate &state, const sparsematrix &sparsec, const integer_1d_array &sparsect, const ae_int_t sparsek, const real_2d_array &densec, const integer_1d_array &densect, const ae_int_t densek, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function provides a legacy API for specification of mixed
dense/sparse linear constraints.

New conventions used by ALGLIB since release 3.16.0 state that the set of
sparse constraints comes first, followed by the set of dense ones. This
convention is essential when you talk about things like the order of
Lagrange multipliers.

However, the legacy API accepted mixed constraints in reverse order. This
function exists to simplify life for code relying on the legacy API. It
simply accepts constraints in the old order and passes them to the new
API, now in the correct order.

  -- ALGLIB --
     Copyright 01.11.2019 by Bochkanov Sergey
*************************************************************************/
void minqpsetlcmixedlegacy(minqpstate &state, const real_2d_array &densec, const integer_1d_array &densect, const ae_int_t densek, const sparsematrix &sparsec, const integer_1d_array &sparsect, const ae_int_t sparsek, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets sparse linear constraints for QP optimizer.

This function overrides results of previous calls to minqpsetlc(),
minqpsetlcsparse() and minqpsetlcmixed(). After a call to this function
all non-box constraints are dropped, and you have only those constraints
which were specified in the present call.

If you want to specify mixed (with dense and sparse terms) linear
constraints, you should call minqpsetlcmixed().

INPUT PARAMETERS:
    State   -   structure previously allocated with MinQPCreate call.
    C       -   linear constraints, sparse matrix with dimensions at least
                [K,N+1]. If the matrix has larger size, only the leading
                Kx(N+1) rectangle is used. Each row of C represents one
                constraint, either equality or inequality (see below):
                * first N elements correspond to coefficients,
                * last element corresponds to the right part.
                All elements of C (including right part) must be finite.
    CT      -   type of constraints, array[K]:
                * if CT[i]>0, then I-th constraint is C[i,*]*x >= C[i,n+1]
                * if CT[i]=0, then I-th constraint is C[i,*]*x  = C[i,n+1]
                * if CT[i]<0, then I-th constraint is C[i,*]*x <= C[i,n+1]
    K       -   number of equality/inequality constraints, K>=0

NOTE 1: linear (non-bound) constraints are satisfied only approximately -
        there always exists some violation due to numerical errors and
        algorithmic limitations.

  -- ALGLIB --
     Copyright 22.08.2016 by Bochkanov Sergey
*************************************************************************/
void minqpsetlcsparse(minqpstate &state, const sparsematrix &c, const integer_1d_array &ct, const ae_int_t k, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets linear term for QP solver. By default, linear term is
zero.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    B       -   linear term, array[N].

  -- ALGLIB --
     Copyright 11.01.2011 by Bochkanov Sergey
*************************************************************************/
void minqpsetlinearterm(minqpstate &state, const real_1d_array &b, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  [5]  [6]  

/*************************************************************************
This function sets origin for QP solver. By default, the following QP
program is solved:

    min(0.5*x'*A*x+b'*x)

This function allows solving a different problem:

    min(0.5*(x-x_origin)'*A*(x-x_origin)+b'*(x-x_origin))

Specification of a non-zero origin affects the function being minimized
and quadratic/conic constraints, but not box and linear constraints, which
are still calculated without the origin.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    XOrigin -   origin, array[N].

  -- ALGLIB --
     Copyright 11.01.2011 by Bochkanov Sergey
*************************************************************************/
void minqpsetorigin(minqpstate &state, const real_1d_array &xorigin, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets dense quadratic term for QP solver. By default,
quadratic term is zero.

IMPORTANT: This solver minimizes the following function:

    f(x) = 0.5*x'*A*x + b'*x.

Note that the quadratic term has 0.5 before it. So if you want to minimize

    f(x) = x^2 + x

you should rewrite your problem as follows:

    f(x) = 0.5*(2*x^2) + x

and your matrix A will be equal to [[2.0]], not to [[1.0]].

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    A       -   matrix, array[N,N]
    IsUpper -   storage type:
                * if True, symmetric matrix A is given by its upper
                  triangle, and the lower triangle isn't used
                * if False, symmetric matrix A is given by its lower
                  triangle, and the upper triangle isn't used

  -- ALGLIB --
     Copyright 11.01.2011 by Bochkanov Sergey
*************************************************************************/
void minqpsetquadraticterm(minqpstate &state, const real_2d_array &a, const bool isupper, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  [5]  

/*************************************************************************
This function sets sparse quadratic term for QP solver. By default,
quadratic term is zero. This function overrides previous calls to
minqpsetquadraticterm() or minqpsetquadratictermsparse().

NOTE: dense solvers like DENSE-AUL-QP or DENSE-IPM-QP will convert this
      matrix to dense storage anyway.

IMPORTANT: This solver minimizes the following function:

    f(x) = 0.5*x'*A*x + b'*x.

Note that the quadratic term has 0.5 before it. So if you want to minimize

    f(x) = x^2 + x

you should rewrite your problem as follows:

    f(x) = 0.5*(2*x^2) + x

and your matrix A will be equal to [[2.0]], not to [[1.0]].

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    A       -   matrix, array[N,N]
    IsUpper -   (optional) storage type:
                * if True, symmetric matrix A is given by its upper
                  triangle, and the lower triangle isn't used
                * if False, symmetric matrix A is given by its lower
                  triangle, and the upper triangle isn't used
                * if not given, both lower and upper triangles must be
                  filled.

  -- ALGLIB --
     Copyright 11.01.2011 by Bochkanov Sergey
*************************************************************************/
void minqpsetquadratictermsparse(minqpstate &state, const sparsematrix &a, const bool isupper, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This function sets scaling coefficients.

ALGLIB optimizers use scaling matrices to test stopping conditions (step
size and gradient are scaled before comparison with tolerances) and as a
preconditioner.

The scale of the I-th variable is a translation-invariant measure of:
a) "how large" the variable is
b) how large the step should be to make significant changes in the
   function

If you do not know how to choose scales of your variables, you can:
* read the www.alglib.net/optimization/scaling.php article
* use minqpsetscaleautodiag(), which calculates scale using the diagonal
  of the quadratic term: S is set to 1/sqrt(diag(A)), which works well
  sometimes.

INPUT PARAMETERS:
    State   -   structure stores algorithm state
    S       -   array[N], non-zero scaling coefficients. S[i] may be
                negative, sign doesn't matter.

  -- ALGLIB --
     Copyright 14.01.2011 by Bochkanov Sergey
*************************************************************************/
void minqpsetscale(minqpstate &state, const real_1d_array &s, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets automatic evaluation of variable scaling.

IMPORTANT: this function works only for matrices with positive diagonal
           elements! Zero or negative elements will result in the -9 error
           code being returned. Specify the scale vector manually with
           minqpsetscale() in such cases.

ALGLIB optimizers use scaling matrices to test stopping conditions (step
size and gradient are scaled before comparison with tolerances) and as a
preconditioner.

The best way to set scaling is to manually specify variable scales.
However, sometimes you just need a quick-and-dirty solution - either when
you perform fast prototyping, or when you know your problem well and are
100% sure that this quick solution is robust enough in your case.

One such solution is to evaluate the scale of the I-th variable as
1/Sqrt(A[i,i]), where A[i,i] is the I-th diagonal element of the quadratic
term. Such an approach works well sometimes, but you have to be careful
here.

INPUT PARAMETERS:
    State   -   structure stores algorithm state

  -- ALGLIB --
     Copyright 26.12.2017 by Bochkanov Sergey
*************************************************************************/
void minqpsetscaleautodiag(minqpstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets starting point for QP solver. It is useful to have a
good initial approximation to the solution, because it will increase the
speed of convergence and identification of active constraints.

NOTE: interior point solvers ignore the initial point provided by the
      user.

INPUT PARAMETERS:
    State   -   structure which stores algorithm state
    X       -   starting point, array[N].

  -- ALGLIB --
     Copyright 11.01.2011 by Bochkanov Sergey
*************************************************************************/
void minqpsetstartingpoint(minqpstate &state, const real_1d_array &x, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  [5]  [6]  

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of a quadratic subject to L1 penalty,
        // i.e. minimization of 
        //
        //     F(x0,x1) = 0.5*x'Ax + b'x + penalty
        //              = 0.5*(2*x0^2 + 2*x1^2) - 6*x0 - 4*x1 + 5*|x0| + 5*|x1|.
        //
        // The exact solution is
        //
        //     [x0,x1] = [0.5,0]
        //
        // The L1 penalty can be modeled by adding two slack variables s0 and s1,
        // constraining them from below using second order or power cones as follows:
        //
        //     sqrt(x0^2) <= s0         this constraint is modelled as a second order cone
        //     sqrt(x1^2) <= s1         this constraint is modelled as a power cone
        //
        // and by adding 5*s0+5*s1 to the linear term. This trick doubles the
        // variable count, so in real-life problems you may want to use the SPARSE-GENIPM solver
        // to deal with the reformulated problem (this solver is more efficient when multiple
        // slacks are present).
        //
        real_2d_array a = "[[2,0,0,0],[0,2,0,0],[0,0,0,0],[0,0,0,0]]";
        real_1d_array b = "[-6,-4,5,5]";
        real_1d_array s = "[1,1,1,1]";
        bool isupper = true;
        real_1d_array x;
        minqpstate state;
        minqpreport rep;

        //
        // create the solver, set quadratic/linear terms
        //
        minqpcreate(4, state);
        minqpsetquadraticterm(state, a, isupper);
        minqpsetlinearterm(state, b);

        //
        // Set the scale of the parameters.
        //
        minqpsetscale(state, s);

        //
        // Specify constraints on slacks. The second order cone constraint is a bit more efficient,
        // but it does not allow specifying penalties other than L1. The power cone constraint
        // takes a bit more time to handle, but it can be used to model |x|^alpha penalty for
        // any alpha>=1.
        //
        // Functions below add so-called primitive conic constraints, ones that constrain a square
        // root of a squared sum of variables in [idx0,idx1-1] range by a variable with index axisIdx.
        // Below the second order cone constraint has idx0=0, idx1=1, axisIdx=2; the power cone
        // constraint has idx0=1, idx1=2, axisIdx=3.
        //
        // More general formulation can be specified with minqpaddsoccorthogonal() or minqpaddpowccorthogonal().
        //
        double alpha = 1.0;
        minqpaddsoccprimitive(state, 0, 1, 2, false);
        minqpaddpowccprimitive(state, 1, 2, 3, alpha, false);

        //
        // Solve problem with the sparse interior-point method (SPARSE-GENIPM) solver.
        //
        // This solver is intended for large-scale sparse problems with box, linear,
        // quadratic and conic constraints, but it will work on such a toy problem too.
        //
        // Commercial ALGLIB can parallelize sparse Cholesky factorization which is the
        // most time-consuming part of the algorithm. See the ALGLIB Reference Manual for
        // more information on how to activate parallelism support.
        //
        // Default stopping criteria are used.
        //
        minqpsetalgosparsegenipm(state, 0.0);
        minqpoptimize(state);
        minqpresults(state, x, rep);
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [0.5,0,0.5,0]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of F(x0,x1) = x0^2 + x1^2 -6*x0 - 4*x1
        // subject to Box constraints 0<=x0<=2.5, 0<=x1<=2.5
        //
        // Exact solution is [x0,x1] = [2.5,2]
        //
        // IMPORTANT: this solver minimizes the following function:
        //
        //     f(x) = 0.5*x'*A*x + b'*x.
        //
        // Note that quadratic term has 0.5 before it. So if you want to minimize
        // quadratic function, you should rewrite it in such way that quadratic term
        // is multiplied by 0.5 too.
        //
        // For example, our function is f(x)=x0^2+x1^2+..., but we rewrite it as 
        //
        //     f(x) = 0.5*(2*x0^2+2*x1^2) + ....
        //
        // and pass diag(2,2) as quadratic term - NOT diag(1,1)!
        //
        real_2d_array a = "[[2,0],[0,2]]";
        real_1d_array b = "[-6,-4]";
        real_1d_array s = "[1,1]";
        real_1d_array bndl = "[0.0,0.0]";
        real_1d_array bndu = "[2.5,2.5]";
        bool isupper = true;
        real_1d_array x;
        minqpstate state;
        minqpreport rep;

        // create solver, set quadratic/linear terms
        minqpcreate(2, state);
        minqpsetquadraticterm(state, a, isupper);
        minqpsetlinearterm(state, b);
        minqpsetbc(state, bndl, bndu);

        // Set scale of the parameters.
        // It is strongly recommended that you set scale of your variables.
        // Knowing their scales is essential for evaluation of stopping criteria
        // and for preconditioning of the algorithm steps.
        // You can find more information on scaling at http://www.alglib.net/optimization/scaling.php
        //
        // NOTE: for convex problems you may try using minqpsetscaleautodiag()
        //       which automatically determines variable scales.
        minqpsetscale(state, s);

        //
        // Solve problem with the sparse interior-point method (sparse IPM) solver.
        //
        // This solver is intended for large-scale sparse problems with box and linear
        // constraints, but it will work on such a toy problem too.
        //
        // Commercial ALGLIB can parallelize sparse Cholesky factorization which is the
        // most time-consuming part of the algorithm. See the ALGLIB Reference Manual for
        // more information on how to activate parallelism support.
        //
        // Default stopping criteria are used.
        //
        minqpsetalgosparseipm(state, 0.0);
        minqpoptimize(state);
        minqpresults(state, x, rep);
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [2.5,2]

        //
        // Solve problem with dense IPM solver.
        //
        // This solver is optimized for problems with dense linear constraints and/or
        // dense quadratic term.
        //
        // Default stopping criteria are used.
        //
        minqpsetalgodenseipm(state, 0.0);
        minqpoptimize(state);
        minqpresults(state, x, rep);
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [2.5,2]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of F(x0,x1) = x0^2 + x1^2 -6*x0 - 4*x1
        // subject to linear constraint x0+x1<=2
        //
        // Exact solution is [x0,x1] = [1.5,0.5]
        //
        // IMPORTANT: this solver minimizes the following function:
        //
        //     f(x) = 0.5*x'*A*x + b'*x.
        //
        // Note that quadratic term has 0.5 before it. So if you want to minimize
        // quadratic function, you should rewrite it in such way that quadratic term
        // is multiplied by 0.5 too.
        //
        // For example, our function is f(x)=x0^2+x1^2+..., but we rewrite it as 
        //
        //     f(x) = 0.5*(2*x0^2+2*x1^2) + ....
        //
        // and pass diag(2,2) as quadratic term - NOT diag(1,1)!
        //
        real_2d_array a = "[[2,0],[0,2]]";
        real_1d_array b = "[-6,-4]";
        real_1d_array s = "[1,1]";
        real_2d_array c = "[[1.0,1.0,2.0]]";
        integer_1d_array ct = "[-1]";
        bool isupper = true;
        real_1d_array x;
        minqpstate state;
        minqpreport rep;

        // create solver, set quadratic/linear terms
        minqpcreate(2, state);
        minqpsetquadraticterm(state, a, isupper);
        minqpsetlinearterm(state, b);
        minqpsetlc(state, c, ct);

        // Set scale of the parameters.
        // It is strongly recommended that you set scale of your variables.
        // Knowing their scales is essential for evaluation of stopping criteria
        // and for preconditioning of the algorithm steps.
        // You can find more information on scaling at http://www.alglib.net/optimization/scaling.php
        //
        // NOTE: for convex problems you may try using minqpsetscaleautodiag()
        //       which automatically determines variable scales.
        minqpsetscale(state, s);

        //
        // Solve problem with the sparse interior-point method (sparse IPM) solver.
        //
        // This solver is intended for large-scale sparse problems with box and linear
        // constraints, but it will work on such a toy problem too.
        //
        // Commercial ALGLIB can parallelize sparse Cholesky factorization which is the
        // most time-consuming part of the algorithm. See the ALGLIB Reference Manual for
        // more information on how to activate parallelism support.
        //
        // Default stopping criteria are used.
        //
        minqpsetalgosparseipm(state, 0.0);
        minqpoptimize(state);
        minqpresults(state, x, rep);
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [1.5,0.5]

        //
        // Solve problem with dense IPM solver.
        //
        // This solver is optimized for problems with dense linear constraints and/or
        // dense quadratic term.
        //
        // Default stopping criteria are used.
        //
        minqpsetalgodenseipm(state, 0.0);
        minqpoptimize(state);
        minqpresults(state, x, rep);
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [1.5,0.5]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of nonconvex function
        //     F(x0,x1) = -(x0^2+x1^2)
        // subject to constraints x0,x1 in [1.0,2.0]
        // Exact solution is [x0,x1] = [2,2].
        //
        // Non-convex problems are harder to solve than convex ones, and they
        // may have more than one local minimum. However, ALGLIB solvers can deal
        // with such problems (although they do not guarantee convergence to the
        // global minimum).
        //
        // IMPORTANT: this solver minimizes the following function:
        //     f(x) = 0.5*x'*A*x + b'*x.
        // Note that the quadratic term is multiplied by 0.5. So if you want to
        // minimize a quadratic function, rewrite it so that its quadratic term
        // is multiplied by 0.5 as well.
        //
        // For example, our function is f(x)=-(x0^2+x1^2), but we rewrite it as
        //     f(x) = 0.5*(-2*x0^2-2*x1^2)
        // and pass diag(-2,-2) as the quadratic term - NOT diag(-1,-1)!
        //
        real_2d_array a = "[[-2,0],[0,-2]]";
        real_1d_array x0 = "[1,1]";
        real_1d_array s = "[1,1]";
        real_1d_array bndl = "[1.0,1.0]";
        real_1d_array bndu = "[2.0,2.0]";
        bool isupper = true;
        real_1d_array x;
        minqpstate state;
        minqpreport rep;

        // create solver, set quadratic/linear terms, constraints
        minqpcreate(2, state);
        minqpsetquadraticterm(state, a, isupper);
        minqpsetstartingpoint(state, x0);
        minqpsetbc(state, bndl, bndu);

        // Set the scale of the parameters.
        // It is strongly recommended that you set the scale of your variables.
        // Knowing their scales is essential for evaluation of stopping criteria
        // and for preconditioning of the algorithm steps.
        // You can find more information on scaling at http://www.alglib.net/optimization/scaling.php
        //
        // NOTE: there also exists minqpsetscaleautodiag() function
        //       which automatically determines variable scales; however,
        //       it does NOT work for non-convex problems.
        minqpsetscale(state, s);

        //
        // Solve problem with DENSE-GENIPM solver.
        //
        // This solver is optimized for nonconvex problems with up to several thousand
        // variables and a large number of general linear constraints.
        //
        // Default stopping criteria are used.
        //
        minqpsetalgodensegenipm(state, 1.0e-9);
        minqpoptimize(state);
        minqpresults(state, x, rep);
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [2,2]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of F(x0,x1) = x0^2 + x1^2 -6*x0 - 4*x1
        //
        // Exact solution is [x0,x1] = [3,2]
        //
        // IMPORTANT: this solver minimizes the following function:
        //
        //     f(x) = 0.5*x'*A*x + b'*x.
        //
        // Note that the quadratic term is multiplied by 0.5. So if you want to
        // minimize a quadratic function, rewrite it so that its quadratic term
        // is multiplied by 0.5 as well.
        //
        // For example, our function is f(x)=x0^2+x1^2+..., but we rewrite it as
        //
        //     f(x) = 0.5*(2*x0^2+2*x1^2) + ....
        //
        // and pass diag(2,2) as the quadratic term - NOT diag(1,1)!
        //
        real_2d_array a = "[[2,0],[0,2]]";
        real_1d_array b = "[-6,-4]";
        real_1d_array s = "[1,1]";
        bool isupper = true;
        real_1d_array x;
        minqpstate state;
        minqpreport rep;

        // create the solver, set quadratic/linear terms
        minqpcreate(2, state);
        minqpsetquadraticterm(state, a, isupper);
        minqpsetlinearterm(state, b);

        // Set the scale of the parameters.
        // It is strongly recommended that you set the scale of your variables.
        // Knowing their scales is essential for evaluation of stopping criteria
        // and for preconditioning of the algorithm steps.
        // You can find more information on scaling at http://www.alglib.net/optimization/scaling.php
        //
        // NOTE: for convex problems you may try using minqpsetscaleautodiag()
        //       which automatically determines variable scales.
        minqpsetscale(state, s);

        //
        // Solve problem with the sparse interior-point method (sparse IPM) solver.
        //
        // This solver is intended for large-scale sparse problems with box and linear
        // constraints, but it will work on such a toy problem too.
        //
        // Commercial ALGLIB can parallelize sparse Cholesky factorization which is the
        // most time-consuming part of the algorithm. See the ALGLIB Reference Manual for
        // more information on how to activate parallelism support.
        //
        // Default stopping criteria are used.
        //
        minqpsetalgosparseipm(state, 0.0);
        minqpoptimize(state);
        minqpresults(state, x, rep);
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [3,2]

        //
        // Solve problem with dense IPM solver.
        //
        // This solver is optimized for problems with dense linear constraints and/or
        // dense quadratic term.
        //
        // Default stopping criteria are used.
        //
        minqpsetalgodenseipm(state, 0.0);
        minqpoptimize(state);
        minqpresults(state, x, rep);
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [3,2]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of F(x0,x1) = x0^2 + x1^2 -6*x0 - 4*x1,
        // with the quadratic term given by a sparse matrix structure.
        //
        // Exact solution is [x0,x1] = [3,2]
        //
        // We provide the algorithm with a starting point, although in this case
        // (convex problem, no constraints) it can work without such information.
        //
        // IMPORTANT: this solver minimizes the following function:
        //     f(x) = 0.5*x'*A*x + b'*x.
        // Note that the quadratic term is multiplied by 0.5. So if you want to
        // minimize a quadratic function, rewrite it so that its quadratic term
        // is multiplied by 0.5 as well.
        //
        // For example, our function is f(x)=x0^2+x1^2+..., but we rewrite it as
        //     f(x) = 0.5*(2*x0^2+2*x1^2) + ....
        // and pass diag(2,2) as the quadratic term - NOT diag(1,1)!
        //
        sparsematrix a;
        real_1d_array b = "[-6,-4]";
        real_1d_array x0 = "[0,1]";
        real_1d_array s = "[1,1]";
        real_1d_array x;
        minqpstate state;
        minqpreport rep;

        // initialize sparsematrix structure
        sparsecreate(2, 2, 0, a);
        sparseset(a, 0, 0, 2.0);
        sparseset(a, 1, 1, 2.0);

        // create solver, set quadratic/linear terms
        minqpcreate(2, state);
        minqpsetquadratictermsparse(state, a, true);
        minqpsetlinearterm(state, b);
        minqpsetstartingpoint(state, x0);

        // Set the scale of the parameters.
        // It is strongly recommended that you set the scale of your variables.
        // Knowing their scales is essential for evaluation of stopping criteria
        // and for preconditioning of the algorithm steps.
        // You can find more information on scaling at http://www.alglib.net/optimization/scaling.php
        //
        // NOTE: for convex problems you may try using minqpsetscaleautodiag()
        //       which automatically determines variable scales.
        minqpsetscale(state, s);

        //
        // Solve problem with the sparse interior-point method (sparse IPM) solver.
        //
        // This solver is intended for large-scale sparse problems with box and linear
        // constraints, but it will work on such a toy problem too.
        //
        // Commercial ALGLIB can parallelize sparse Cholesky factorization which is the
        // most time-consuming part of the algorithm. See the ALGLIB Reference Manual for
        // more information on how to activate parallelism support.
        //
        // Default stopping criteria are used.
        //
        minqpsetalgosparseipm(state, 0.0);
        minqpoptimize(state);
        minqpresults(state, x, rep);
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [3,2]

        //
        // Solve problem with dense IPM solver.
        //
        // This solver is optimized for problems with dense linear constraints and/or
        // dense quadratic term.
        //
        // Default stopping criteria are used.
        //
        minqpsetalgodenseipm(state, 0.0);
        minqpoptimize(state);
        minqpresults(state, x, rep);
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [3,2]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

modelerrors
multilayerperceptron
mlpactivationfunction
mlpallerrorssparsesubset
mlpallerrorssubset
mlpavgce
mlpavgcesparse
mlpavgerror
mlpavgerrorsparse
mlpavgrelerror
mlpavgrelerrorsparse
mlpclserror
mlpcopy
mlpcopytunableparameters
mlpcreate0
mlpcreate1
mlpcreate2
mlpcreateb0
mlpcreateb1
mlpcreateb2
mlpcreatec0
mlpcreatec1
mlpcreatec2
mlpcreater0
mlpcreater1
mlpcreater2
mlperror
mlperrorn
mlperrorsparse
mlperrorsparsesubset
mlperrorsubset
mlpgetinputscaling
mlpgetinputscount
mlpgetlayerscount
mlpgetlayersize
mlpgetneuroninfo
mlpgetoutputscaling
mlpgetoutputscount
mlpgetweight
mlpgetweightscount
mlpgrad
mlpgradbatch
mlpgradbatchsparse
mlpgradbatchsparsesubset
mlpgradbatchsubset
mlpgradn
mlpgradnbatch
mlphessianbatch
mlphessiannbatch
mlpinitpreprocessor
mlpissoftmax
mlpprocess
mlpprocessi
mlpproperties
mlprandomize
mlprandomizefull
mlprelclserror
mlprelclserrorsparse
mlprmserror
mlprmserrorsparse
mlpserialize
mlpsetinputscaling
mlpsetneuroninfo
mlpsetoutputscaling
mlpsetweight
mlpunserialize
/************************************************************************* Model's errors: * RelCLSError - fraction of misclassified cases. * AvgCE - average cross-entropy * RMSError - root-mean-square error * AvgError - average error * AvgRelError - average relative error NOTE 1: RelCLSError/AvgCE are zero on regression problems. NOTE 2: on classification problems RMSError/AvgError/AvgRelError contain errors in prediction of posterior probabilities *************************************************************************/
class modelerrors { public: modelerrors(); modelerrors(const modelerrors &rhs); modelerrors& operator=(const modelerrors &rhs); virtual ~modelerrors(); double relclserror; double avgce; double rmserror; double avgerror; double avgrelerror; };
/************************************************************************* *************************************************************************/
class multilayerperceptron { public: multilayerperceptron(); multilayerperceptron(const multilayerperceptron &rhs); multilayerperceptron& operator=(const multilayerperceptron &rhs); virtual ~multilayerperceptron(); };
/************************************************************************* Neural network activation function INPUT PARAMETERS: NET - neuron input K - function index (zero for linear function) OUTPUT PARAMETERS: F - function DF - its derivative D2F - its second derivative -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
void mlpactivationfunction(const double net, const ae_int_t k, double &f, double &df, double &d2f, const xparams _xparams = alglib::xdefault);
/************************************************************************* Calculation of all types of errors on subset of dataset. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes the following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend that you read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: Network - network initialized with one of the network creation funcs XY - original dataset given by sparse matrix; one sample = one row; first NIn columns contain inputs, next NOut columns - desired outputs. SetSize - real size of XY, SetSize>=0; Subset - subset of SubsetSize elements, array[SubsetSize]; SubsetSize- number of elements in Subset[] array: * if SubsetSize>0, rows of XY with indices Subset[0]... ...Subset[SubsetSize-1] are processed * if SubsetSize=0, zeros are returned * if SubsetSize<0, entire dataset is processed; Subset[] array is ignored in this case. OUTPUT PARAMETERS: Rep - it contains all types of errors. -- ALGLIB -- Copyright 04.09.2012 by Bochkanov Sergey *************************************************************************/
void mlpallerrorssparsesubset(multilayerperceptron &network, const sparsematrix &xy, const ae_int_t setsize, const integer_1d_array &subset, const ae_int_t subsetsize, modelerrors &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Calculation of all types of errors on subset of dataset. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes the following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend that you read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: Network - network initialized with one of the network creation funcs XY - original dataset; one sample = one row; first NIn columns contain inputs, next NOut columns - desired outputs. SetSize - real size of XY, SetSize>=0; Subset - subset of SubsetSize elements, array[SubsetSize]; SubsetSize- number of elements in Subset[] array: * if SubsetSize>0, rows of XY with indices Subset[0]... ...Subset[SubsetSize-1] are processed * if SubsetSize=0, zeros are returned * if SubsetSize<0, entire dataset is processed; Subset[] array is ignored in this case. OUTPUT PARAMETERS: Rep - it contains all types of errors. -- ALGLIB -- Copyright 04.09.2012 by Bochkanov Sergey *************************************************************************/
void mlpallerrorssubset(multilayerperceptron &network, const real_2d_array &xy, const ae_int_t setsize, const integer_1d_array &subset, const ae_int_t subsetsize, modelerrors &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Average cross-entropy (in bits per element) on the test set. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes the following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend that you read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: Network - neural network; XY - training set, see below for information on the training set format; NPoints - points count. RESULT: CrossEntropy/(NPoints*LN(2)). Zero if network solves regression task. DATASET FORMAT: This function uses two different dataset formats - one for regression networks, another one for classification networks. For regression networks with NIn inputs and NOut outputs following dataset format is used: * dataset is given by NPoints*(NIn+NOut) matrix * each row corresponds to one example * first NIn columns are inputs, next NOut columns are outputs For classification networks with NIn inputs and NClasses classes following dataset format is used: * dataset is given by NPoints*(NIn+1) matrix * each row corresponds to one example * first NIn columns are inputs, last column stores class number (from 0 to NClasses-1). -- ALGLIB -- Copyright 08.01.2009 by Bochkanov Sergey *************************************************************************/
double mlpavgce(multilayerperceptron &network, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* Average cross-entropy (in bits per element) on the test set given by sparse matrix. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes the following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend that you read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: Network - neural network; XY - training set, see below for information on the training set format. This function checks correctness of the dataset (no NANs/INFs, class numbers are correct) and throws exception when incorrect dataset is passed. Sparse matrix must use CRS format for storage. NPoints - points count, >=0. RESULT: CrossEntropy/(NPoints*LN(2)). Zero if network solves regression task. DATASET FORMAT: This function uses two different dataset formats - one for regression networks, another one for classification networks. For regression networks with NIn inputs and NOut outputs following dataset format is used: * dataset is given by NPoints*(NIn+NOut) matrix * each row corresponds to one example * first NIn columns are inputs, next NOut columns are outputs For classification networks with NIn inputs and NClasses classes following dataset format is used: * dataset is given by NPoints*(NIn+1) matrix * each row corresponds to one example * first NIn columns are inputs, last column stores class number (from 0 to NClasses-1). -- ALGLIB -- Copyright 9.08.2012 by Bochkanov Sergey *************************************************************************/
double mlpavgcesparse(multilayerperceptron &network, const sparsematrix &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* Average absolute error on the test set. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes the following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend that you read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: Network - neural network; XY - training set, see below for information on the training set format; NPoints - points count. RESULT: Its meaning for regression task is obvious. As for classification task, it means average error when estimating posterior probabilities. DATASET FORMAT: This function uses two different dataset formats - one for regression networks, another one for classification networks. For regression networks with NIn inputs and NOut outputs following dataset format is used: * dataset is given by NPoints*(NIn+NOut) matrix * each row corresponds to one example * first NIn columns are inputs, next NOut columns are outputs For classification networks with NIn inputs and NClasses classes following dataset format is used: * dataset is given by NPoints*(NIn+1) matrix * each row corresponds to one example * first NIn columns are inputs, last column stores class number (from 0 to NClasses-1). -- ALGLIB -- Copyright 11.03.2008 by Bochkanov Sergey *************************************************************************/
double mlpavgerror(multilayerperceptron &network, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* Average absolute error on the test set given by sparse matrix. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes the following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend that you read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: Network - neural network; XY - training set, see below for information on the training set format. This function checks correctness of the dataset (no NANs/INFs, class numbers are correct) and throws exception when incorrect dataset is passed. Sparse matrix must use CRS format for storage. NPoints - points count, >=0. RESULT: Its meaning for regression task is obvious. As for classification task, it means average error when estimating posterior probabilities. DATASET FORMAT: This function uses two different dataset formats - one for regression networks, another one for classification networks. For regression networks with NIn inputs and NOut outputs following dataset format is used: * dataset is given by NPoints*(NIn+NOut) matrix * each row corresponds to one example * first NIn columns are inputs, next NOut columns are outputs For classification networks with NIn inputs and NClasses classes following dataset format is used: * dataset is given by NPoints*(NIn+1) matrix * each row corresponds to one example * first NIn columns are inputs, last column stores class number (from 0 to NClasses-1). -- ALGLIB -- Copyright 09.08.2012 by Bochkanov Sergey *************************************************************************/
double mlpavgerrorsparse(multilayerperceptron &network, const sparsematrix &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* Average relative error on the test set. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes the following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend that you read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: Network - neural network; XY - training set, see below for information on the training set format; NPoints - points count. RESULT: Its meaning for regression task is obvious. As for classification task, it means average relative error when estimating posterior probability of belonging to the correct class. DATASET FORMAT: This function uses two different dataset formats - one for regression networks, another one for classification networks. For regression networks with NIn inputs and NOut outputs following dataset format is used: * dataset is given by NPoints*(NIn+NOut) matrix * each row corresponds to one example * first NIn columns are inputs, next NOut columns are outputs For classification networks with NIn inputs and NClasses classes following dataset format is used: * dataset is given by NPoints*(NIn+1) matrix * each row corresponds to one example * first NIn columns are inputs, last column stores class number (from 0 to NClasses-1). -- ALGLIB -- Copyright 11.03.2008 by Bochkanov Sergey *************************************************************************/
double mlpavgrelerror(multilayerperceptron &network, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* Average relative error on the test set given by sparse matrix. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes the following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend that you read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: Network - neural network; XY - training set, see below for information on the training set format. This function checks correctness of the dataset (no NANs/INFs, class numbers are correct) and throws exception when incorrect dataset is passed. Sparse matrix must use CRS format for storage. NPoints - points count, >=0. RESULT: Its meaning for regression task is obvious. As for classification task, it means average relative error when estimating posterior probability of belonging to the correct class. DATASET FORMAT: This function uses two different dataset formats - one for regression networks, another one for classification networks. For regression networks with NIn inputs and NOut outputs following dataset format is used: * dataset is given by NPoints*(NIn+NOut) matrix * each row corresponds to one example * first NIn columns are inputs, next NOut columns are outputs For classification networks with NIn inputs and NClasses classes following dataset format is used: * dataset is given by NPoints*(NIn+1) matrix * each row corresponds to one example * first NIn columns are inputs, last column stores class number (from 0 to NClasses-1). -- ALGLIB -- Copyright 09.08.2012 by Bochkanov Sergey *************************************************************************/
double mlpavgrelerrorsparse(multilayerperceptron &network, const sparsematrix &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* Classification error of the neural network on dataset. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes the following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend that you read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: Network - neural network; XY - training set, see below for information on the training set format; NPoints - points count. RESULT: classification error (number of misclassified cases) DATASET FORMAT: This function uses two different dataset formats - one for regression networks, another one for classification networks. For regression networks with NIn inputs and NOut outputs following dataset format is used: * dataset is given by NPoints*(NIn+NOut) matrix * each row corresponds to one example * first NIn columns are inputs, next NOut columns are outputs For classification networks with NIn inputs and NClasses classes following dataset format is used: * dataset is given by NPoints*(NIn+1) matrix * each row corresponds to one example * first NIn columns are inputs, last column stores class number (from 0 to NClasses-1). -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
ae_int_t mlpclserror(multilayerperceptron &network, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* Copying of neural network INPUT PARAMETERS: Network1 - original OUTPUT PARAMETERS: Network2 - copy -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
void mlpcopy(const multilayerperceptron &network1, multilayerperceptron &network2, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function copies tunable parameters (weights/means/sigmas) from one network to another with the same architecture. It performs some rudimentary checks that the architectures are the same, and throws an exception if the check fails. It is intended for fast copying of states between two networks which are known to have the same geometry. INPUT PARAMETERS: Network1 - source, must be correctly initialized Network2 - target, must have same architecture OUTPUT PARAMETERS: Network2 - network state is copied from source to target -- ALGLIB -- Copyright 20.06.2013 by Bochkanov Sergey *************************************************************************/
void mlpcopytunableparameters(const multilayerperceptron &network1, multilayerperceptron &network2, const xparams _xparams = alglib::xdefault);
/************************************************************************* Creates neural network with NIn inputs, NOut outputs, without hidden layers, with linear output layer. Network weights are filled with small random values. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
void mlpcreate0(const ae_int_t nin, const ae_int_t nout, multilayerperceptron &network, const xparams _xparams = alglib::xdefault);
/************************************************************************* Same as MLPCreate0, but with one hidden layer (NHid neurons) with non-linear activation function. Output layer is linear. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
void mlpcreate1(const ae_int_t nin, const ae_int_t nhid, const ae_int_t nout, multilayerperceptron &network, const xparams _xparams = alglib::xdefault);
/************************************************************************* Same as MLPCreate0, but with two hidden layers (NHid1 and NHid2 neurons) with non-linear activation function. Output layer is linear. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
void mlpcreate2(const ae_int_t nin, const ae_int_t nhid1, const ae_int_t nhid2, const ae_int_t nout, multilayerperceptron &network, const xparams _xparams = alglib::xdefault);
/************************************************************************* Creates neural network with NIn inputs, NOut outputs, without hidden layers with non-linear output layer. Network weights are filled with small random values. Activation function of the output layer takes values: (B, +INF), if D>=0 or (-INF, B), if D<0. -- ALGLIB -- Copyright 30.03.2008 by Bochkanov Sergey *************************************************************************/
void mlpcreateb0(const ae_int_t nin, const ae_int_t nout, const double b, const double d, multilayerperceptron &network, const xparams _xparams = alglib::xdefault);
/************************************************************************* Same as MLPCreateB0 but with non-linear hidden layer. -- ALGLIB -- Copyright 30.03.2008 by Bochkanov Sergey *************************************************************************/
void mlpcreateb1(const ae_int_t nin, const ae_int_t nhid, const ae_int_t nout, const double b, const double d, multilayerperceptron &network, const xparams _xparams = alglib::xdefault);
/************************************************************************* Same as MLPCreateB0 but with two non-linear hidden layers. -- ALGLIB -- Copyright 30.03.2008 by Bochkanov Sergey *************************************************************************/
void mlpcreateb2(const ae_int_t nin, const ae_int_t nhid1, const ae_int_t nhid2, const ae_int_t nout, const double b, const double d, multilayerperceptron &network, const xparams _xparams = alglib::xdefault);
/************************************************************************* Creates classifier network with NIn inputs and NOut possible classes. Network contains no hidden layers and linear output layer with SOFTMAX- normalization (so outputs sum up to 1.0 and converge to posterior probabilities). -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
void mlpcreatec0(const ae_int_t nin, const ae_int_t nout, multilayerperceptron &network, const xparams _xparams = alglib::xdefault);
/************************************************************************* Same as MLPCreateC0, but with one non-linear hidden layer. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
void mlpcreatec1(const ae_int_t nin, const ae_int_t nhid, const ae_int_t nout, multilayerperceptron &network, const xparams _xparams = alglib::xdefault);
/************************************************************************* Same as MLPCreateC0, but with two non-linear hidden layers. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
void mlpcreatec2(const ae_int_t nin, const ae_int_t nhid1, const ae_int_t nhid2, const ae_int_t nout, multilayerperceptron &network, const xparams _xparams = alglib::xdefault);
/************************************************************************* Creates neural network with NIn inputs, NOut outputs, without hidden layers with non-linear output layer. Network weights are filled with small random values. Activation function of the output layer takes values [A,B]. -- ALGLIB -- Copyright 30.03.2008 by Bochkanov Sergey *************************************************************************/
void mlpcreater0(const ae_int_t nin, const ae_int_t nout, const double a, const double b, multilayerperceptron &network, const xparams _xparams = alglib::xdefault);
/************************************************************************* Same as MLPCreateR0, but with non-linear hidden layer. -- ALGLIB -- Copyright 30.03.2008 by Bochkanov Sergey *************************************************************************/
void mlpcreater1(const ae_int_t nin, const ae_int_t nhid, const ae_int_t nout, const double a, const double b, multilayerperceptron &network, const xparams _xparams = alglib::xdefault);
/************************************************************************* Same as MLPCreateR0, but with two non-linear hidden layers. -- ALGLIB -- Copyright 30.03.2008 by Bochkanov Sergey *************************************************************************/
void mlpcreater2(const ae_int_t nin, const ae_int_t nhid1, const ae_int_t nhid2, const ae_int_t nout, const double a, const double b, multilayerperceptron &network, const xparams _xparams = alglib::xdefault);
/************************************************************************* Error of the neural network on dataset. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: Network - neural network; XY - training set, see below for information on the training set format; NPoints - points count. RESULT: sum-of-squares error, SUM(sqr(y[i]-desired_y[i])/2) DATASET FORMAT: This function uses two different dataset formats - one for regression networks, another one for classification networks. For regression networks with NIn inputs and NOut outputs following dataset format is used: * dataset is given by NPoints*(NIn+NOut) matrix * each row corresponds to one example * first NIn columns are inputs, next NOut columns are outputs For classification networks with NIn inputs and NClasses classes following dataset format is used: * dataset is given by NPoints*(NIn+1) matrix * each row corresponds to one example * first NIn columns are inputs, last column stores class number (from 0 to NClasses-1). -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
double mlperror(multilayerperceptron &network, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* Natural error function for neural network, internal subroutine. NOTE: this function is single-threaded. Unlike other error functions, it receives no speed-up from being executed in SMP mode. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
double mlperrorn(multilayerperceptron &network, const real_2d_array &xy, const ae_int_t ssize, const xparams _xparams = alglib::xdefault);
/************************************************************************* Error of the neural network on dataset given by sparse matrix. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: Network - neural network XY - training set, see below for information on the training set format. This function checks correctness of the dataset (no NANs/INFs, class numbers are correct) and throws exception when incorrect dataset is passed. Sparse matrix must use CRS format for storage. NPoints - points count, >=0 RESULT: sum-of-squares error, SUM(sqr(y[i]-desired_y[i])/2) DATASET FORMAT: This function uses two different dataset formats - one for regression networks, another one for classification networks. For regression networks with NIn inputs and NOut outputs following dataset format is used: * dataset is given by NPoints*(NIn+NOut) matrix * each row corresponds to one example * first NIn columns are inputs, next NOut columns are outputs For classification networks with NIn inputs and NClasses classes following dataset format is used: * dataset is given by NPoints*(NIn+1) matrix * each row corresponds to one example * first NIn columns are inputs, last column stores class number (from 0 to NClasses-1). -- ALGLIB -- Copyright 23.07.2012 by Bochkanov Sergey *************************************************************************/
double mlperrorsparse(multilayerperceptron &network, const sparsematrix &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* Error of the neural network on subset of sparse dataset. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: Network - neural network; XY - training set, see below for information on the training set format. This function checks correctness of the dataset (no NANs/INFs, class numbers are correct) and throws exception when incorrect dataset is passed. Sparse matrix must use CRS format for storage. SetSize - real size of XY, SetSize>=0; it is used when SubsetSize<0; Subset - subset of SubsetSize elements, array[SubsetSize]; SubsetSize- number of elements in Subset[] array: * if SubsetSize>0, rows of XY with indices Subset[0]... ...Subset[SubsetSize-1] are processed * if SubsetSize=0, zeros are returned * if SubsetSize<0, entire dataset is processed; Subset[] array is ignored in this case. RESULT: sum-of-squares error, SUM(sqr(y[i]-desired_y[i])/2) DATASET FORMAT: This function uses two different dataset formats - one for regression networks, another one for classification networks. For regression networks with NIn inputs and NOut outputs following dataset format is used: * dataset is given by NPoints*(NIn+NOut) matrix * each row corresponds to one example * first NIn columns are inputs, next NOut columns are outputs For classification networks with NIn inputs and NClasses classes following dataset format is used: * dataset is given by NPoints*(NIn+1) matrix * each row corresponds to one example * first NIn columns are inputs, last column stores class number (from 0 to NClasses-1).
-- ALGLIB -- Copyright 04.09.2012 by Bochkanov Sergey *************************************************************************/
double mlperrorsparsesubset(multilayerperceptron &network, const sparsematrix &xy, const ae_int_t setsize, const integer_1d_array &subset, const ae_int_t subsetsize, const xparams _xparams = alglib::xdefault);
/************************************************************************* Error of the neural network on subset of dataset. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: Network - neural network; XY - training set, see below for information on the training set format; SetSize - real size of XY, SetSize>=0; Subset - subset of SubsetSize elements, array[SubsetSize]; SubsetSize- number of elements in Subset[] array: * if SubsetSize>0, rows of XY with indices Subset[0]... ...Subset[SubsetSize-1] are processed * if SubsetSize=0, zeros are returned * if SubsetSize<0, entire dataset is processed; Subset[] array is ignored in this case. RESULT: sum-of-squares error, SUM(sqr(y[i]-desired_y[i])/2) DATASET FORMAT: This function uses two different dataset formats - one for regression networks, another one for classification networks. For regression networks with NIn inputs and NOut outputs following dataset format is used: * dataset is given by NPoints*(NIn+NOut) matrix * each row corresponds to one example * first NIn columns are inputs, next NOut columns are outputs For classification networks with NIn inputs and NClasses classes following dataset format is used: * dataset is given by NPoints*(NIn+1) matrix * each row corresponds to one example * first NIn columns are inputs, last column stores class number (from 0 to NClasses-1). -- ALGLIB -- Copyright 04.09.2012 by Bochkanov Sergey *************************************************************************/
double mlperrorsubset(multilayerperceptron &network, const real_2d_array &xy, const ae_int_t setsize, const integer_1d_array &subset, const ae_int_t subsetsize, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function returns offset/scaling coefficients for I-th input of the network. INPUT PARAMETERS: Network - network I - input index OUTPUT PARAMETERS: Mean - mean term Sigma - sigma term, guaranteed to be nonzero. I-th input is passed through linear transformation IN[i] = (IN[i]-Mean)/Sigma before feeding to the network -- ALGLIB -- Copyright 25.03.2011 by Bochkanov Sergey *************************************************************************/
void mlpgetinputscaling(const multilayerperceptron &network, const ae_int_t i, double &mean, double &sigma, const xparams _xparams = alglib::xdefault);
/************************************************************************* Returns number of inputs. -- ALGLIB -- Copyright 19.10.2011 by Bochkanov Sergey *************************************************************************/
ae_int_t mlpgetinputscount(const multilayerperceptron &network, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function returns total number of layers (including input, hidden and output layers). -- ALGLIB -- Copyright 25.03.2011 by Bochkanov Sergey *************************************************************************/
ae_int_t mlpgetlayerscount(const multilayerperceptron &network, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function returns size of K-th layer. K=0 corresponds to input layer, K=CNT-1 corresponds to output layer. Size of the output layer is always equal to the number of outputs, although for a softmax-normalized network the last neuron doesn't have any connections - it is just zero. -- ALGLIB -- Copyright 25.03.2011 by Bochkanov Sergey *************************************************************************/
ae_int_t mlpgetlayersize(const multilayerperceptron &network, const ae_int_t k, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function returns information about Ith neuron of Kth layer INPUT PARAMETERS: Network - network K - layer index I - neuron index (within layer) OUTPUT PARAMETERS: FKind - activation function type (used by MLPActivationFunction()); this value is zero for input or linear neurons Threshold - also called offset or bias; zero for input neurons NOTE: this function throws an exception if the layer or neuron with the given index does not exist. -- ALGLIB -- Copyright 25.03.2011 by Bochkanov Sergey *************************************************************************/
void mlpgetneuroninfo(multilayerperceptron &network, const ae_int_t k, const ae_int_t i, ae_int_t &fkind, double &threshold, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function returns offset/scaling coefficients for I-th output of the network. INPUT PARAMETERS: Network - network I - output index OUTPUT PARAMETERS: Mean - mean term Sigma - sigma term, guaranteed to be nonzero. I-th output is passed through linear transformation OUT[i] = OUT[i]*Sigma+Mean before returning it to user. In case we have SOFTMAX-normalized network, we return (Mean,Sigma)=(0.0,1.0). -- ALGLIB -- Copyright 25.03.2011 by Bochkanov Sergey *************************************************************************/
void mlpgetoutputscaling(const multilayerperceptron &network, const ae_int_t i, double &mean, double &sigma, const xparams _xparams = alglib::xdefault);
/************************************************************************* Returns number of outputs. -- ALGLIB -- Copyright 19.10.2011 by Bochkanov Sergey *************************************************************************/
ae_int_t mlpgetoutputscount(const multilayerperceptron &network, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function returns information about connection from I0-th neuron of K0-th layer to I1-th neuron of K1-th layer. INPUT PARAMETERS: Network - network K0 - layer index I0 - neuron index (within layer) K1 - layer index I1 - neuron index (within layer) RESULT: connection weight (zero for non-existent connections) This function: 1. throws an exception if the layer or neuron with the given index does not exist. 2. returns zero if the neurons exist, but there is no connection between them -- ALGLIB -- Copyright 25.03.2011 by Bochkanov Sergey *************************************************************************/
double mlpgetweight(multilayerperceptron &network, const ae_int_t k0, const ae_int_t i0, const ae_int_t k1, const ae_int_t i1, const xparams _xparams = alglib::xdefault);
/************************************************************************* Returns number of weights. -- ALGLIB -- Copyright 19.10.2011 by Bochkanov Sergey *************************************************************************/
ae_int_t mlpgetweightscount(const multilayerperceptron &network, const xparams _xparams = alglib::xdefault);
/************************************************************************* Gradient calculation INPUT PARAMETERS: Network - network initialized with one of the network creation funcs X - input vector, length of array must be at least NIn DesiredY- desired outputs, length of array must be at least NOut Grad - possibly preallocated array. If size of array is smaller than WCount, it will be reallocated. It is recommended to reuse previously allocated array to reduce allocation overhead. OUTPUT PARAMETERS: E - error function, SUM(sqr(y[i]-desiredy[i])/2,i) Grad - gradient of E with respect to weights of network, array[WCount] -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
void mlpgrad(multilayerperceptron &network, const real_1d_array &x, const real_1d_array &desiredy, double &e, real_1d_array &grad, const xparams _xparams = alglib::xdefault);
/************************************************************************* Batch gradient calculation for a set of inputs/outputs ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: Network - network initialized with one of the network creation funcs XY - original dataset in dense format; one sample = one row: * first NIn columns contain inputs, * for regression problem, next NOut columns store desired outputs. * for classification problem, next column (just one!) stores class number. SSize - number of elements in XY Grad - possibly preallocated array. If size of array is smaller than WCount, it will be reallocated. It is recommended to reuse previously allocated array to reduce allocation overhead. OUTPUT PARAMETERS: E - error function, SUM(sqr(y[i]-desiredy[i])/2,i) Grad - gradient of E with respect to weights of network, array[WCount] -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
void mlpgradbatch(multilayerperceptron &network, const real_2d_array &xy, const ae_int_t ssize, double &e, real_1d_array &grad, const xparams _xparams = alglib::xdefault);
/************************************************************************* Batch gradient calculation for a set of inputs/outputs given by sparse matrices ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: Network - network initialized with one of the network creation funcs XY - original dataset in sparse format; one sample = one row: * MATRIX MUST BE STORED IN CRS FORMAT * first NIn columns contain inputs. * for regression problem, next NOut columns store desired outputs. * for classification problem, next column (just one!) stores class number. SSize - number of elements in XY Grad - possibly preallocated array. If size of array is smaller than WCount, it will be reallocated. It is recommended to reuse previously allocated array to reduce allocation overhead. OUTPUT PARAMETERS: E - error function, SUM(sqr(y[i]-desiredy[i])/2,i) Grad - gradient of E with respect to weights of network, array[WCount] -- ALGLIB -- Copyright 26.07.2012 by Bochkanov Sergey *************************************************************************/
void mlpgradbatchsparse(multilayerperceptron &network, const sparsematrix &xy, const ae_int_t ssize, double &e, real_1d_array &grad, const xparams _xparams = alglib::xdefault);
/************************************************************************* Batch gradient calculation for a set of inputs/outputs for a subset of dataset given by set of indexes. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: Network - network initialized with one of the network creation funcs XY - original dataset in sparse format; one sample = one row: * MATRIX MUST BE STORED IN CRS FORMAT * first NIn columns contain inputs, * for regression problem, next NOut columns store desired outputs. * for classification problem, next column (just one!) stores class number. SetSize - real size of XY, SetSize>=0; Idx - subset of SubsetSize elements, array[SubsetSize]: * Idx[I] stores row index in the original dataset which is given by XY. Gradient is calculated with respect to rows whose indexes are stored in Idx[]. * Idx[] must store correct indexes; this function throws an exception in case incorrect index (less than 0 or larger than rows(XY)) is given * Idx[] may store indexes in any order and even with repetitions. SubsetSize- number of elements in Idx[] array: * positive value means that subset given by Idx[] is processed * zero value results in zero gradient * negative value means that full dataset is processed Grad - possibly preallocated array. If size of array is smaller than WCount, it will be reallocated. It is recommended to reuse previously allocated array to reduce allocation overhead. 
OUTPUT PARAMETERS: E - error function, SUM(sqr(y[i]-desiredy[i])/2,i) Grad - gradient of E with respect to weights of network, array[WCount] NOTE: when SubsetSize<0, the entire dataset is processed, as in a call to the MLPGradBatchSparse function. -- ALGLIB -- Copyright 26.07.2012 by Bochkanov Sergey *************************************************************************/
void mlpgradbatchsparsesubset(multilayerperceptron &network, const sparsematrix &xy, const ae_int_t setsize, const integer_1d_array &idx, const ae_int_t subsetsize, double &e, real_1d_array &grad, const xparams _xparams = alglib::xdefault);
/************************************************************************* Batch gradient calculation for a subset of dataset ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: Network - network initialized with one of the network creation funcs XY - original dataset in dense format; one sample = one row: * first NIn columns contain inputs, * for regression problem, next NOut columns store desired outputs. * for classification problem, next column (just one!) stores class number. SetSize - real size of XY, SetSize>=0; Idx - subset of SubsetSize elements, array[SubsetSize]: * Idx[I] stores row index in the original dataset which is given by XY. Gradient is calculated with respect to rows whose indexes are stored in Idx[]. * Idx[] must store correct indexes; this function throws an exception in case incorrect index (less than 0 or larger than rows(XY)) is given * Idx[] may store indexes in any order and even with repetitions. SubsetSize- number of elements in Idx[] array: * positive value means that subset given by Idx[] is processed * zero value results in zero gradient * negative value means that full dataset is processed Grad - possibly preallocated array. If size of array is smaller than WCount, it will be reallocated. It is recommended to reuse previously allocated array to reduce allocation overhead. 
OUTPUT PARAMETERS: E - error function, SUM(sqr(y[i]-desiredy[i])/2,i) Grad - gradient of E with respect to weights of network, array[WCount] -- ALGLIB -- Copyright 26.07.2012 by Bochkanov Sergey *************************************************************************/
void mlpgradbatchsubset(multilayerperceptron &network, const real_2d_array &xy, const ae_int_t setsize, const integer_1d_array &idx, const ae_int_t subsetsize, double &e, real_1d_array &grad, const xparams _xparams = alglib::xdefault);
/************************************************************************* Gradient calculation (natural error function is used) INPUT PARAMETERS: Network - network initialized with one of the network creation funcs X - input vector, length of array must be at least NIn DesiredY- desired outputs, length of array must be at least NOut Grad - possibly preallocated array. If size of array is smaller than WCount, it will be reallocated. It is recommended to reuse previously allocated array to reduce allocation overhead. OUTPUT PARAMETERS: E - error function, sum-of-squares for regression networks, cross-entropy for classification networks. Grad - gradient of E with respect to weights of network, array[WCount] -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
void mlpgradn(multilayerperceptron &network, const real_1d_array &x, const real_1d_array &desiredy, double &e, real_1d_array &grad, const xparams _xparams = alglib::xdefault);
/************************************************************************* Batch gradient calculation for a set of inputs/outputs (natural error function is used) INPUT PARAMETERS: Network - network initialized with one of the network creation funcs XY - set of inputs/outputs; one sample = one row; first NIn columns contain inputs, next NOut columns - desired outputs. SSize - number of elements in XY Grad - possibly preallocated array. If size of array is smaller than WCount, it will be reallocated. It is recommended to reuse previously allocated array to reduce allocation overhead. OUTPUT PARAMETERS: E - error function, sum-of-squares for regression networks, cross-entropy for classification networks. Grad - gradient of E with respect to weights of network, array[WCount] -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
void mlpgradnbatch(multilayerperceptron &network, const real_2d_array &xy, const ae_int_t ssize, double &e, real_1d_array &grad, const xparams _xparams = alglib::xdefault);
/************************************************************************* Batch Hessian calculation using R-algorithm. Internal subroutine. -- ALGLIB -- Copyright 26.01.2008 by Bochkanov Sergey. Hessian calculation based on R-algorithm described in "Fast Exact Multiplication by the Hessian", B. A. Pearlmutter, Neural Computation, 1994. *************************************************************************/
void mlphessianbatch(multilayerperceptron &network, const real_2d_array &xy, const ae_int_t ssize, double &e, real_1d_array &grad, real_2d_array &h, const xparams _xparams = alglib::xdefault);
/************************************************************************* Batch Hessian calculation (natural error function) using R-algorithm. Internal subroutine. -- ALGLIB -- Copyright 26.01.2008 by Bochkanov Sergey. Hessian calculation based on R-algorithm described in "Fast Exact Multiplication by the Hessian", B. A. Pearlmutter, Neural Computation, 1994. *************************************************************************/
void mlphessiannbatch(multilayerperceptron &network, const real_2d_array &xy, const ae_int_t ssize, double &e, real_1d_array &grad, real_2d_array &h, const xparams _xparams = alglib::xdefault);
/************************************************************************* Internal subroutine. -- ALGLIB -- Copyright 30.03.2008 by Bochkanov Sergey *************************************************************************/
void mlpinitpreprocessor(multilayerperceptron &network, const real_2d_array &xy, const ae_int_t ssize, const xparams _xparams = alglib::xdefault);
/************************************************************************* Tells whether network is SOFTMAX-normalized (i.e. classifier) or not. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
bool mlpissoftmax(const multilayerperceptron &network, const xparams _xparams = alglib::xdefault);
/************************************************************************* Processing INPUT PARAMETERS: Network - neural network X - input vector, array[0..NIn-1]. OUTPUT PARAMETERS: Y - result. Regression estimate when solving regression task, vector of posterior probabilities for classification task. See also MLPProcessI -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
void mlpprocess(multilayerperceptron &network, const real_1d_array &x, real_1d_array &y, const xparams _xparams = alglib::xdefault);
/************************************************************************* 'interactive' variant of MLPProcess for languages like Python which support constructs like "Y = MLPProcess(NN,X)" and interactive mode of the interpreter. This function allocates a new array on each call, so it is significantly slower than its 'non-interactive' counterpart, but it is more convenient when you call it from the command line. -- ALGLIB -- Copyright 21.09.2010 by Bochkanov Sergey *************************************************************************/
void mlpprocessi(multilayerperceptron &network, const real_1d_array &x, real_1d_array &y, const xparams _xparams = alglib::xdefault);
/************************************************************************* Returns information about initialized network: number of inputs, outputs, weights. -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
void mlpproperties(const multilayerperceptron &network, ae_int_t &nin, ae_int_t &nout, ae_int_t &wcount, const xparams _xparams = alglib::xdefault);
/************************************************************************* Randomization of neural network weights -- ALGLIB -- Copyright 06.11.2007 by Bochkanov Sergey *************************************************************************/
void mlprandomize(multilayerperceptron &network, const xparams _xparams = alglib::xdefault);
/************************************************************************* Randomization of neural network weights and standardizer -- ALGLIB -- Copyright 10.03.2008 by Bochkanov Sergey *************************************************************************/
void mlprandomizefull(multilayerperceptron &network, const xparams _xparams = alglib::xdefault);
/************************************************************************* Relative classification error on the test set. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: Network - neural network; XY - training set, see below for information on the training set format; NPoints - points count. RESULT: Percent of incorrectly classified cases. Works both for classifier networks and general purpose networks used as classifiers. DATASET FORMAT: This function uses two different dataset formats - one for regression networks, another one for classification networks. For regression networks with NIn inputs and NOut outputs following dataset format is used: * dataset is given by NPoints*(NIn+NOut) matrix * each row corresponds to one example * first NIn columns are inputs, next NOut columns are outputs For classification networks with NIn inputs and NClasses classes following dataset format is used: * dataset is given by NPoints*(NIn+1) matrix * each row corresponds to one example * first NIn columns are inputs, last column stores class number (from 0 to NClasses-1). -- ALGLIB -- Copyright 25.12.2008 by Bochkanov Sergey *************************************************************************/
double mlprelclserror(multilayerperceptron &network, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* Relative classification error on the test set given by sparse matrix. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: Network - neural network; XY - training set, see below for information on the training set format. Sparse matrix must use CRS format for storage. NPoints - points count, >=0. RESULT: Percent of incorrectly classified cases. Works both for classifier networks and general purpose networks used as classifiers. DATASET FORMAT: This function uses two different dataset formats - one for regression networks, another one for classification networks. For regression networks with NIn inputs and NOut outputs following dataset format is used: * dataset is given by NPoints*(NIn+NOut) matrix * each row corresponds to one example * first NIn columns are inputs, next NOut columns are outputs For classification networks with NIn inputs and NClasses classes following dataset format is used: * dataset is given by NPoints*(NIn+1) matrix * each row corresponds to one example * first NIn columns are inputs, last column stores class number (from 0 to NClasses-1). -- ALGLIB -- Copyright 09.08.2012 by Bochkanov Sergey *************************************************************************/
double mlprelclserrorsparse(multilayerperceptron &network, const sparsematrix &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* RMS error on the test set. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: Network - neural network; XY - training set, see below for information on the training set format; NPoints - points count. RESULT: Root mean square error. Its meaning for regression task is obvious. As for classification task, RMS error means error when estimating posterior probabilities. DATASET FORMAT: This function uses two different dataset formats - one for regression networks, another one for classification networks. For regression networks with NIn inputs and NOut outputs following dataset format is used: * dataset is given by NPoints*(NIn+NOut) matrix * each row corresponds to one example * first NIn columns are inputs, next NOut columns are outputs For classification networks with NIn inputs and NClasses classes following dataset format is used: * dataset is given by NPoints*(NIn+1) matrix * each row corresponds to one example * first NIn columns are inputs, last column stores class number (from 0 to NClasses-1). -- ALGLIB -- Copyright 04.11.2007 by Bochkanov Sergey *************************************************************************/
double mlprmserror(multilayerperceptron &network, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* RMS error on the test set given by sparse matrix. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: Network - neural network; XY - training set, see below for information on the training set format. This function checks correctness of the dataset (no NANs/INFs, class numbers are correct) and throws exception when incorrect dataset is passed. Sparse matrix must use CRS format for storage. NPoints - points count, >=0. RESULT: Root mean square error. Its meaning for regression task is obvious. As for classification task, RMS error means error when estimating posterior probabilities. DATASET FORMAT: This function uses two different dataset formats - one for regression networks, another one for classification networks. For regression networks with NIn inputs and NOut outputs following dataset format is used: * dataset is given by NPoints*(NIn+NOut) matrix * each row corresponds to one example * first NIn columns are inputs, next NOut columns are outputs For classification networks with NIn inputs and NClasses classes following dataset format is used: * dataset is given by NPoints*(NIn+1) matrix * each row corresponds to one example * first NIn columns are inputs, last column stores class number (from 0 to NClasses-1). -- ALGLIB -- Copyright 09.08.2012 by Bochkanov Sergey *************************************************************************/
double mlprmserrorsparse(multilayerperceptron &network, const sparsematrix &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function serializes data structure to string/stream. Important properties of s_out: * it contains alphanumeric characters, dots, underscores, minus signs * these symbols are grouped into words, which are separated by spaces and Windows-style (CR+LF) newlines * although serializer uses spaces and CR+LF as separators, you can replace any separator character by arbitrary combination of spaces, tabs, Windows or Unix newlines. It allows flexible reformatting of the string in case you want to include it into a text or XML file. But you should not insert separators into the middle of the "words" nor should you change the case of letters. * s_out can be freely moved between 32-bit and 64-bit systems, little and big endian machines, and so on. You can serialize structure on 32-bit machine and unserialize it on 64-bit one (or vice versa), or serialize it on SPARC and unserialize on x86. You can also serialize it in C++ version of ALGLIB and unserialize it in C# one, and vice versa. *************************************************************************/
void mlpserialize(const multilayerperceptron &obj, std::string &s_out);
void mlpserialize(const multilayerperceptron &obj, std::ostream &s_out);
/************************************************************************* This function sets offset/scaling coefficients for I-th input of the network. INPUT PARAMETERS: Network - network I - input index Mean - mean term Sigma - sigma term (if zero, will be replaced by 1.0) NOTE: I-th input is passed through linear transformation IN[i] = (IN[i]-Mean)/Sigma before feeding to the network. This function sets Mean and Sigma. -- ALGLIB -- Copyright 25.03.2011 by Bochkanov Sergey *************************************************************************/
void mlpsetinputscaling(multilayerperceptron &network, const ae_int_t i, const double mean, const double sigma, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function modifies information about Ith neuron of Kth layer INPUT PARAMETERS: Network - network K - layer index I - neuron index (within layer) FKind - activation function type (used by MLPActivationFunction()) this value must be zero for input neurons (you can not set activation function for input neurons) Threshold - also called offset, bias this value must be zero for input neurons (you can not set threshold for input neurons) NOTES: 1. this function throws exception if layer or neuron with given index does not exist. 2. this function also throws exception when you try to set non-linear activation function for input neurons (any kind of network) or for output neurons of classifier network. 3. this function throws exception when you try to set non-zero threshold for input neurons (any kind of network). -- ALGLIB -- Copyright 25.03.2011 by Bochkanov Sergey *************************************************************************/
void mlpsetneuroninfo(multilayerperceptron &network, const ae_int_t k, const ae_int_t i, const ae_int_t fkind, const double threshold, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets offset/scaling coefficients for I-th output of the network. INPUT PARAMETERS: Network - network I - output index Mean - mean term Sigma - sigma term (if zero, will be replaced by 1.0) NOTE: I-th output is passed through linear transformation OUT[i] = OUT[i]*Sigma+Mean before returning it to user. This function sets Sigma/Mean. In case we have SOFTMAX-normalized network, you can not set (Sigma,Mean) to anything other than (0.0,1.0) - this function will throw exception. -- ALGLIB -- Copyright 25.03.2011 by Bochkanov Sergey *************************************************************************/
void mlpsetoutputscaling(multilayerperceptron &network, const ae_int_t i, const double mean, const double sigma, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function modifies information about connection from I0-th neuron of K0-th layer to I1-th neuron of K1-th layer. INPUT PARAMETERS: Network - network K0 - layer index I0 - neuron index (within layer) K1 - layer index I1 - neuron index (within layer) W - connection weight (must be zero for non-existent connections) This function: 1. throws exception if layer or neuron with given index does not exist. 2. throws exception if you try to set non-zero weight for non-existent connection -- ALGLIB -- Copyright 25.03.2011 by Bochkanov Sergey *************************************************************************/
void mlpsetweight(multilayerperceptron &network, const ae_int_t k0, const ae_int_t i0, const ae_int_t k1, const ae_int_t i1, const double w, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function unserializes data structure from string/stream. *************************************************************************/
void mlpunserialize(const std::string &s_in, multilayerperceptron &obj);
void mlpunserialize(const std::istream &s_in, multilayerperceptron &obj);
mlpensemble
mlpeavgce
mlpeavgerror
mlpeavgrelerror
mlpecreate0
mlpecreate1
mlpecreate2
mlpecreateb0
mlpecreateb1
mlpecreateb2
mlpecreatec0
mlpecreatec1
mlpecreatec2
mlpecreatefromnetwork
mlpecreater0
mlpecreater1
mlpecreater2
mlpeissoftmax
mlpeprocess
mlpeprocessi
mlpeproperties
mlperandomize
mlperelclserror
mlpermserror
mlpeserialize
mlpeunserialize
/************************************************************************* Neural networks ensemble *************************************************************************/
class mlpensemble
{
public:
    mlpensemble();
    mlpensemble(const mlpensemble &rhs);
    mlpensemble& operator=(const mlpensemble &rhs);
    virtual ~mlpensemble();
};
/************************************************************************* Average cross-entropy (in bits per element) on the test set INPUT PARAMETERS: Ensemble- ensemble XY - test set NPoints - test set size RESULT: CrossEntropy/(NPoints*LN(2)). Zero if ensemble solves regression task. -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
double mlpeavgce(mlpensemble &ensemble, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* Average error on the test set INPUT PARAMETERS: Ensemble- ensemble XY - test set NPoints - test set size RESULT: Its meaning for regression task is obvious. As for classification task it means average error when estimating posterior probabilities. -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
double mlpeavgerror(mlpensemble &ensemble, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* Average relative error on the test set INPUT PARAMETERS: Ensemble- ensemble XY - test set NPoints - test set size RESULT: Its meaning for regression task is obvious. As for classification task it means average relative error when estimating posterior probabilities. -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
double mlpeavgrelerror(mlpensemble &ensemble, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* Like MLPCreate0, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/
void mlpecreate0(const ae_int_t nin, const ae_int_t nout, const ae_int_t ensemblesize, mlpensemble &ensemble, const xparams _xparams = alglib::xdefault);
/************************************************************************* Like MLPCreate1, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/
void mlpecreate1(const ae_int_t nin, const ae_int_t nhid, const ae_int_t nout, const ae_int_t ensemblesize, mlpensemble &ensemble, const xparams _xparams = alglib::xdefault);
/************************************************************************* Like MLPCreate2, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/
void mlpecreate2(const ae_int_t nin, const ae_int_t nhid1, const ae_int_t nhid2, const ae_int_t nout, const ae_int_t ensemblesize, mlpensemble &ensemble, const xparams _xparams = alglib::xdefault);
/************************************************************************* Like MLPCreateB0, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/
void mlpecreateb0(const ae_int_t nin, const ae_int_t nout, const double b, const double d, const ae_int_t ensemblesize, mlpensemble &ensemble, const xparams _xparams = alglib::xdefault);
/************************************************************************* Like MLPCreateB1, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/
void mlpecreateb1(const ae_int_t nin, const ae_int_t nhid, const ae_int_t nout, const double b, const double d, const ae_int_t ensemblesize, mlpensemble &ensemble, const xparams _xparams = alglib::xdefault);
/************************************************************************* Like MLPCreateB2, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/
void mlpecreateb2(const ae_int_t nin, const ae_int_t nhid1, const ae_int_t nhid2, const ae_int_t nout, const double b, const double d, const ae_int_t ensemblesize, mlpensemble &ensemble, const xparams _xparams = alglib::xdefault);
/************************************************************************* Like MLPCreateC0, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/
void mlpecreatec0(const ae_int_t nin, const ae_int_t nout, const ae_int_t ensemblesize, mlpensemble &ensemble, const xparams _xparams = alglib::xdefault);
/************************************************************************* Like MLPCreateC1, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/
void mlpecreatec1(const ae_int_t nin, const ae_int_t nhid, const ae_int_t nout, const ae_int_t ensemblesize, mlpensemble &ensemble, const xparams _xparams = alglib::xdefault);
/************************************************************************* Like MLPCreateC2, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/
void mlpecreatec2(const ae_int_t nin, const ae_int_t nhid1, const ae_int_t nhid2, const ae_int_t nout, const ae_int_t ensemblesize, mlpensemble &ensemble, const xparams _xparams = alglib::xdefault);
/************************************************************************* Creates ensemble from network. Only network geometry is copied. -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
void mlpecreatefromnetwork(const multilayerperceptron &network, const ae_int_t ensemblesize, mlpensemble &ensemble, const xparams _xparams = alglib::xdefault);
/************************************************************************* Like MLPCreateR0, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/
void mlpecreater0(const ae_int_t nin, const ae_int_t nout, const double a, const double b, const ae_int_t ensemblesize, mlpensemble &ensemble, const xparams _xparams = alglib::xdefault);
/************************************************************************* Like MLPCreateR1, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/
void mlpecreater1(const ae_int_t nin, const ae_int_t nhid, const ae_int_t nout, const double a, const double b, const ae_int_t ensemblesize, mlpensemble &ensemble, const xparams _xparams = alglib::xdefault);
/************************************************************************* Like MLPCreateR2, but for ensembles. -- ALGLIB -- Copyright 18.02.2009 by Bochkanov Sergey *************************************************************************/
void mlpecreater2(const ae_int_t nin, const ae_int_t nhid1, const ae_int_t nhid2, const ae_int_t nout, const double a, const double b, const ae_int_t ensemblesize, mlpensemble &ensemble, const xparams _xparams = alglib::xdefault);
/************************************************************************* Return normalization type (whether ensemble is SOFTMAX-normalized or not). -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
bool mlpeissoftmax(const mlpensemble &ensemble, const xparams _xparams = alglib::xdefault);
/************************************************************************* Processing INPUT PARAMETERS: Ensemble- neural networks ensemble X - input vector, array[0..NIn-1]. Y - (possibly) preallocated buffer; if size of Y is less than NOut, it will be reallocated. If it is large enough, it is NOT reallocated, so we can save some time on reallocation. OUTPUT PARAMETERS: Y - result. Regression estimate when solving regression task, vector of posterior probabilities for classification task. -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
void mlpeprocess(mlpensemble &ensemble, const real_1d_array &x, real_1d_array &y, const xparams _xparams = alglib::xdefault);
/************************************************************************* 'interactive' variant of MLPEProcess for languages like Python which support constructs like "Y = MLPEProcess(LM,X)" and interactive mode of the interpreter. This function allocates a new array on each call, so it is significantly slower than its 'non-interactive' counterpart, but it is more convenient when you call it from command line. -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
void mlpeprocessi(mlpensemble &ensemble, const real_1d_array &x, real_1d_array &y, const xparams _xparams = alglib::xdefault);
/************************************************************************* Return ensemble properties (number of inputs and outputs). -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
void mlpeproperties(const mlpensemble &ensemble, ae_int_t &nin, ae_int_t &nout, const xparams _xparams = alglib::xdefault);
/************************************************************************* Randomization of MLP ensemble -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
void mlperandomize(mlpensemble &ensemble, const xparams _xparams = alglib::xdefault);
/************************************************************************* Relative classification error on the test set INPUT PARAMETERS: Ensemble- ensemble XY - test set NPoints - test set size RESULT: percent of incorrectly classified cases. Works both for classifier networks and for regression networks which are used as classifiers. -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
double mlperelclserror(mlpensemble &ensemble, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* RMS error on the test set INPUT PARAMETERS: Ensemble- ensemble XY - test set NPoints - test set size RESULT: root mean square error. Its meaning for regression task is obvious. As for classification task RMS error means error when estimating posterior probabilities. -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
double mlpermserror(mlpensemble &ensemble, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function serializes data structure to string/stream. Important properties of s_out: * it contains alphanumeric characters, dots, underscores, minus signs * these symbols are grouped into words, which are separated by spaces and Windows-style (CR+LF) newlines * although serializer uses spaces and CR+LF as separators, you can replace any separator character by arbitrary combination of spaces, tabs, Windows or Unix newlines. It allows flexible reformatting of the string in case you want to include it into a text or XML file. But you should not insert separators into the middle of the "words" nor should you change the case of letters. * s_out can be freely moved between 32-bit and 64-bit systems, little and big endian machines, and so on. You can serialize structure on 32-bit machine and unserialize it on 64-bit one (or vice versa), or serialize it on SPARC and unserialize on x86. You can also serialize it in C++ version of ALGLIB and unserialize it in C# one, and vice versa. *************************************************************************/
void mlpeserialize(const mlpensemble &obj, std::string &s_out);
void mlpeserialize(const mlpensemble &obj, std::ostream &s_out);
/************************************************************************* This function unserializes data structure from string/stream. *************************************************************************/
void mlpeunserialize(const std::string &s_in, mlpensemble &obj);
void mlpeunserialize(const std::istream &s_in, mlpensemble &obj);
mlpcvreport
mlpreport
mlptrainer
mlpcontinuetraining
mlpcreatetrainer
mlpcreatetrainercls
mlpebagginglbfgs
mlpebagginglm
mlpetraines
mlpkfoldcv
mlpkfoldcvlbfgs
mlpkfoldcvlm
mlpsetalgobatch
mlpsetcond
mlpsetdataset
mlpsetdecay
mlpsetsparsedataset
mlpstarttraining
mlptrainensemblees
mlptraines
mlptrainlbfgs
mlptrainlm
mlptrainnetwork
nn_cls2 Binary classification problem
nn_cls3 Multiclass classification problem
nn_crossvalidation Cross-validation
nn_ensembles_es Early stopping ensembles
nn_parallel Parallel training
nn_regr Regression problem with one output (2=>1)
nn_regr_n Regression problem with multiple outputs (2=>2)
nn_trainerobject Advanced example on trainer object
/************************************************************************* Cross-validation estimates of generalization error *************************************************************************/
class mlpcvreport
{
public:
    mlpcvreport();
    mlpcvreport(const mlpcvreport &rhs);
    mlpcvreport& operator=(const mlpcvreport &rhs);
    virtual ~mlpcvreport();
    double relclserror;
    double avgce;
    double rmserror;
    double avgerror;
    double avgrelerror;
};
/************************************************************************* Training report: * RelCLSError - fraction of misclassified cases. * AvgCE - average cross-entropy * RMSError - root-mean-square error * AvgError - average error * AvgRelError - average relative error * NGrad - number of gradient calculations * NHess - number of Hessian calculations * NCholesky - number of Cholesky decompositions NOTE 1: RelCLSError/AvgCE are zero on regression problems. NOTE 2: on classification problems RMSError/AvgError/AvgRelError contain errors in prediction of posterior probabilities *************************************************************************/
class mlpreport
{
public:
    mlpreport();
    mlpreport(const mlpreport &rhs);
    mlpreport& operator=(const mlpreport &rhs);
    virtual ~mlpreport();
    double relclserror;
    double avgce;
    double rmserror;
    double avgerror;
    double avgrelerror;
    ae_int_t ngrad;
    ae_int_t nhess;
    ae_int_t ncholesky;
};
/************************************************************************* Trainer object for neural network. You should not try to access fields of this object directly - use ALGLIB functions to work with this object. *************************************************************************/
class mlptrainer
{
public:
    mlptrainer();
    mlptrainer(const mlptrainer &rhs);
    mlptrainer& operator=(const mlptrainer &rhs);
    virtual ~mlptrainer();
};
/************************************************************************* IMPORTANT: this is an "expert" version of the MLPTrain() function. We do not recommend you to use it unless you are pretty sure that you need ability to monitor training progress. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. This function performs step-by-step training of the neural network. Here "step-by-step" means that training starts with MLPStartTraining() call, and then user subsequently calls MLPContinueTraining() to perform one more iteration of the training. This function performs one more iteration of the training and returns either True (training continues) or False (training stopped). In case True was returned, Network weights are updated according to the current state of the optimization progress. In case False was returned, no additional updates are performed (the previous update of the network weights moved us to the final point, and no additional updates are needed). EXAMPLE: > > [initialize network and trainer object] > > MLPStartTraining(Trainer, Network, True) > while MLPContinueTraining(Trainer, Network) do > [visualize training progress] > INPUT PARAMETERS: S - trainer object Network - neural network structure, which is used to store current state of the training process. OUTPUT PARAMETERS: Network - weights of the neural network are rewritten by the current approximation. NOTE: this method uses sum-of-squares error function for training. NOTE: it is expected that trainer object settings are NOT changed during step-by-step training, i.e. no one changes stopping criteria or training set during training. It is possible and there is no defense against such actions, but algorithm behavior in such cases is undefined and can be unpredictable. NOTE: It is expected that Network is the same one which was passed to MLPStartTraining() function. However, THIS function checks only following: * that number of network inputs is consistent with trainer object settings * that number of network outputs/classes is consistent with trainer object settings * that number of network weights is the same as number of weights in the network passed to MLPStartTraining() function Exception is thrown when these conditions are violated. It is also expected that you do not change state of the network on your own - the only party who has right to change network during its training is a trainer object. Any attempt to interfere with trainer may lead to unpredictable results. -- ALGLIB -- Copyright 23.07.2012 by Bochkanov Sergey *************************************************************************/
bool mlpcontinuetraining(mlptrainer &s, multilayerperceptron &network, const xparams _xparams = alglib::xdefault);
/************************************************************************* Creation of the network trainer object for regression networks INPUT PARAMETERS: NIn - number of inputs, NIn>=1 NOut - number of outputs, NOut>=1 OUTPUT PARAMETERS: S - neural network trainer object. This structure can be used to train any regression network with NIn inputs and NOut outputs. -- ALGLIB -- Copyright 23.07.2012 by Bochkanov Sergey *************************************************************************/
void mlpcreatetrainer(const ae_int_t nin, const ae_int_t nout, mlptrainer &s, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  [5]  [6]  

/************************************************************************* Creation of the network trainer object for classification networks INPUT PARAMETERS: NIn - number of inputs, NIn>=1 NClasses - number of classes, NClasses>=2 OUTPUT PARAMETERS: S - neural network trainer object. This structure can be used to train any classification network with NIn inputs and NClasses outputs. -- ALGLIB -- Copyright 23.07.2012 by Bochkanov Sergey *************************************************************************/
void mlpcreatetrainercls(const ae_int_t nin, const ae_int_t nclasses, mlptrainer &s, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* Training a neural network ensemble using bootstrap aggregating (bagging). The L-BFGS algorithm is used as the base training method. INPUT PARAMETERS: Ensemble - model with initialized geometry XY - training set NPoints - training set size Decay - weight decay coefficient, >=0.001 Restarts - restarts, >0. WStep - stopping criterion, same as in MLPTrainLBFGS MaxIts - stopping criterion, same as in MLPTrainLBFGS OUTPUT PARAMETERS: Ensemble - trained model Info - return code: * -8, if both WStep=0 and MaxIts=0 * -2, if there is a point with class number outside of [0..NClasses-1]. * -1, if incorrect parameters were passed (NPoints<0, Restarts<1). * 2, if task has been solved. Rep - training report. OOBErrors - out-of-bag generalization error estimate -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
void mlpebagginglbfgs(mlpensemble &ensemble, const real_2d_array &xy, const ae_int_t npoints, const double decay, const ae_int_t restarts, const double wstep, const ae_int_t maxits, ae_int_t &info, mlpreport &rep, mlpcvreport &ooberrors, const xparams _xparams = alglib::xdefault);
/************************************************************************* Training a neural network ensemble using bootstrap aggregating (bagging). The modified Levenberg-Marquardt algorithm is used as the base training method. INPUT PARAMETERS: Ensemble - model with initialized geometry XY - training set NPoints - training set size Decay - weight decay coefficient, >=0.001 Restarts - restarts, >0. OUTPUT PARAMETERS: Ensemble - trained model Info - return code: * -2, if there is a point with class number outside of [0..NClasses-1]. * -1, if incorrect parameters were passed (NPoints<0, Restarts<1). * 2, if task has been solved. Rep - training report. OOBErrors - out-of-bag generalization error estimate -- ALGLIB -- Copyright 17.02.2009 by Bochkanov Sergey *************************************************************************/
void mlpebagginglm(mlpensemble &ensemble, const real_2d_array &xy, const ae_int_t npoints, const double decay, const ae_int_t restarts, ae_int_t &info, mlpreport &rep, mlpcvreport &ooberrors, const xparams _xparams = alglib::xdefault);
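The bagging routines above train each ensemble member on a bootstrap sample of the dataset and use the points left out of that sample to compute the OOBErrors estimate. A minimal self-contained sketch of this sampling step (plain C++, not part of the ALGLIB API; `bootstrap_oob` is an illustrative name):

```cpp
#include <cstdlib>
#include <vector>

// Draw a bootstrap sample of size n (sampling with replacement) and
// collect the indices that were never drawn - the "out-of-bag" (OOB)
// set. Each ensemble member is trained on its bootstrap sample, and
// the held-out OOB points provide the generalization error estimate.
std::vector<int> bootstrap_oob(int n, unsigned seed)
{
    std::srand(seed);
    std::vector<bool> drawn(n, false);
    for (int i = 0; i < n; i++)
        drawn[std::rand() % n] = true;
    std::vector<int> oob;
    for (int i = 0; i < n; i++)
        if (!drawn[i])
            oob.push_back(i);
    return oob; // for large n, roughly 37% of points end up out-of-bag
}
```

For large datasets each point has probability (1-1/n)^n ~ 1/e of being left out, which is why the OOB set covers roughly a third of the data.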
/************************************************************************* Training a neural network ensemble using early stopping. INPUT PARAMETERS: Ensemble - model with initialized geometry XY - training set NPoints - training set size Decay - weight decay coefficient, >=0.001 Restarts - restarts, >0. OUTPUT PARAMETERS: Ensemble - trained model Info - return code: * -2, if there is a point with class number outside of [0..NClasses-1]. * -1, if incorrect parameters were passed (NPoints<0, Restarts<1). * 6, if task has been solved. Rep - training report. OOBErrors - out-of-bag generalization error estimate -- ALGLIB -- Copyright 10.03.2009 by Bochkanov Sergey *************************************************************************/
void mlpetraines(mlpensemble &ensemble, const real_2d_array &xy, const ae_int_t npoints, const double decay, const ae_int_t restarts, ae_int_t &info, mlpreport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function estimates generalization error using cross-validation on the current dataset with current training settings. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: S - trainer object Network - neural network. It must have same number of inputs and output/classes as was specified during creation of the trainer object. Network is not changed during cross-validation and is not trained - it is used only as a representative of its architecture. I.e., we estimate generalization properties of the ARCHITECTURE, not some specific network. NRestarts - number of restarts, >=0: * NRestarts>0 means that for each cross-validation round the specified number of random restarts is performed, with the best network being chosen after training. * NRestarts=0 is same as NRestarts=1 FoldsCount - number of folds in k-fold cross-validation: * 2<=FoldsCount<=size of dataset * recommended value: 10. * values larger than dataset size will be silently truncated down to dataset size OUTPUT PARAMETERS: Rep - structure which contains cross-validation estimates: * Rep.RelCLSError - fraction of misclassified cases. * Rep.AvgCE - average cross-entropy * Rep.RMSError - root-mean-square error * Rep.AvgError - average error * Rep.AvgRelError - average relative error NOTE: when no dataset was specified with MLPSetDataset/SetSparseDataset(), or a subset with only one point was given, zeros are returned as estimates. NOTE: this method performs FoldsCount cross-validation rounds, each one with NRestarts random starts. Thus, FoldsCount*NRestarts networks are trained in total. NOTE: Rep.RelCLSError/Rep.AvgCE are zero on regression problems. NOTE: on classification problems Rep.RMSError/Rep.AvgError/Rep.AvgRelError contain errors in prediction of posterior probabilities. -- ALGLIB -- Copyright 23.07.2012 by Bochkanov Sergey *************************************************************************/
void mlpkfoldcv(mlptrainer &s, const multilayerperceptron &network, const ae_int_t nrestarts, const ae_int_t foldscount, mlpreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

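The note above says that FoldsCount*NRestarts networks are trained in total. The fold partitioning and round counting can be sketched in a few self-contained lines (plain C++, not ALGLIB API; the helper names are illustrative):

```cpp
#include <vector>

// Partition NPoints into FoldsCount near-equal folds, as in k-fold
// cross-validation: fold k serves as the held-out test set of round k.
std::vector<int> fold_sizes(int npoints, int foldscount)
{
    std::vector<int> sizes(foldscount);
    for (int k = 0; k < foldscount; k++)
        sizes[k] = npoints / foldscount + (k < npoints % foldscount ? 1 : 0);
    return sizes;
}

// Total number of networks trained by a k-fold CV run with random
// restarts; NRestarts=0 behaves the same as NRestarts=1.
int total_trainings(int foldscount, int nrestarts)
{
    return foldscount * (nrestarts > 0 ? nrestarts : 1);
}
```

For example, 10-fold cross-validation with 3 restarts per round trains 30 networks, which is the main cost driver when choosing FoldsCount.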
/************************************************************************* Cross-validation estimate of generalization error. Base algorithm - L-BFGS. INPUT PARAMETERS: Network - neural network with initialized geometry. Network is not changed during cross-validation - it is used only as a representative of its architecture. XY - training set. NPoints - training set size Decay - weight decay, same as in MLPTrainLBFGS Restarts - number of restarts, >0. Restarts are counted for each partition separately, so the total number of restarts will be Restarts*FoldsCount. WStep - stopping criterion, same as in MLPTrainLBFGS MaxIts - stopping criterion, same as in MLPTrainLBFGS FoldsCount - number of folds in k-fold cross-validation, 2<=FoldsCount<=NPoints. Recommended value: 10. OUTPUT PARAMETERS: Info - return code, same as in MLPTrainLBFGS Rep - report, same as in MLPTrainLM/MLPTrainLBFGS CVRep - generalization error estimates -- ALGLIB -- Copyright 09.12.2007 by Bochkanov Sergey *************************************************************************/
void mlpkfoldcvlbfgs(const multilayerperceptron &network, const real_2d_array &xy, const ae_int_t npoints, const double decay, const ae_int_t restarts, const double wstep, const ae_int_t maxits, const ae_int_t foldscount, ae_int_t &info, mlpreport &rep, mlpcvreport &cvrep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Cross-validation estimate of generalization error. Base algorithm - Levenberg-Marquardt. INPUT PARAMETERS: Network - neural network with initialized geometry. Network is not changed during cross-validation - it is used only as a representative of its architecture. XY - training set. NPoints - training set size Decay - weight decay, same as in MLPTrainLBFGS Restarts - number of restarts, >0. Restarts are counted for each partition separately, so the total number of restarts will be Restarts*FoldsCount. FoldsCount - number of folds in k-fold cross-validation, 2<=FoldsCount<=NPoints. Recommended value: 10. OUTPUT PARAMETERS: Info - return code, same as in MLPTrainLBFGS Rep - report, same as in MLPTrainLM/MLPTrainLBFGS CVRep - generalization error estimates -- ALGLIB -- Copyright 09.12.2007 by Bochkanov Sergey *************************************************************************/
void mlpkfoldcvlm(const multilayerperceptron &network, const real_2d_array &xy, const ae_int_t npoints, const double decay, const ae_int_t restarts, const ae_int_t foldscount, ae_int_t &info, mlpreport &rep, mlpcvreport &cvrep, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets the training algorithm: batch training using L-BFGS will be used. This algorithm: * is the most robust for small-scale problems, but may be too slow for large-scale ones * performs a full pass through the dataset before performing a step * uses conditions specified by MLPSetCond() for stopping * is the default one used by the trainer object INPUT PARAMETERS: S - trainer object -- ALGLIB -- Copyright 23.07.2012 by Bochkanov Sergey *************************************************************************/
void mlpsetalgobatch(mlptrainer &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets stopping criteria for the optimizer. INPUT PARAMETERS: S - trainer object WStep - stopping criterion. Algorithm stops if step size is less than WStep. Recommended value - 0.01. Zero step size means stopping after MaxIts iterations. WStep>=0. MaxIts - stopping criterion. Algorithm stops after MaxIts epochs (full passes over entire dataset). Zero MaxIts means stopping when step is sufficiently small. MaxIts>=0. NOTE: by default, WStep=0.005 and MaxIts=0 are used. These values are also used when MLPSetCond() is called with WStep=0 and MaxIts=0. NOTE: these stopping criteria are used for all kinds of neural network training - from "conventional" networks to early stopping ensembles. When used for "conventional" networks, they are used as the only stopping criteria. When combined with early stopping, they are used as ADDITIONAL stopping criteria which can terminate the early stopping algorithm. -- ALGLIB -- Copyright 23.07.2012 by Bochkanov Sergey *************************************************************************/
void mlpsetcond(mlptrainer &s, const double wstep, const ae_int_t maxits, const xparams _xparams = alglib::xdefault);
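The interaction of WStep and MaxIts described above (including the fallback to the defaults when both are zero) can be summarized in one predicate. A self-contained sketch (plain C++; `should_stop` is an illustrative helper, not an ALGLIB function):

```cpp
// Mirrors the documented MLPSetCond() semantics:
// * WStep=0 and MaxIts=0 together fall back to the defaults
//   WStep=0.005, MaxIts=0 (i.e. step-size criterion only);
// * otherwise a zero criterion simply means "not used".
bool should_stop(double wstep, int maxits, double step, int epoch)
{
    if (wstep == 0.0 && maxits == 0)
        wstep = 0.005;                 // documented default
    if (wstep > 0.0 && step < wstep)
        return true;                   // step became sufficiently small
    if (maxits > 0 && epoch >= maxits)
        return true;                   // epoch budget exhausted
    return false;
}
```

So calling MLPSetCond(S, 0, 10) stops strictly after 10 epochs, while MLPSetCond(S, 0, 0) stops only once the step drops below the default 0.005.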
/************************************************************************* This function sets the "current dataset" of the trainer object to the one passed by the user. INPUT PARAMETERS: S - trainer object XY - training set, see below for information on the training set format. This function checks correctness of the dataset (no NANs/INFs, class numbers are correct) and throws an exception when an incorrect dataset is passed. NPoints - points count, >=0. DATASET FORMAT: This function uses two different dataset formats - one for regression networks, another one for classification networks. For regression networks with NIn inputs and NOut outputs the following dataset format is used: * dataset is given by NPoints*(NIn+NOut) matrix * each row corresponds to one example * first NIn columns are inputs, next NOut columns are outputs For classification networks with NIn inputs and NClasses classes the following dataset format is used: * dataset is given by NPoints*(NIn+1) matrix * each row corresponds to one example * first NIn columns are inputs, last column stores class number (from 0 to NClasses-1). -- ALGLIB -- Copyright 23.07.2012 by Bochkanov Sergey *************************************************************************/
void mlpsetdataset(mlptrainer &s, const real_2d_array &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  [5]  [6]  [7]  [8]  

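The layout rules and validity checks described above can be made concrete with a small self-contained validator for the classification format (plain C++; it mirrors the NAN/INF and class-range checks mlpsetdataset is documented to perform, but `dataset_ok` itself is an illustrative helper, not ALGLIB API):

```cpp
#include <cmath>
#include <vector>

// Validate a classification dataset: each of the NPoints rows must hold
// NIn inputs plus one class label, every value must be finite, and the
// label must be an integer in [0, NClasses-1].
bool dataset_ok(const std::vector<std::vector<double>> &xy,
                int nin, int nclasses)
{
    for (const std::vector<double> &row : xy)
    {
        if ((int)row.size() != nin + 1)
            return false;              // wrong row width
        for (double v : row)
            if (!std::isfinite(v))
                return false;          // NAN/INF present
        double cls = row[nin];         // last column is the class number
        if (cls != std::floor(cls) || cls < 0 || cls >= nclasses)
            return false;              // non-integer or out-of-range class
    }
    return true;
}
```

For the regression format the same idea applies, except rows are NIn+NOut wide and there is no class-range check.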
/************************************************************************* This function sets weight decay coefficient which is used for training. INPUT PARAMETERS: S - trainer object Decay - weight decay coefficient, >=0. Weight decay term 'Decay*||Weights||^2' is added to error function. If you don't know what Decay to choose, use 1.0E-3. Weight decay can be set to zero, in this case network is trained without weight decay. NOTE: by default network uses some small nonzero value for weight decay. -- ALGLIB -- Copyright 23.07.2012 by Bochkanov Sergey *************************************************************************/
void mlpsetdecay(mlptrainer &s, const double decay, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets the "current dataset" of the trainer object to the one passed by the user (a sparse matrix is used to store the dataset). INPUT PARAMETERS: S - trainer object XY - training set, see below for information on the training set format. This function checks correctness of the dataset (no NANs/INFs, class numbers are correct) and throws an exception when an incorrect dataset is passed. Any sparse storage format can be used: Hash-table, CRS... NPoints - points count, >=0 DATASET FORMAT: This function uses two different dataset formats - one for regression networks, another one for classification networks. For regression networks with NIn inputs and NOut outputs the following dataset format is used: * dataset is given by NPoints*(NIn+NOut) matrix * each row corresponds to one example * first NIn columns are inputs, next NOut columns are outputs For classification networks with NIn inputs and NClasses classes the following dataset format is used: * dataset is given by NPoints*(NIn+1) matrix * each row corresponds to one example * first NIn columns are inputs, last column stores class number (from 0 to NClasses-1). -- ALGLIB -- Copyright 23.07.2012 by Bochkanov Sergey *************************************************************************/
void mlpsetsparsedataset(mlptrainer &s, const sparsematrix &xy, const ae_int_t npoints, const xparams _xparams = alglib::xdefault);
/************************************************************************* IMPORTANT: this is an "expert" version of the MLPTrain() function. We do not recommend using it unless you are sure that you need the ability to monitor training progress. This function performs step-by-step training of the neural network. Here "step-by-step" means that training starts with an MLPStartTraining() call, and then the user subsequently calls MLPContinueTraining() to perform one more iteration of the training. After the call to this function the trainer object remembers the network and is ready to train it. However, no training is performed until the first call to the MLPContinueTraining() function. Subsequent calls to MLPContinueTraining() will advance training progress one iteration further. EXAMPLE: > > ...initialize network and trainer object.... > > MLPStartTraining(Trainer, Network, True) > while MLPContinueTraining(Trainer, Network) do > ...visualize training progress... > INPUT PARAMETERS: S - trainer object Network - neural network. It must have same number of inputs and output/classes as was specified during creation of the trainer object. RandomStart - randomize network before training or not: * True means that the network is randomized and its initial state (the one which was passed to the trainer object) is lost. * False means that training is started from the current state of the network OUTPUT PARAMETERS: Network - neural network which is ready for training (weights are initialized, preprocessor is initialized using the current training set) NOTE: this method uses the sum-of-squares error function for training. NOTE: it is expected that trainer object settings are NOT changed during step-by-step training, i.e. no one changes the stopping criteria or the training set during training. This is possible and there is no defense against such actions, but algorithm behavior in such cases is undefined and can be unpredictable. -- ALGLIB -- Copyright 23.07.2012 by Bochkanov Sergey *************************************************************************/
void mlpstarttraining(mlptrainer &s, multilayerperceptron &network, const bool randomstart, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function trains the neural network ensemble passed to this function using the current dataset and the early stopping training algorithm. Each early stopping round performs NRestarts random restarts (thus, EnsembleSize*NRestarts training rounds are performed in total). ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: S - trainer object; Ensemble - neural network ensemble. It must have same number of inputs and outputs/classes as was specified during creation of the trainer object. NRestarts - number of restarts, >=0: * NRestarts>0 means that specified number of random restarts are performed during each ES round; * NRestarts=0 is silently replaced by 1. OUTPUT PARAMETERS: Ensemble - trained ensemble; Rep - it contains all types of errors. NOTE: this training method uses BOTH early stopping and weight decay! So, you should select weight decay before starting training just as you select it before training "conventional" networks. NOTE: when no dataset was specified with MLPSetDataset/SetSparseDataset(), or a single-point dataset was passed, the ensemble is filled by zero values. NOTE: this method uses the sum-of-squares error function for training. -- ALGLIB -- Copyright 22.08.2012 by Bochkanov Sergey *************************************************************************/
void mlptrainensemblees(mlptrainer &s, mlpensemble &ensemble, const ae_int_t nrestarts, mlpreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* Neural network training using early stopping (base algorithm - L-BFGS with regularization). INPUT PARAMETERS: Network - neural network with initialized geometry TrnXY - training set TrnSize - training set size, TrnSize>0 ValXY - validation set ValSize - validation set size, ValSize>0 Decay - weight decay constant, >=0.001 Decay term 'Decay*||Weights||^2' is added to error function. If you don't know what Decay to choose, use 0.001. Restarts - number of restarts, either: * strictly positive number - the algorithm makes the specified number of restarts from random positions. * -1, in which case the algorithm makes exactly one run from the initial state of the network (no randomization). If you don't know what Restarts to choose, choose one of the following: * -1 (deterministic start) * +1 (one random restart) * +5 (moderate amount of random restarts) OUTPUT PARAMETERS: Network - trained neural network. Info - return code: * -2, if there is a point with class number outside of [0..NOut-1]. * -1, if wrong parameters specified (NPoints<0, Restarts<1, ...). * 2, task has been solved, stopping criterion met - sufficiently small step size. Not expected (we use EARLY stopping) but possible and not an error. * 6, task has been solved, stopping criterion met - increasing of validation set error. Rep - training report NOTE: the algorithm stops if the validation set error increases for long enough or the step size is small enough (there are tasks where the validation set error may decrease for eternity). In any case, the solution returned corresponds to the minimum of the validation set error. -- ALGLIB -- Copyright 10.03.2009 by Bochkanov Sergey *************************************************************************/
void mlptraines(multilayerperceptron &network, const real_2d_array &trnxy, const ae_int_t trnsize, const real_2d_array &valxy, const ae_int_t valsize, const double decay, const ae_int_t restarts, ae_int_t &info, mlpreport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Neural network training using the L-BFGS algorithm with regularization. The subroutine trains the neural network with restarts from random positions. The algorithm is well suited for problems of any dimensionality (memory requirements and step complexity are linear in the number of weights). INPUT PARAMETERS: Network - neural network with initialized geometry XY - training set NPoints - training set size Decay - weight decay constant, >=0.001 Decay term 'Decay*||Weights||^2' is added to error function. If you don't know what Decay to choose, use 0.001. Restarts - number of restarts from random position, >0. If you don't know what Restarts to choose, use 2. WStep - stopping criterion. Algorithm stops if step size is less than WStep. Recommended value - 0.01. Zero step size means stopping after MaxIts iterations. MaxIts - stopping criterion. Algorithm stops after MaxIts iterations (NOT gradient calculations). Zero MaxIts means stopping when step is sufficiently small. OUTPUT PARAMETERS: Network - trained neural network. Info - return code: * -8, if both WStep=0 and MaxIts=0 * -2, if there is a point with class number outside of [0..NOut-1]. * -1, if wrong parameters specified (NPoints<0, Restarts<1). * 2, if task has been solved. Rep - training report -- ALGLIB -- Copyright 09.12.2007 by Bochkanov Sergey *************************************************************************/
void mlptrainlbfgs(multilayerperceptron &network, const real_2d_array &xy, const ae_int_t npoints, const double decay, const ae_int_t restarts, const double wstep, const ae_int_t maxits, ae_int_t &info, mlpreport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* Neural network training using modified Levenberg-Marquardt with exact Hessian calculation and regularization. The subroutine trains the neural network with restarts from random positions. The algorithm is well suited for small- and medium-scale problems (hundreds of weights). INPUT PARAMETERS: Network - neural network with initialized geometry XY - training set NPoints - training set size Decay - weight decay constant, >=0.001 Decay term 'Decay*||Weights||^2' is added to error function. If you don't know what Decay to choose, use 0.001. Restarts - number of restarts from random position, >0. If you don't know what Restarts to choose, use 2. OUTPUT PARAMETERS: Network - trained neural network. Info - return code: * -9, if internal matrix inverse subroutine failed * -2, if there is a point with class number outside of [0..NOut-1]. * -1, if wrong parameters specified (NPoints<0, Restarts<1). * 2, if task has been solved. Rep - training report -- ALGLIB -- Copyright 10.03.2009 by Bochkanov Sergey *************************************************************************/
void mlptrainlm(multilayerperceptron &network, const real_2d_array &xy, const ae_int_t npoints, const double decay, const ae_int_t restarts, ae_int_t &info, mlpreport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function trains neural network passed to this function, using current dataset (one which was passed to MLPSetDataset() or MLPSetSparseDataset()) and current training settings. Training from NRestarts random starting positions is performed, best network is chosen. Training is performed using current training algorithm. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. INPUT PARAMETERS: S - trainer object Network - neural network. It must have same number of inputs and output/classes as was specified during creation of the trainer object. NRestarts - number of restarts, >=0: * NRestarts>0 means that specified number of random restarts are performed, best network is chosen after training * NRestarts=0 means that current state of the network is used for training. OUTPUT PARAMETERS: Network - trained network NOTE: when no dataset was specified with MLPSetDataset/SetSparseDataset(), network is filled by zero values. Same behavior for functions MLPStartTraining and MLPContinueTraining. NOTE: this method uses sum-of-squares error function for training. -- ALGLIB -- Copyright 23.07.2012 by Bochkanov Sergey *************************************************************************/
void mlptrainnetwork(mlptrainer &s, multilayerperceptron &network, const ae_int_t nrestarts, mlpreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  [5]  [6]  

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // Suppose that we want to classify numbers as positive (class 0) and negative
        // (class 1). We have training set which includes several strictly positive
        // or negative numbers - and zero.
        //
        // The problem is that we are not sure how to classify zero, so from time to
        // time we mark it as positive or negative (with equal probability). Other
        // numbers are marked in pure deterministic setting. How will neural network
        // cope with such classification task?
        //
        // NOTE: we use network with excessive amount of neurons, which guarantees
        //       almost exact reproduction of the training set. Generalization ability
        //       of such network is rather low, but we are not concerned with such
        //       questions in this basic demo.
        //
        mlptrainer trn;
        multilayerperceptron network;
        mlpreport rep;
        real_1d_array x = "[0]";
        real_1d_array y = "[0,0]";

        //
        // Training set. One row corresponds to one record [A => class(A)].
        //
        // Classes are denoted by numbers from 0 to 1, where 0 corresponds to positive
        // numbers and 1 to negative numbers.
        //
        // [ +1  0]
        // [ +2  0]
        // [ -1  1]
        // [ -2  1]
        // [  0  0]   !! sometimes we classify 0 as positive, sometimes as negative
        // [  0  1]   !!
        //
        real_2d_array xy = "[[+1,0],[+2,0],[-1,1],[-2,1],[0,0],[0,1]]";

        //
        //
        // When we solve classification problems, everything is slightly different from
        // the regression ones:
        //
        // 1. Network is created. Because we solve classification problem, we use
        //    mlpcreatec1() function instead of mlpcreate1(). This function creates
        //    classifier network with SOFTMAX-normalized outputs. This network returns
        //    vector of class membership probabilities which are normalized to be
        //    non-negative and sum to 1.0
        //
        // 2. We use mlpcreatetrainercls() function instead of mlpcreatetrainer() to
        //    create trainer object. Trainer object process dataset and neural network
        //    slightly differently to account for specifics of the classification
        //    problems.
        //
        // 3. Dataset is attached to trainer object. Note that dataset format is slightly
        //    different from one used for regression.
        //
        mlpcreatetrainercls(1, 2, trn);
        mlpcreatec1(1, 5, 2, network);
        mlpsetdataset(trn, xy, 6);

        //
        // Network is trained with 5 restarts from random positions
        //
        mlptrainnetwork(trn, network, 5, rep);

        //
        // Test our neural network on strictly positive and strictly negative numbers.
        //
        // IMPORTANT! Classifier network returns class membership probabilities instead
        // of class indexes. Network returns two values (probabilities) instead of one
        // (class index).
        //
        // Thus, for +1 we expect to get [P0,P1] = [1,0], where P0 is probability that
        // number is positive (belongs to class 0), and P1 is probability that number
        // is negative (belongs to class 1).
        //
        // For -1 we expect to get [P0,P1] = [0,1]
        //
        // Following properties are guaranteed by network architecture:
        // * P0>=0, P1>=0   non-negativity
        // * P0+P1=1        normalization
        //
        x = "[1]";
        mlpprocess(network, x, y);
        printf("%s\n", y.tostring(1).c_str()); // EXPECTED: [1.000,0.000]
        x = "[-1]";
        mlpprocess(network, x, y);
        printf("%s\n", y.tostring(1).c_str()); // EXPECTED: [0.000,1.000]

        //
        // But what our network will return for 0, which is between classes 0 and 1?
        //
        // In our dataset it has two different marks assigned (class 0 AND class 1).
        // So network will return something average between class 0 and class 1:
        //     0 => [0.5, 0.5]
        //
        x = "[0]";
        mlpprocess(network, x, y);
        printf("%s\n", y.tostring(1).c_str()); // EXPECTED: [0.500,0.500]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // Suppose that we want to classify numbers as positive (class 0) and negative
        // (class 1). We also have one more class for zero (class 2).
        //
        // NOTE: we use network with excessive amount of neurons, which guarantees
        //       almost exact reproduction of the training set. Generalization ability
        //       of such network is rather low, but we are not concerned with such
        //       questions in this basic demo.
        //
        mlptrainer trn;
        multilayerperceptron network;
        mlpreport rep;
        real_1d_array x = "[0]";
        real_1d_array y = "[0,0,0]";

        //
        // Training set. One row corresponds to one record [A => class(A)].
        //
        // Classes are denoted by numbers from 0 to 2, where 0 corresponds to positive
        // numbers, 1 to negative numbers, 2 to zero
        //
        // [ +1  0]
        // [ +2  0]
        // [ -1  1]
        // [ -2  1]
        // [  0  2]
        //
        real_2d_array xy = "[[+1,0],[+2,0],[-1,1],[-2,1],[0,2]]";

        //
        // When we solve classification problems, everything is slightly different
        // from the regression case:
        //
        // 1. Network is created. Because we solve a classification problem, we use
        //    the mlpcreatec1() function instead of mlpcreate1(). This function
        //    creates a classifier network with SOFTMAX-normalized outputs. Such a
        //    network returns a vector of class membership probabilities which are
        //    non-negative and sum to 1.0.
        //
        // 2. We use the mlpcreatetrainercls() function instead of mlpcreatetrainer()
        //    to create the trainer object. The trainer object processes the dataset
        //    and the neural network slightly differently, to account for the
        //    specifics of classification problems.
        //
        // 3. The dataset is attached to the trainer object. Note that the dataset
        //    format is slightly different from the one used for regression.
        //
        mlpcreatetrainercls(1, 3, trn);
        mlpcreatec1(1, 5, 3, network);
        mlpsetdataset(trn, xy, 5);

        //
        // Network is trained with 5 restarts from random positions
        //
        mlptrainnetwork(trn, network, 5, rep);

        //
        // Test our neural network on strictly positive and strictly negative numbers.
        //
        // IMPORTANT! Classifier network returns class membership probabilities instead
        // of class indexes. Network returns three values (probabilities) instead of one
        // (class index).
        //
        // Thus, for +1 we expect to get [P0,P1,P2] = [1,0,0],
        // for -1 we expect to get [P0,P1,P2] = [0,1,0],
        // and for 0 we will get [P0,P1,P2] = [0,0,1].
        //
        // Following properties are guaranteed by network architecture:
        // * P0>=0, P1>=0, P2>=0    non-negativity
        // * P0+P1+P2=1             normalization
        //
        x = "[1]";
        mlpprocess(network, x, y);
        printf("%s\n", y.tostring(1).c_str()); // EXPECTED: [1.000,0.000,0.000]
        x = "[-1]";
        mlpprocess(network, x, y);
        printf("%s\n", y.tostring(1).c_str()); // EXPECTED: [0.000,1.000,0.000]
        x = "[0]";
        mlpprocess(network, x, y);
        printf("%s\n", y.tostring(1).c_str()); // EXPECTED: [0.000,0.000,1.000]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example shows how to perform cross-validation with ALGLIB
        //
        mlptrainer trn;
        multilayerperceptron network;
        mlpreport rep;

        //
        // Training set: f(x)=1/(x^2+1)
        // One row corresponds to one record [x,f(x)]
        //
        real_2d_array xy = "[[-2.0,0.2],[-1.6,0.3],[-1.3,0.4],[-1,0.5],[-0.6,0.7],[-0.3,0.9],[0,1],[2.0,0.2],[1.6,0.3],[1.3,0.4],[1,0.5],[0.6,0.7],[0.3,0.9]]";

        //
        // Trainer object is created.
        // Dataset is attached to trainer object.
        //
        // NOTE: it is not a good idea to perform cross-validation on a sample
        //       as small as ours (13 examples). It is done for demonstration
        //       purposes only; generalization error estimates won't be precise
        //       enough for practical purposes.
        //
        mlpcreatetrainer(1, 1, trn);
        mlpsetdataset(trn, xy, 13);

        //
        // The key property of cross-validation is that it estimates the
        // generalization properties of a neural ARCHITECTURE. It does NOT
        // estimate the generalization error of the specific network which
        // is passed to the k-fold CV routine.
        //
        // In our example we create a 1x4x1 neural network and pass it to the
        // CV routine without training it. The original state of the network
        // is not used for cross-validation - each round is restarted from a
        // random initial state. Only the geometry of the network matters.
        //
        // We perform 5 restarts from different random positions for each
        // of the 10 cross-validation rounds.
        //
        mlpcreate1(1, 4, 1, network);
        mlpkfoldcv(trn, network, 5, 10, rep);

        //
        // Cross-validation routine stores estimates of the generalization
        // error to MLP report structure. You may examine its fields and
        // see estimates of different errors (RMS, CE, Avg).
        //
        // Because cross-validation is non-deterministic, we cannot say in this
        // manual what values will be stored in rep after the call to
        // mlpkfoldcv(). Every CV run will return slightly different estimates.
        //
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example shows how to train early stopping ensembles.
        //
        mlptrainer trn;
        mlpensemble ensemble;
        mlpreport rep;

        //
        // Training set: f(x)=1/(x^2+1)
        // One row corresponds to one record [x,f(x)]
        //
        real_2d_array xy = "[[-2.0,0.2],[-1.6,0.3],[-1.3,0.4],[-1,0.5],[-0.6,0.7],[-0.3,0.9],[0,1],[2.0,0.2],[1.6,0.3],[1.3,0.4],[1,0.5],[0.6,0.7],[0.3,0.9]]";

        //
        // Trainer object is created.
        // Dataset is attached to trainer object.
        //
        // NOTE: it is not a good idea to train an early stopping ensemble on a
        //       sample as small as ours (13 examples). It is done for
        //       demonstration purposes only. The ensemble training algorithm
        //       won't find a good solution on such a small sample.
        //
        mlpcreatetrainer(1, 1, trn);
        mlpsetdataset(trn, xy, 13);

        //
        // Ensemble is created and trained. Each of the 50 networks is trained
        // with 5 restarts.
        //
        mlpecreate1(1, 4, 1, 50, ensemble);
        mlptrainensemblees(trn, ensemble, 5, rep);
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example shows how to use parallel functionality of ALGLIB.
        // We generate simple 1-dimensional regression problem and show how
        // to use parallel training, parallel cross-validation, parallel
        // training of neural ensembles.
        //
        // We assume that you already know how to use ALGLIB in serial mode
        // and concentrate on its parallel capabilities.
        //
        // NOTE: it is not a good idea to use parallel features on a sample as
        //       small as ours (13 examples). It is done only for demonstration
        //       purposes.
        //
        mlptrainer trn;
        multilayerperceptron network;
        mlpensemble ensemble;
        mlpreport rep;
        real_2d_array xy = "[[-2.0,0.2],[-1.6,0.3],[-1.3,0.4],[-1,0.5],[-0.6,0.7],[-0.3,0.9],[0,1],[2.0,0.2],[1.6,0.3],[1.3,0.4],[1,0.5],[0.6,0.7],[0.3,0.9]]";
        mlpcreatetrainer(1, 1, trn);
        mlpsetdataset(trn, xy, 13);
        mlpcreate1(1, 4, 1, network);
        mlpecreate1(1, 4, 1, 50, ensemble);

        //
        // Below we demonstrate how to perform:
        // * parallel training of individual networks
        // * parallel cross-validation
        // * parallel training of neural ensembles
        //
        // In order to use multithreading, you have to:
        // 1) Install SMP edition of ALGLIB.
        // 2) This step is specific for C++ users: you should activate OS-specific
        //    capabilities of ALGLIB by defining AE_OS=AE_POSIX (for *nix systems)
        //    or AE_OS=AE_WINDOWS (for Windows systems).
        //    C# users do not have to perform this step because C# programs are
        //    portable across different systems without OS-specific tuning.
        // 3) Tell ALGLIB that you want it to use multithreading by means of
        //    setnworkers() call:
        //          * alglib::setnworkers(0)  = use all cores
        //          * alglib::setnworkers(-1) = leave one core unused
        //          * alglib::setnworkers(-2) = leave two cores unused
        //          * alglib::setnworkers(+2) = use 2 cores (even if you have more)
        //    During runtime ALGLIB will automatically determine whether it is
        //    feasible to start worker threads and split your task between cores.
        //
        alglib::setnworkers(+2);

        //
        // First, we perform parallel training of an individual network with 5
        // restarts from random positions. These 5 rounds of training are
        // executed in parallel, with the best network chosen after training.
        //
        // ALGLIB can use an additional way to speed up computations - dividing
        // the dataset into smaller subsets and processing these subsets
        // simultaneously. It allows us to efficiently parallelize even a
        // single training round. This is performed automatically for large
        // datasets, but our toy dataset is too small.
        //
        mlptrainnetwork(trn, network, 5, rep);

        //
        // Then, we perform parallel 10-fold cross-validation, with 5 random
        // restarts for each CV round, i.e., 5*10=50 networks are trained in
        // total. All these operations can be parallelized.
        //
        // NOTE: again, ALGLIB can parallelize calculation of the gradient
        //       over the entire dataset - but our dataset is too small.
        //
        mlpkfoldcv(trn, network, 5, 10, rep);

        //
        // Finally, we train an early stopping ensemble of 50 neural networks,
        // each of which is trained with 5 random restarts, i.e., 5*50=250
        // networks are trained in total.
        //
        mlptrainensemblees(trn, ensemble, 5, rep);
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // A very simple neural network example: the network is trained to reproduce
        // a small 2x2 multiplication table.
        //
        // NOTE: we use a network with an excessive number of neurons, which
        //       guarantees almost exact reproduction of the training set. The
        //       generalization ability of such a network is rather low, but we
        //       are not concerned with such questions in this basic demo.
        //
        mlptrainer trn;
        multilayerperceptron network;
        mlpreport rep;

        //
        // Training set:
        // * one row corresponds to one record A*B=C in the multiplication table
        // * first two columns store A and B, last column stores C
        //
        // [1 * 1 = 1]
        // [1 * 2 = 2]
        // [2 * 1 = 2]
        // [2 * 2 = 4]
        //
        real_2d_array xy = "[[1,1,1],[1,2,2],[2,1,2],[2,2,4]]";

        //
        // Network is created.
        // Trainer object is created.
        // Dataset is attached to trainer object.
        //
        mlpcreatetrainer(2, 1, trn);
        mlpcreate1(2, 5, 1, network);
        mlpsetdataset(trn, xy, 4);

        //
        // Network is trained with 5 restarts from random positions
        //
        mlptrainnetwork(trn, network, 5, rep);

        //
        // 2*2=?
        //
        real_1d_array x = "[2,2]";
        real_1d_array y = "[0]";
        mlpprocess(network, x, y);
        printf("%s\n", y.tostring(1).c_str()); // EXPECTED: [4.000]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // Network with 2 inputs and 2 outputs is trained to reproduce vector function:
        //     (x0,x1) => (x0+x1, x0*x1)
        //
        // Informally speaking, we want the neural network to simultaneously
        // calculate both the sum of two numbers and their product.
        //
        // NOTE: we use a network with an excessive number of neurons, which
        //       guarantees almost exact reproduction of the training set. The
        //       generalization ability of such a network is rather low, but we
        //       are not concerned with such questions in this basic demo.
        //
        mlptrainer trn;
        multilayerperceptron network;
        mlpreport rep;

        //
        // Training set. One row corresponds to one record [A,B,A+B,A*B].
        //
        // [ 1   1  1+1  1*1 ]
        // [ 1   2  1+2  1*2 ]
        // [ 2   1  2+1  2*1 ]
        // [ 2   2  2+2  2*2 ]
        //
        real_2d_array xy = "[[1,1,2,1],[1,2,3,2],[2,1,3,2],[2,2,4,4]]";

        //
        // Network is created.
        // Trainer object is created.
        // Dataset is attached to trainer object.
        //
        mlpcreatetrainer(2, 2, trn);
        mlpcreate1(2, 5, 2, network);
        mlpsetdataset(trn, xy, 4);

        //
        // Network is trained with 5 restarts from random positions
        //
        mlptrainnetwork(trn, network, 5, rep);

        //
        // 2+1=?
        // 2*1=?
        //
        real_1d_array x = "[2,1]";
        real_1d_array y = "[0,0]";
        mlpprocess(network, x, y);
        printf("%s\n", y.tostring(1).c_str()); // EXPECTED: [3.000,2.000]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // The trainer object is used to train networks. It stores the dataset,
        // training settings, and other information which is NOT part of the
        // neural network. You should use the trainer object as follows:
        // (1) you create a trainer object and specify the task type
        //     (classification/regression) and the number of inputs/outputs
        // (2) you add a dataset to the trainer object
        // (3) you may change training settings (stopping criteria or weight decay)
        // (4) finally, you may train one or more networks
        //
        // You may interleave stages 2...4 and repeat them many times. The trainer
        // object remembers its internal state and can be used several times after
        // its creation and initialization.
        //
        mlptrainer trn;

        //
        // Stage 1: object creation.
        //
        // We have to specify the number of inputs and outputs. The trainer object
        // can be used only for problems with the same number of inputs/outputs
        // as was specified during its creation.
        //
        // In case you want to train SOFTMAX-normalized network which solves classification
        // problems,  you  must  use  another  function  to  create  trainer  object:
        // mlpcreatetrainercls().
        //
        // Below we create trainer object which can be used to train regression networks
        // with 2 inputs and 1 output.
        //
        mlpcreatetrainer(2, 1, trn);

        //
        // Stage 2: specification of the training set
        //
        // By default the trainer object stores an empty dataset, so to solve your
        // non-empty problem you have to attach a dataset by passing a dense or
        // sparse matrix to the trainer.
        //
        // One row of the matrix corresponds to one record A*B=C in the multiplication table.
        // First two columns store A and B, last column stores C
        //
        //     [1 * 1 = 1]   [ 1 1 1 ]
        //     [1 * 2 = 2]   [ 1 2 2 ]
        //     [2 * 1 = 2] = [ 2 1 2 ]
        //     [2 * 2 = 4]   [ 2 2 4 ]
        //
        real_2d_array xy = "[[1,1,1],[1,2,2],[2,1,2],[2,2,4]]";
        mlpsetdataset(trn, xy, 4);

        //
        // Stage 3: modification of the training parameters.
        //
        // You may modify parameters like weight decay or stopping criteria:
        // * we set a moderate weight decay
        // * we choose an iterations limit as the stopping condition (the other
        //   condition - step size - is zero, which means that it is not active)
        //
        double wstep = 0.000;
        ae_int_t maxits = 100;
        mlpsetdecay(trn, 0.01);
        mlpsetcond(trn, wstep, maxits);

        //
        // Stage 4: training.
        //
        // We will train several networks with different architectures using the same
        // trainer object. We may change training parameters or even the dataset, so
        // different networks can be trained differently. But in this simple example
        // we will train all networks with the same settings.
        //
        // We create and train three networks:
        // * network 1 has 2x1 architecture     (2 inputs, no hidden neurons, 1 output)
        // * network 2 has 2x5x1 architecture   (2 inputs, 5 hidden neurons, 1 output)
        // * network 3 has 2x5x5x1 architecture (2 inputs, two hidden layers, 1 output)
        //
        // NOTE: these networks solve regression problems. For classification problems you
        //       should use mlpcreatec0/c1/c2 to create neural networks which have SOFTMAX-
        //       normalized outputs.
        //
        multilayerperceptron net1;
        multilayerperceptron net2;
        multilayerperceptron net3;
        mlpreport rep;

        mlpcreate0(2, 1, net1);
        mlpcreate1(2, 5, 1, net2);
        mlpcreate2(2, 5, 5, 1, net3);

        mlptrainnetwork(trn, net1, 5, rep);
        mlptrainnetwork(trn, net2, 5, rep);
        mlptrainnetwork(trn, net3, 5, rep);
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

kdtree
kdtreerequestbuffer
kdtreebuild
kdtreebuildtagged
kdtreecreaterequestbuffer
kdtreequeryaknn
kdtreequerybox
kdtreequeryknn
kdtreequeryresultsdistances
kdtreequeryresultsdistancesi
kdtreequeryresultstags
kdtreequeryresultstagsi
kdtreequeryresultsx
kdtreequeryresultsxi
kdtreequeryresultsxy
kdtreequeryresultsxyi
kdtreequeryrnn
kdtreequeryrnnu
kdtreeserialize
kdtreetsqueryaknn
kdtreetsquerybox
kdtreetsqueryknn
kdtreetsqueryresultsdistances
kdtreetsqueryresultstags
kdtreetsqueryresultsx
kdtreetsqueryresultsxy
kdtreetsqueryrnn
kdtreetsqueryrnnu
kdtreeunserialize
nneighbor_d_1 Nearest neighbor search, KNN queries
nneighbor_d_2 Serialization of KD-trees
/*************************************************************************
KD-tree object.
*************************************************************************/
class kdtree
{
public:
    kdtree();
    kdtree(const kdtree &rhs);
    kdtree& operator=(const kdtree &rhs);
    virtual ~kdtree();
};

/*************************************************************************
Buffer object which is used to perform nearest neighbor requests in the
multithreaded mode (multiple threads working with the same KD-tree object).

This object should be created with KDTreeCreateRequestBuffer().
*************************************************************************/
class kdtreerequestbuffer
{
public:
    kdtreerequestbuffer();
    kdtreerequestbuffer(const kdtreerequestbuffer &rhs);
    kdtreerequestbuffer& operator=(const kdtreerequestbuffer &rhs);
    virtual ~kdtreerequestbuffer();
};
/*************************************************************************
KD-tree creation

This subroutine creates a KD-tree from a set of X-values and optional
Y-values.

INPUT PARAMETERS
    XY      -   dataset, array[0..N-1,0..NX+NY-1].
                One row corresponds to one point. First NX columns contain
                X-values, next NY (NY may be zero) columns may contain
                associated Y-values.
    N       -   number of points, N>=0.
    NX      -   space dimension, NX>=1.
    NY      -   number of optional Y-values, NY>=0.
    NormType-   norm type:
                * 0 denotes infinity-norm
                * 1 denotes 1-norm
                * 2 denotes 2-norm (Euclidean norm)

OUTPUT PARAMETERS
    KDT     -   KD-tree

NOTES
1. KD-tree creation has O(N*logN) complexity and O(N*(2*NX+NY)) memory
   requirements.
2. Although KD-trees may be used with any combination of N and NX, they
   are more efficient than brute-force search only when N >> 4^NX. So they
   are most useful in low-dimensional tasks (NX=2, NX=3). NX=1 is another
   inefficient case, because simple binary search (without additional
   structures) is much more efficient in such tasks than KD-trees.

  -- ALGLIB --
     Copyright 28.02.2010 by Bochkanov Sergey
*************************************************************************/
void kdtreebuild(const real_2d_array &xy, const ae_int_t n, const ae_int_t nx, const ae_int_t ny, const ae_int_t normtype, kdtree &kdt, const xparams _xparams = alglib::xdefault);
void kdtreebuild(const real_2d_array &xy, const ae_int_t nx, const ae_int_t ny, const ae_int_t normtype, kdtree &kdt, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/*************************************************************************
KD-tree creation

This subroutine creates a KD-tree from a set of X-values, integer tags and
optional Y-values.

INPUT PARAMETERS
    XY      -   dataset, array[0..N-1,0..NX+NY-1].
                One row corresponds to one point. First NX columns contain
                X-values, next NY (NY may be zero) columns may contain
                associated Y-values.
    Tags    -   tags, array[0..N-1], contains integer tags associated with
                points.
    N       -   number of points, N>=0
    NX      -   space dimension, NX>=1.
    NY      -   number of optional Y-values, NY>=0.
    NormType-   norm type:
                * 0 denotes infinity-norm
                * 1 denotes 1-norm
                * 2 denotes 2-norm (Euclidean norm)

OUTPUT PARAMETERS
    KDT     -   KD-tree

NOTES
1. KD-tree creation has O(N*logN) complexity and O(N*(2*NX+NY)) memory
   requirements.
2. Although KD-trees may be used with any combination of N and NX, they
   are more efficient than brute-force search only when N >> 4^NX. So they
   are most useful in low-dimensional tasks (NX=2, NX=3). NX=1 is another
   inefficient case, because simple binary search (without additional
   structures) is much more efficient in such tasks than KD-trees.

  -- ALGLIB --
     Copyright 28.02.2010 by Bochkanov Sergey
*************************************************************************/
void kdtreebuildtagged(const real_2d_array &xy, const integer_1d_array &tags, const ae_int_t n, const ae_int_t nx, const ae_int_t ny, const ae_int_t normtype, kdtree &kdt, const xparams _xparams = alglib::xdefault);
void kdtreebuildtagged(const real_2d_array &xy, const integer_1d_array &tags, const ae_int_t nx, const ae_int_t ny, const ae_int_t normtype, kdtree &kdt, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This function creates a buffer structure which can be used to perform
parallel KD-tree requests.

The KD-tree subpackage provides two sets of request functions - ones which
use the internal buffer of the KD-tree object (these functions are
single-threaded because they use the same buffer, which cannot be shared
between threads), and ones which use an external buffer. This function is
used to initialize such an external buffer.

INPUT PARAMETERS
    KDT     -   KD-tree which is associated with newly created buffer

OUTPUT PARAMETERS
    Buf     -   external buffer.

IMPORTANT: a KD-tree buffer should be used only with the KD-tree object
           which was used to initialize it. Any attempt to use the buffer
           with a different object is dangerous - you may get an integrity
           check failure (exception) because sizes of internal arrays do
           not fit the dimensions of the KD-tree structure.

  -- ALGLIB --
     Copyright 18.03.2016 by Bochkanov Sergey
*************************************************************************/
void kdtreecreaterequestbuffer(const kdtree &kdt, kdtreerequestbuffer &buf, const xparams _xparams = alglib::xdefault);
/*************************************************************************
K-NN query: approximate K nearest neighbors

IMPORTANT: this function can not be used in multithreaded code because it
           uses the internal temporary buffer of the kd-tree object, which
           can not be shared between multiple threads. If you want to
           perform parallel requests, use the function which uses an
           external request buffer: KDTreeTsQueryAKNN() ("Ts" stands for
           "thread-safe").

INPUT PARAMETERS
    KDT     -   KD-tree
    X       -   point, array[0..NX-1].
    K       -   number of neighbors to return, K>=1
    SelfMatch - whether self-matches are allowed:
                * if True, nearest neighbor may be the point itself (if it
                  exists in the original dataset)
                * if False, then only points with non-zero distance are
                  returned
                * if not given, considered True
    Eps     -   approximation factor, Eps>=0. An eps-approximate nearest
                neighbor is a neighbor whose distance from X is at most
                (1+eps) times the distance of the true nearest neighbor.

RESULT
    number of actual neighbors found (either K or N, if K>N).

NOTES
    significant performance gain may be achieved only when Eps is on the
    order of magnitude of 1 or larger.

This subroutine performs the query and stores its result in the internal
structures of the KD-tree. You can use the following subroutines to obtain
these results:
* KDTreeQueryResultsX() to get X-values
* KDTreeQueryResultsXY() to get X- and Y-values
* KDTreeQueryResultsTags() to get tag values
* KDTreeQueryResultsDistances() to get distances

  -- ALGLIB --
     Copyright 28.02.2010 by Bochkanov Sergey
*************************************************************************/
ae_int_t kdtreequeryaknn(kdtree &kdt, const real_1d_array &x, const ae_int_t k, const bool selfmatch, const double eps, const xparams _xparams = alglib::xdefault);
ae_int_t kdtreequeryaknn(kdtree &kdt, const real_1d_array &x, const ae_int_t k, const double eps, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Box query: all points within user-specified box.

IMPORTANT: this function can not be used in multithreaded code because it
           uses the internal temporary buffer of the kd-tree object, which
           can not be shared between multiple threads. If you want to
           perform parallel requests, use the function which uses an
           external request buffer: KDTreeTsQueryBox() ("Ts" stands for
           "thread-safe").

INPUT PARAMETERS
    KDT     -   KD-tree
    BoxMin  -   lower bounds, array[0..NX-1].
    BoxMax  -   upper bounds, array[0..NX-1].

RESULT
    number of actual neighbors found (in [0,N]).

This subroutine performs the query and stores its result in the internal
structures of the KD-tree. You can use the following subroutines to obtain
these results:
* KDTreeQueryResultsX() to get X-values
* KDTreeQueryResultsXY() to get X- and Y-values
* KDTreeQueryResultsTags() to get tag values
* KDTreeQueryResultsDistances() returns zeros for this request

NOTE: this particular query returns unordered results, because there is no
      meaningful way of ordering points. Furthermore, no 'distance' is
      associated with points - a point is either INSIDE or OUTSIDE (so a
      request for distances will return zeros).

  -- ALGLIB --
     Copyright 14.05.2016 by Bochkanov Sergey
*************************************************************************/
ae_int_t kdtreequerybox(kdtree &kdt, const real_1d_array &boxmin, const real_1d_array &boxmax, const xparams _xparams = alglib::xdefault);
/*************************************************************************
K-NN query: K nearest neighbors

IMPORTANT: this function can not be used in multithreaded code because it
           uses the internal temporary buffer of the kd-tree object, which
           can not be shared between multiple threads. If you want to
           perform parallel requests, use the function which uses an
           external request buffer: KDTreeTsQueryKNN() ("Ts" stands for
           "thread-safe").

INPUT PARAMETERS
    KDT     -   KD-tree
    X       -   point, array[0..NX-1].
    K       -   number of neighbors to return, K>=1
    SelfMatch - whether self-matches are allowed:
                * if True, nearest neighbor may be the point itself (if it
                  exists in the original dataset)
                * if False, then only points with non-zero distance are
                  returned
                * if not given, considered True

RESULT
    number of actual neighbors found (either K or N, if K>N).

This subroutine performs the query and stores its result in the internal
structures of the KD-tree. You can use the following subroutines to obtain
these results:
* KDTreeQueryResultsX() to get X-values
* KDTreeQueryResultsXY() to get X- and Y-values
* KDTreeQueryResultsTags() to get tag values
* KDTreeQueryResultsDistances() to get distances

  -- ALGLIB --
     Copyright 28.02.2010 by Bochkanov Sergey
*************************************************************************/
ae_int_t kdtreequeryknn(kdtree &kdt, const real_1d_array &x, const ae_int_t k, const bool selfmatch, const xparams _xparams = alglib::xdefault);
ae_int_t kdtreequeryknn(kdtree &kdt, const real_1d_array &x, const ae_int_t k, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Distances from last query

This function returns results stored in the internal buffer of the kd-tree
object. If you performed buffered requests (ones which use instances of the
kdtreerequestbuffer class), you should call the buffered version of this
function - kdtreetsqueryresultsdistances().

INPUT PARAMETERS
    KDT     -   KD-tree
    R       -   possibly pre-allocated buffer. If R is too small to store
                the result, it is resized. If it is large enough, it is
                left unchanged.

OUTPUT PARAMETERS
    R       -   filled with distances (in the corresponding norm)

NOTES
1. points are ordered by distance from the query point (first = closest)
2. if R is larger than required to store the result, only the leading part
   is overwritten; the trailing part is left unchanged. So if on input
   R = [A,B,C,D] and the result is [1,2], then on exit we will get
   R = [1,2,C,D]. This is done purposely to increase performance; if you
   want the function to resize the array according to the result size, use
   the function with the same name and suffix 'I'.

SEE ALSO
* KDTreeQueryResultsX()             X-values
* KDTreeQueryResultsXY()            X- and Y-values
* KDTreeQueryResultsTags()          tag values

  -- ALGLIB --
     Copyright 28.02.2010 by Bochkanov Sergey
*************************************************************************/
void kdtreequeryresultsdistances(const kdtree &kdt, real_1d_array &r, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Distances from last query; 'interactive' variant for languages like Python
which support constructs like "R = KDTreeQueryResultsDistancesI(KDT)" and
an interactive mode of the interpreter.

This function allocates a new array on each call, so it is significantly
slower than its 'non-interactive' counterpart, but it is more convenient
when you call it from the command line.

  -- ALGLIB --
     Copyright 28.02.2010 by Bochkanov Sergey
*************************************************************************/
void kdtreequeryresultsdistancesi(const kdtree &kdt, real_1d_array &r, const xparams _xparams = alglib::xdefault);
/************************************************************************* Tags from last query This function returns results stored in the internal buffer of kd-tree object. If you performed buffered requests (ones which use instances of kdtreerequestbuffer class), you should call buffered version of this function - kdtreetsqueryresultstags(). INPUT PARAMETERS KDT - KD-tree Tags - possibly pre-allocated buffer. If Tags is too small to store result, it is resized. If size(Tags) is enough to store result, it is left unchanged. OUTPUT PARAMETERS Tags - filled with tags associated with points, or, when no tags were supplied, with zeros NOTES 1. points are ordered by distance from the query point (first = closest) 2. if Tags is larger than required to store result, only leading part will be overwritten; trailing part will be left unchanged. So if on input Tags = [A,B,C,D], and result is [1,2], then on exit we will get Tags = [1,2,C,D]. This is done purposely to increase performance; if you want function to resize array according to result size, use function with same name and suffix 'I'. SEE ALSO * KDTreeQueryResultsX() X-values * KDTreeQueryResultsXY() X- and Y-values * KDTreeQueryResultsDistances() distances -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/
void kdtreequeryresultstags(const kdtree &kdt, integer_1d_array &tags, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* Tags from last query; 'interactive' variant for languages like Python which support constructs like "Tags = KDTreeQueryResultsTagsI(KDT)" and interactive mode of interpreter. This function allocates new array on each call, so it is significantly slower than its 'non-interactive' counterpart, but it is more convenient when you call it from command line. -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/
void kdtreequeryresultstagsi(const kdtree &kdt, integer_1d_array &tags, const xparams _xparams = alglib::xdefault);
/************************************************************************* X-values from last query. This function returns results stored in the internal buffer of kd-tree object. If you performed buffered requests (ones which use instances of kdtreerequestbuffer class), you should call buffered version of this function - kdtreetsqueryresultsx(). INPUT PARAMETERS KDT - KD-tree X - possibly pre-allocated buffer. If X is too small to store result, it is resized. If size(X) is enough to store result, it is left unchanged. OUTPUT PARAMETERS X - rows are filled with X-values NOTES 1. points are ordered by distance from the query point (first = closest) 2. if X is larger than required to store result, only leading part will be overwritten; trailing part will be left unchanged. So if on input X = [[A,B],[C,D]], and result is [1,2], then on exit we will get X = [[1,2],[C,D]]. This is done purposely to increase performance; if you want function to resize array according to result size, use function with same name and suffix 'I'. SEE ALSO * KDTreeQueryResultsXY() X- and Y-values * KDTreeQueryResultsTags() tag values * KDTreeQueryResultsDistances() distances -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/
void kdtreequeryresultsx(const kdtree &kdt, real_2d_array &x, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* X-values from last query; 'interactive' variant for languages like Python which support constructs like "X = KDTreeQueryResultsXI(KDT)" and interactive mode of interpreter. This function allocates new array on each call, so it is significantly slower than its 'non-interactive' counterpart, but it is more convenient when you call it from command line. -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/
void kdtreequeryresultsxi(const kdtree &kdt, real_2d_array &x, const xparams _xparams = alglib::xdefault);
/************************************************************************* X- and Y-values from last query This function returns results stored in the internal buffer of kd-tree object. If you performed buffered requests (ones which use instances of kdtreerequestbuffer class), you should call buffered version of this function - kdtreetsqueryresultsxy(). INPUT PARAMETERS KDT - KD-tree XY - possibly pre-allocated buffer. If XY is too small to store result, it is resized. If size(XY) is enough to store result, it is left unchanged. OUTPUT PARAMETERS XY - rows are filled with points: first NX columns with X-values, next NY columns - with Y-values. NOTES 1. points are ordered by distance from the query point (first = closest) 2. if XY is larger than required to store result, only leading part will be overwritten; trailing part will be left unchanged. So if on input XY = [[A,B],[C,D]], and result is [1,2], then on exit we will get XY = [[1,2],[C,D]]. This is done purposely to increase performance; if you want function to resize array according to result size, use function with same name and suffix 'I'. SEE ALSO * KDTreeQueryResultsX() X-values * KDTreeQueryResultsTags() tag values * KDTreeQueryResultsDistances() distances -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/
void kdtreequeryresultsxy(const kdtree &kdt, real_2d_array &xy, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* XY-values from last query; 'interactive' variant for languages like Python which support constructs like "XY = KDTreeQueryResultsXYI(KDT)" and interactive mode of interpreter. This function allocates new array on each call, so it is significantly slower than its 'non-interactive' counterpart, but it is more convenient when you call it from command line. -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/
void kdtreequeryresultsxyi(const kdtree &kdt, real_2d_array &xy, const xparams _xparams = alglib::xdefault);
/************************************************************************* R-NN query: all points within R-sphere centered at X, ordered by distance between point and X (by ascending). NOTE: it is also possible to perform unordered queries by means of kdtreequeryrnnu() and kdtreetsqueryrnnu() functions. Such queries are faster because we do not have to use heap structure for sorting. IMPORTANT: this function cannot be used in multithreaded code because it uses internal temporary buffer of kd-tree object, which cannot be shared between multiple threads. If you want to perform parallel requests, use function which uses external request buffer: kdtreetsqueryrnn() ("Ts" stands for "thread-safe"). INPUT PARAMETERS KDT - KD-tree X - point, array[0..NX-1]. R - radius of sphere (in corresponding norm), R>0 SelfMatch - whether self-matches are allowed: * if True, nearest neighbor may be the point itself (if it exists in original dataset) * if False, then only points with non-zero distance are returned * if not given, considered True RESULT number of neighbors found, >=0 This subroutine performs query and stores its result in the internal structures of the KD-tree. You can use following subroutines to obtain actual results: * KDTreeQueryResultsX() to get X-values * KDTreeQueryResultsXY() to get X- and Y-values * KDTreeQueryResultsTags() to get tag values * KDTreeQueryResultsDistances() to get distances -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/
ae_int_t kdtreequeryrnn(kdtree &kdt, const real_1d_array &x, const double r, const bool selfmatch, const xparams _xparams = alglib::xdefault);
ae_int_t kdtreequeryrnn(kdtree &kdt, const real_1d_array &x, const double r, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* R-NN query: all points within R-sphere centered at X, no ordering by distance, as indicated by "U" suffix (faster than ordered query, for large queries - significantly faster). IMPORTANT: this function cannot be used in multithreaded code because it uses internal temporary buffer of kd-tree object, which cannot be shared between multiple threads. If you want to perform parallel requests, use function which uses external request buffer: kdtreetsqueryrnnu() ("Ts" stands for "thread-safe"). INPUT PARAMETERS KDT - KD-tree X - point, array[0..NX-1]. R - radius of sphere (in corresponding norm), R>0 SelfMatch - whether self-matches are allowed: * if True, nearest neighbor may be the point itself (if it exists in original dataset) * if False, then only points with non-zero distance are returned * if not given, considered True RESULT number of neighbors found, >=0 This subroutine performs query and stores its result in the internal structures of the KD-tree. You can use following subroutines to obtain actual results: * KDTreeQueryResultsX() to get X-values * KDTreeQueryResultsXY() to get X- and Y-values * KDTreeQueryResultsTags() to get tag values * KDTreeQueryResultsDistances() to get distances As indicated by "U" suffix, this function returns unordered results. -- ALGLIB -- Copyright 01.11.2018 by Bochkanov Sergey *************************************************************************/
ae_int_t kdtreequeryrnnu(kdtree &kdt, const real_1d_array &x, const double r, const bool selfmatch, const xparams _xparams = alglib::xdefault);
ae_int_t kdtreequeryrnnu(kdtree &kdt, const real_1d_array &x, const double r, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function serializes data structure to string/stream. Important properties of s_out: * it contains alphanumeric characters, dots, underscores, minus signs * these symbols are grouped into words, which are separated by spaces and Windows-style (CR+LF) newlines * although serializer uses spaces and CR+LF as separators, you can replace any separator character by arbitrary combination of spaces, tabs, Windows or Unix newlines. It allows flexible reformatting of the string in case you want to include it into a text or XML file. But you should not insert separators into the middle of the "words" nor should you change the case of letters. * s_out can be freely moved between 32-bit and 64-bit systems, little and big endian machines, and so on. You can serialize structure on 32-bit machine and unserialize it on 64-bit one (or vice versa), or serialize it on SPARC and unserialize on x86. You can also serialize it in C++ version of ALGLIB and unserialize it in C# one, and vice versa. *************************************************************************/
void kdtreeserialize(const kdtree &obj, std::string &s_out);
void kdtreeserialize(const kdtree &obj, std::ostream &s_out);
/************************************************************************* K-NN query: approximate K nearest neighbors, using thread-local buffer. You can call this function from multiple threads for same kd-tree instance, assuming that different instances of buffer object are passed to different threads. INPUT PARAMETERS KDT - KD-tree Buf - request buffer object created for this particular instance of kd-tree structure with kdtreecreaterequestbuffer() function. X - point, array[0..NX-1]. K - number of neighbors to return, K>=1 SelfMatch - whether self-matches are allowed: * if True, nearest neighbor may be the point itself (if it exists in original dataset) * if False, then only points with non-zero distance are returned * if not given, considered True Eps - approximation factor, Eps>=0. eps-approximate nearest neighbor is a neighbor whose distance from X is at most (1+eps) times distance of true nearest neighbor. RESULT number of actual neighbors found (either K or N, if K>N). NOTES significant performance gain may be achieved only when Eps is on the order of magnitude of 1 or larger. This subroutine performs query and stores its result in the internal structures of the buffer object. You can use following subroutines to obtain these results (pay attention to "Ts" in their names): * KDTreeTsQueryResultsX() to get X-values * KDTreeTsQueryResultsXY() to get X- and Y-values * KDTreeTsQueryResultsTags() to get tag values * KDTreeTsQueryResultsDistances() to get distances IMPORTANT: kd-tree buffer should be used only with KD-tree object which was used to initialize buffer. Any attempt to use buffer with different object is dangerous - you may get integrity check failure (exception) because sizes of internal arrays do not fit to dimensions of KD-tree structure. -- ALGLIB -- Copyright 18.03.2016 by Bochkanov Sergey *************************************************************************/
ae_int_t kdtreetsqueryaknn(const kdtree &kdt, kdtreerequestbuffer &buf, const real_1d_array &x, const ae_int_t k, const bool selfmatch, const double eps, const xparams _xparams = alglib::xdefault);
ae_int_t kdtreetsqueryaknn(const kdtree &kdt, kdtreerequestbuffer &buf, const real_1d_array &x, const ae_int_t k, const double eps, const xparams _xparams = alglib::xdefault);
/************************************************************************* Box query: all points within user-specified box, using thread-local buffer. You can call this function from multiple threads for same kd-tree instance, assuming that different instances of buffer object are passed to different threads. INPUT PARAMETERS KDT - KD-tree Buf - request buffer object created for this particular instance of kd-tree structure with kdtreecreaterequestbuffer() function. BoxMin - lower bounds, array[0..NX-1]. BoxMax - upper bounds, array[0..NX-1]. RESULT number of actual neighbors found (in [0,N]). This subroutine performs query and stores its result in the internal structures of the buffer object. You can use following subroutines to obtain these results (pay attention to "Ts" in their names): * KDTreeTsQueryResultsX() to get X-values * KDTreeTsQueryResultsXY() to get X- and Y-values * KDTreeTsQueryResultsTags() to get tag values * KDTreeTsQueryResultsDistances() returns zeros for this query NOTE: this particular query returns unordered results, because there is no meaningful way of ordering points. Furthermore, no 'distance' is associated with points - it is either INSIDE or OUTSIDE (so request for distances will return zeros). IMPORTANT: kd-tree buffer should be used only with KD-tree object which was used to initialize buffer. Any attempt to use buffer with different object is dangerous - you may get integrity check failure (exception) because sizes of internal arrays do not fit to dimensions of KD-tree structure. -- ALGLIB -- Copyright 14.05.2016 by Bochkanov Sergey *************************************************************************/
ae_int_t kdtreetsquerybox(const kdtree &kdt, kdtreerequestbuffer &buf, const real_1d_array &boxmin, const real_1d_array &boxmax, const xparams _xparams = alglib::xdefault);
/************************************************************************* K-NN query: K nearest neighbors, using external thread-local buffer. You can call this function from multiple threads for same kd-tree instance, assuming that different instances of buffer object are passed to different threads. INPUT PARAMETERS KDT - kd-tree Buf - request buffer object created for this particular instance of kd-tree structure with kdtreecreaterequestbuffer() function. X - point, array[0..NX-1]. K - number of neighbors to return, K>=1 SelfMatch - whether self-matches are allowed: * if True, nearest neighbor may be the point itself (if it exists in original dataset) * if False, then only points with non-zero distance are returned * if not given, considered True RESULT number of actual neighbors found (either K or N, if K>N). This subroutine performs query and stores its result in the internal structures of the buffer object. You can use following subroutines to obtain these results (pay attention to "Ts" in their names): * KDTreeTsQueryResultsX() to get X-values * KDTreeTsQueryResultsXY() to get X- and Y-values * KDTreeTsQueryResultsTags() to get tag values * KDTreeTsQueryResultsDistances() to get distances IMPORTANT: kd-tree buffer should be used only with KD-tree object which was used to initialize buffer. Any attempt to use buffer with different object is dangerous - you may get integrity check failure (exception) because sizes of internal arrays do not fit to dimensions of KD-tree structure. -- ALGLIB -- Copyright 18.03.2016 by Bochkanov Sergey *************************************************************************/
ae_int_t kdtreetsqueryknn(const kdtree &kdt, kdtreerequestbuffer &buf, const real_1d_array &x, const ae_int_t k, const bool selfmatch, const xparams _xparams = alglib::xdefault);
ae_int_t kdtreetsqueryknn(const kdtree &kdt, kdtreerequestbuffer &buf, const real_1d_array &x, const ae_int_t k, const xparams _xparams = alglib::xdefault);
/************************************************************************* Distances from last query associated with kdtreerequestbuffer object. This function returns results stored in the internal buffer of the request buffer object. It is the buffered counterpart of kdtreequeryresultsdistances(), which works with the internal buffer of the kd-tree object itself. INPUT PARAMETERS KDT - KD-tree Buf - request buffer object created for this particular instance of kd-tree structure. R - possibly pre-allocated buffer. If R is too small to store result, it is resized. If size(R) is enough to store result, it is left unchanged. OUTPUT PARAMETERS R - filled with distances (in corresponding norm) NOTES 1. points are ordered by distance from the query point (first = closest) 2. if R is larger than required to store result, only leading part will be overwritten; trailing part will be left unchanged. So if on input R = [A,B,C,D], and result is [1,2], then on exit we will get R = [1,2,C,D]. This is done purposely to increase performance; if you want function to resize array according to result size, use function with same name and suffix 'I'. SEE ALSO * KDTreeQueryResultsX() X-values * KDTreeQueryResultsXY() X- and Y-values * KDTreeQueryResultsTags() tag values -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/
void kdtreetsqueryresultsdistances(const kdtree &kdt, const kdtreerequestbuffer &buf, real_1d_array &r, const xparams _xparams = alglib::xdefault);
/************************************************************************* Tags from last query associated with kdtreerequestbuffer object. This function returns results stored in the internal buffer of the request buffer object. It is the buffered counterpart of kdtreequeryresultstags(), which works with the internal buffer of the kd-tree object itself. INPUT PARAMETERS KDT - KD-tree Buf - request buffer object created for this particular instance of kd-tree structure. Tags - possibly pre-allocated buffer. If Tags is too small to store result, it is resized. If size(Tags) is enough to store result, it is left unchanged. OUTPUT PARAMETERS Tags - filled with tags associated with points, or, when no tags were supplied, with zeros NOTES 1. points are ordered by distance from the query point (first = closest) 2. if Tags is larger than required to store result, only leading part will be overwritten; trailing part will be left unchanged. So if on input Tags = [A,B,C,D], and result is [1,2], then on exit we will get Tags = [1,2,C,D]. This is done purposely to increase performance; if you want function to resize array according to result size, use function with same name and suffix 'I'. SEE ALSO * KDTreeQueryResultsX() X-values * KDTreeQueryResultsXY() X- and Y-values * KDTreeQueryResultsDistances() distances -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/
void kdtreetsqueryresultstags(const kdtree &kdt, const kdtreerequestbuffer &buf, integer_1d_array &tags, const xparams _xparams = alglib::xdefault);
/************************************************************************* X-values from last query associated with kdtreerequestbuffer object. INPUT PARAMETERS KDT - KD-tree Buf - request buffer object created for this particular instance of kd-tree structure. X - possibly pre-allocated buffer. If X is too small to store result, it is resized. If size(X) is enough to store result, it is left unchanged. OUTPUT PARAMETERS X - rows are filled with X-values NOTES 1. points are ordered by distance from the query point (first = closest) 2. if X is larger than required to store result, only leading part will be overwritten; trailing part will be left unchanged. So if on input X = [[A,B],[C,D]], and result is [1,2], then on exit we will get X = [[1,2],[C,D]]. This is done purposely to increase performance; if you want function to resize array according to result size, use function with same name and suffix 'I'. SEE ALSO * KDTreeQueryResultsXY() X- and Y-values * KDTreeQueryResultsTags() tag values * KDTreeQueryResultsDistances() distances -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/
void kdtreetsqueryresultsx(const kdtree &kdt, const kdtreerequestbuffer &buf, real_2d_array &x, const xparams _xparams = alglib::xdefault);
/************************************************************************* X- and Y-values from last query associated with kdtreerequestbuffer object. INPUT PARAMETERS KDT - KD-tree Buf - request buffer object created for this particular instance of kd-tree structure. XY - possibly pre-allocated buffer. If XY is too small to store result, it is resized. If size(XY) is enough to store result, it is left unchanged. OUTPUT PARAMETERS XY - rows are filled with points: first NX columns with X-values, next NY columns - with Y-values. NOTES 1. points are ordered by distance from the query point (first = closest) 2. if XY is larger than required to store result, only leading part will be overwritten; trailing part will be left unchanged. So if on input XY = [[A,B],[C,D]], and result is [1,2], then on exit we will get XY = [[1,2],[C,D]]. This is done purposely to increase performance; if you want function to resize array according to result size, use function with same name and suffix 'I'. SEE ALSO * KDTreeQueryResultsX() X-values * KDTreeQueryResultsTags() tag values * KDTreeQueryResultsDistances() distances -- ALGLIB -- Copyright 28.02.2010 by Bochkanov Sergey *************************************************************************/
void kdtreetsqueryresultsxy(const kdtree &kdt, const kdtreerequestbuffer &buf, real_2d_array &xy, const xparams _xparams = alglib::xdefault);
/************************************************************************* R-NN query: all points within R-sphere centered at X, using external thread-local buffer, sorted by distance between point and X (by ascending). You can call this function from multiple threads for same kd-tree instance, assuming that different instances of buffer object are passed to different threads. NOTE: it is also possible to perform unordered queries by means of kdtreequeryrnnu() and kdtreetsqueryrnnu() functions. Such queries are faster because we do not have to use heap structure for sorting. INPUT PARAMETERS KDT - KD-tree Buf - request buffer object created for this particular instance of kd-tree structure with kdtreecreaterequestbuffer() function. X - point, array[0..NX-1]. R - radius of sphere (in corresponding norm), R>0 SelfMatch - whether self-matches are allowed: * if True, nearest neighbor may be the point itself (if it exists in original dataset) * if False, then only points with non-zero distance are returned * if not given, considered True RESULT number of neighbors found, >=0 This subroutine performs query and stores its result in the internal structures of the buffer object. You can use following subroutines to obtain these results (pay attention to "Ts" in their names): * KDTreeTsQueryResultsX() to get X-values * KDTreeTsQueryResultsXY() to get X- and Y-values * KDTreeTsQueryResultsTags() to get tag values * KDTreeTsQueryResultsDistances() to get distances IMPORTANT: kd-tree buffer should be used only with KD-tree object which was used to initialize buffer. Any attempt to use buffer with different object is dangerous - you may get integrity check failure (exception) because sizes of internal arrays do not fit to dimensions of KD-tree structure. -- ALGLIB -- Copyright 18.03.2016 by Bochkanov Sergey *************************************************************************/
ae_int_t kdtreetsqueryrnn(const kdtree &kdt, kdtreerequestbuffer &buf, const real_1d_array &x, const double r, const bool selfmatch, const xparams _xparams = alglib::xdefault);
ae_int_t kdtreetsqueryrnn(const kdtree &kdt, kdtreerequestbuffer &buf, const real_1d_array &x, const double r, const xparams _xparams = alglib::xdefault);
/************************************************************************* R-NN query: all points within R-sphere centered at X, using external thread-local buffer, no ordering by distance, as indicated by "U" suffix (faster than ordered query, for large queries - significantly faster). You can call this function from multiple threads for same kd-tree instance, assuming that different instances of buffer object are passed to different threads. INPUT PARAMETERS KDT - KD-tree Buf - request buffer object created for this particular instance of kd-tree structure with kdtreecreaterequestbuffer() function. X - point, array[0..NX-1]. R - radius of sphere (in corresponding norm), R>0 SelfMatch - whether self-matches are allowed: * if True, nearest neighbor may be the point itself (if it exists in original dataset) * if False, then only points with non-zero distance are returned * if not given, considered True RESULT number of neighbors found, >=0 This subroutine performs query and stores its result in the internal structures of the buffer object. You can use following subroutines to obtain these results (pay attention to "Ts" in their names): * KDTreeTsQueryResultsX() to get X-values * KDTreeTsQueryResultsXY() to get X- and Y-values * KDTreeTsQueryResultsTags() to get tag values * KDTreeTsQueryResultsDistances() to get distances As indicated by "U" suffix, this function returns unordered results. IMPORTANT: kd-tree buffer should be used only with KD-tree object which was used to initialize buffer. Any attempt to use buffer with different object is dangerous - you may get integrity check failure (exception) because sizes of internal arrays do not fit to dimensions of KD-tree structure. -- ALGLIB -- Copyright 18.03.2016 by Bochkanov Sergey *************************************************************************/
ae_int_t kdtreetsqueryrnnu(const kdtree &kdt, kdtreerequestbuffer &buf, const real_1d_array &x, const double r, const bool selfmatch, const xparams _xparams = alglib::xdefault);
ae_int_t kdtreetsqueryrnnu(const kdtree &kdt, kdtreerequestbuffer &buf, const real_1d_array &x, const double r, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function unserializes data structure from string/stream. *************************************************************************/
void kdtreeunserialize(const std::string &s_in, kdtree &obj);
void kdtreeunserialize(const std::istream &s_in, kdtree &obj);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "alglibmisc.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        real_2d_array a = "[[0,0],[0,1],[1,0],[1,1]]";
        ae_int_t nx = 2;
        ae_int_t ny = 0;
        ae_int_t normtype = 2;
        kdtree kdt;
        real_1d_array x;
        real_1d_array x1;
        real_2d_array r = "[[]]";
        ae_int_t k;

        kdtreebuild(a, nx, ny, normtype, kdt);

        x = "[-1,0]";
        k = kdtreequeryknn(kdt, x, 1);
        printf("%d\n", int(k)); // EXPECTED: 1
        kdtreequeryresultsx(kdt, r);
        printf("%s\n", r.tostring(1).c_str()); // EXPECTED: [[0,0]]

        x1 = "[+0.9,0.1]";
        k = kdtreequeryknn(kdt, x1, 1);
        printf("%d\n", int(k)); // EXPECTED: 1
        kdtreequeryresultsx(kdt, r);
        printf("%s\n", r.tostring(1).c_str()); // EXPECTED: [[1,0]]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "alglibmisc.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        real_2d_array a = "[[0,0],[0,1],[1,0],[1,1]]";
        ae_int_t nx = 2;
        ae_int_t ny = 0;
        ae_int_t normtype = 2;
        kdtree kdt0;
        kdtree kdt1;
        std::string s;
        real_1d_array x;
        real_2d_array r0 = "[[]]";
        real_2d_array r1 = "[[]]";

        //
        // Build tree and serialize it
        //
        kdtreebuild(a, nx, ny, normtype, kdt0);
        alglib::kdtreeserialize(kdt0, s);
        alglib::kdtreeunserialize(s, kdt1);

        //
        // Compare results from KNN queries
        //
        x = "[-1,0]";
        kdtreequeryknn(kdt0, x, 1);
        kdtreequeryresultsx(kdt0, r0);
        kdtreequeryknn(kdt1, x, 1);
        kdtreequeryresultsx(kdt1, r1);
        printf("%s\n", r0.tostring(1).c_str()); // EXPECTED: [[0,0]]
        printf("%s\n", r1.tostring(1).c_str()); // EXPECTED: [[0,0]]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

nleqreport
nleqstate
nleqcreatelm
nleqiteration
nleqrestartfrom
nleqresults
nleqresultsbuf
nleqsetcond
nleqsetstpmax
nleqsetxrep
nleqsolve
/************************************************************************* *************************************************************************/
class nleqreport { public: nleqreport(); nleqreport(const nleqreport &rhs); nleqreport& operator=(const nleqreport &rhs); virtual ~nleqreport(); ae_int_t iterationscount; ae_int_t nfunc; ae_int_t njac; ae_int_t terminationtype; };
/************************************************************************* *************************************************************************/
class nleqstate { public: nleqstate(); nleqstate(const nleqstate &rhs); nleqstate& operator=(const nleqstate &rhs); virtual ~nleqstate(); };
/************************************************************************* LEVENBERG-MARQUARDT-LIKE NONLINEAR SOLVER DESCRIPTION: This algorithm solves system of nonlinear equations F[0](x[0], ..., x[n-1]) = 0 F[1](x[0], ..., x[n-1]) = 0 ... F[M-1](x[0], ..., x[n-1]) = 0 where M and N do not necessarily coincide. Algorithm converges quadratically under following conditions: * the solution set XS is nonempty * for some xs in XS there exists a neighbourhood N(xs) such that: * vector function F(x) and its Jacobian J(x) are continuously differentiable on N * ||F(x)|| provides local error bound on N, i.e. there exists c1 such that ||F(x)||>c1*distance(x,XS) Note that these conditions are much weaker than usual non-singularity conditions. For example, algorithm will converge for any affine function F (whether its Jacobian is singular or not). REQUIREMENTS: Algorithm will request following information during its operation: * function vector F[] and Jacobian matrix at given point X * value of merit function f(x)=F[0]^2(x)+...+F[M-1]^2(x) at given point X USAGE: 1. User initializes algorithm state with NLEQCreateLM() call 2. User tunes solver parameters with NLEQSetCond(), NLEQSetStpMax() and other functions 3. User calls NLEQSolve() function which takes algorithm state and pointers (delegates, etc.) to callback functions which calculate merit function value and Jacobian. 4. User calls NLEQResults() to get solution 5. Optionally, user may call NLEQRestartFrom() to solve another problem with same parameters (N/M) but another starting point and/or another function vector. NLEQRestartFrom() allows to reuse already initialized structure. INPUT PARAMETERS: N - space dimension, N>1: * if provided, only leading N elements of X are used * if not provided, determined automatically from size of X M - system size X - starting point OUTPUT PARAMETERS: State - structure which stores algorithm state NOTES: 1. you may tune stopping conditions with NLEQSetCond() function 2. if target function contains exp() or other fast growing functions, and optimization algorithm makes too large steps which lead to overflow, use NLEQSetStpMax() function to bound algorithm's steps. 3. this algorithm is a slightly modified implementation of the method described in 'Levenberg-Marquardt method for constrained nonlinear equations with strong local convergence properties' by Christian Kanzow, Nobuo Yamashita and Masao Fukushima, and further developed in 'On the convergence of a New Levenberg-Marquardt Method' by Jin-yan Fan and Ya-Xiang Yuan. -- ALGLIB -- Copyright 20.08.2009 by Bochkanov Sergey *************************************************************************/
void nleqcreatelm(const ae_int_t n, const ae_int_t m, const real_1d_array &x, nleqstate &state, const xparams _xparams = alglib::xdefault); void nleqcreatelm(const ae_int_t m, const real_1d_array &x, nleqstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function provides a reverse communication interface The reverse communication interface is not documented and is not recommended for use. See below for functions which provide a better documented API *************************************************************************/
bool nleqiteration(nleqstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine restarts the NLEQ solver from a new point. All solver parameters are left unchanged. This function allows one to solve multiple problems (which must have the same number of dimensions) without object reallocation penalty. INPUT PARAMETERS: State - structure used for reverse communication, previously allocated with an NLEQCreateLM call. X - new starting point. -- ALGLIB -- Copyright 30.07.2010 by Bochkanov Sergey *************************************************************************/
void nleqrestartfrom(nleqstate &state, const real_1d_array &x, const xparams _xparams = alglib::xdefault);
/************************************************************************* NLEQ solver results INPUT PARAMETERS: State - algorithm state. OUTPUT PARAMETERS: X - array[0..N-1], solution Rep - optimization report: * Rep.TerminationType completion code: * -4 ERROR: algorithm has converged to a stationary point Xf which is a local minimum of f=F[0]^2+...+F[m-1]^2, but is not a solution of the nonlinear system. * 1 sqrt(f)<=EpsF. * 5 MaxIts steps were taken * 7 stopping conditions are too stringent, further improvement is impossible * Rep.IterationsCount contains iterations count * NFEV contains number of function calculations * ActiveConstraints contains number of active constraints -- ALGLIB -- Copyright 20.08.2009 by Bochkanov Sergey *************************************************************************/
void nleqresults(const nleqstate &state, real_1d_array &x, nleqreport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* NLEQ solver results Buffered implementation of NLEQResults(), which uses pre-allocated buffer to store X[]. If buffer size is too small, it resizes buffer. It is intended to be used in the inner cycles of performance critical algorithms where array reallocation penalty is too large to be ignored. -- ALGLIB -- Copyright 20.08.2009 by Bochkanov Sergey *************************************************************************/
void nleqresultsbuf(const nleqstate &state, real_1d_array &x, nleqreport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets stopping conditions for the nonlinear solver INPUT PARAMETERS: State - structure which stores algorithm state EpsF - >=0 The subroutine finishes its work if on the k+1-th iteration the condition ||F||<=EpsF is satisfied MaxIts - maximum number of iterations. If MaxIts=0, the number of iterations is unlimited. Passing EpsF=0 and MaxIts=0 simultaneously will lead to automatic stopping criterion selection (small EpsF). -- ALGLIB -- Copyright 20.08.2010 by Bochkanov Sergey *************************************************************************/
void nleqsetcond(nleqstate &state, const double epsf, const ae_int_t maxits, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets the maximum step length INPUT PARAMETERS: State - structure which stores algorithm state StpMax - maximum step length, >=0. Set StpMax to 0.0, if you don't want to limit step length. Use this subroutine when the target function contains exp() or other fast growing functions, and the algorithm makes steps which are too large and lead to overflow. This function allows us to reject steps that are too large (and therefore expose us to possible overflow) without actually calculating the function value at x+stp*d. -- ALGLIB -- Copyright 20.08.2010 by Bochkanov Sergey *************************************************************************/
void nleqsetstpmax(nleqstate &state, const double stpmax, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function turns on/off reporting. INPUT PARAMETERS: State - structure which stores algorithm state NeedXRep- whether iteration reports are needed or not If NeedXRep is True, algorithm will call rep() callback function if it is provided to NLEQSolve(). -- ALGLIB -- Copyright 20.08.2010 by Bochkanov Sergey *************************************************************************/
void nleqsetxrep(nleqstate &state, const bool needxrep, const xparams _xparams = alglib::xdefault);
/************************************************************************* This family of functions is used to launch iterations of the nonlinear solver These functions accept the following parameters: state - algorithm state func - callback which calculates the function (or merit function) value func at a given point x jac - callback which calculates the function vector fi[] and Jacobian jac at a given point x rep - optional callback which is called after each iteration, can be NULL ptr - optional pointer which is passed to func/grad/hess/jac/rep, can be NULL -- ALGLIB -- Copyright 20.03.2009 by Bochkanov Sergey *************************************************************************/
void nleqsolve(nleqstate &state, void (*func)(const real_1d_array &x, double &func, void *ptr), void (*jac)(const real_1d_array &x, real_1d_array &fi, real_2d_array &jac, void *ptr), void (*rep)(const real_1d_array &x, double func, void *ptr) = NULL, void *ptr = NULL, const xparams _xparams = alglib::xdefault);
nlsreport
nlsstate
nlscreatedfo
nlsiteration
nlsoptimize
nlsrequesttermination
nlsrestartfrom
nlsresults
nlsresultsbuf
nlssetalgo2ps
nlssetalgodfolsa
nlssetbc
nlssetcond
nlssetscale
nlssetxrep
nls_derivative_free Nonlinear least squares optimization using derivative-free algorithms
/************************************************************************* Optimization report, filled by NLSResults() function FIELDS: * TerminationType, completion code, which is a sum of a BASIC code and an ADDITIONAL code. The following basic codes denote failure: * -8 optimizer detected NAN/INF either in the function itself, or its Jacobian; recovery was impossible, abnormal termination reported. * -3 box constraints are inconsistent The following basic codes denote success: * 2 relative step is no more than EpsX. * 5 MaxIts steps were taken * 7 stopping conditions are too stringent, further improvement is impossible * 8 terminated by user who called NLSRequestTermination(). X contains the point which was "current accepted" when the termination request was submitted. Additional codes can be set on success, but not on failure: * +800 if during algorithm execution the solver encountered NAN/INF values in the target or constraints but managed to recover by reducing trust region radius, the solver returns one of the SUCCESS codes but adds +800 to the code. * IterationsCount, contains iterations count * NFunc, number of function calculations *************************************************************************/
class nlsreport { public: nlsreport(); nlsreport(const nlsreport &rhs); nlsreport& operator=(const nlsreport &rhs); virtual ~nlsreport(); ae_int_t iterationscount; ae_int_t terminationtype; ae_int_t nfunc; };
/************************************************************************* Nonlinear least squares solver *************************************************************************/
class nlsstate { public: nlsstate(); nlsstate(const nlsstate &rhs); nlsstate& operator=(const nlsstate &rhs); virtual ~nlsstate(); };
/************************************************************************* DERIVATIVE-FREE NONLINEAR LEAST SQUARES DESCRIPTION: This function creates a NLS solver configured to solve a constrained nonlinear least squares problem min F(x) = f[0]^2 + f[1]^2 + ... + f[m-1]^2 where f[i] are available, but not their derivatives. The functions f[i] are assumed to be smooth, but may have some amount of numerical noise (either random noise or deterministic noise arising from numerical simulations or other complex numerical processes). INPUT PARAMETERS: N - dimension, N>1 * if given, only leading N elements of X are used * if not given, automatically determined from size of X M - number of functions f[i], M>=1 X - initial point, array[N] OUTPUT PARAMETERS: State - structure which stores algorithm state -- ALGLIB -- Copyright 15.10.2023 by Bochkanov Sergey *************************************************************************/
void nlscreatedfo(const ae_int_t n, const ae_int_t m, const real_1d_array &x, nlsstate &state, const xparams _xparams = alglib::xdefault); void nlscreatedfo(const ae_int_t m, const real_1d_array &x, nlsstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This function provides a reverse communication interface The reverse communication interface is not documented and is not recommended for use. See below for functions which provide a better documented API *************************************************************************/
bool nlsiteration(nlsstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* This family of functions is used to launch iterations of the nonlinear optimizer These functions accept the following parameters: state - algorithm state fvec - callback which calculates the function vector fi[] at a given point x jac - callback which calculates the function vector fi[] and Jacobian jac at a given point x rep - optional callback which is called after each iteration, can be NULL ptr - optional pointer which is passed to func/grad/hess/jac/rep, can be NULL CALLBACK PARALLELISM The NLS optimizer supports parallel model evaluation ('callback parallelism'). This feature, which is present in commercial ALGLIB editions, greatly accelerates optimization when using a solver which issues batch requests, i.e. multiple requests for target values, which can be computed independently by different threads. Callback parallelism is usually beneficial when processing a batch request requires more than several milliseconds. It also requires a solver which issues requests in convenient batches, e.g. the 2PS solver. See ALGLIB Reference Manual, 'Working with commercial version' section for more information. -- ALGLIB -- Copyright 15.10.2023 by Bochkanov Sergey *************************************************************************/
void nlsoptimize(nlsstate &state, void (*fvec)(const real_1d_array &x, real_1d_array &fi, void *ptr), void (*rep)(const real_1d_array &x, double func, void *ptr) = NULL, void *ptr = NULL, const xparams _xparams = alglib::xdefault); void nlsoptimize(nlsstate &state, void (*fvec)(const real_1d_array &x, real_1d_array &fi, void *ptr), void (*jac)(const real_1d_array &x, real_1d_array &fi, real_2d_array &jac, void *ptr), void (*rep)(const real_1d_array &x, double func, void *ptr) = NULL, void *ptr = NULL, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This subroutine submits a request for termination of the running optimizer. It should be called from a user-supplied callback when the user decides that it is time to "smoothly" terminate the optimization process. As a result, the optimizer stops at the point which was "current accepted" when the termination request was submitted and returns error code 8 (successful termination). INPUT PARAMETERS: State - optimizer structure NOTE: after a request for termination the optimizer may perform several additional calls to user-supplied callbacks. It does NOT guarantee to stop immediately - it just guarantees that these additional calls will be discarded later. NOTE: calling this function on an optimizer which is NOT running will have no effect. NOTE: multiple calls to this function are possible. The first call is counted, subsequent calls are silently ignored. -- ALGLIB -- Copyright 08.10.2014 by Bochkanov Sergey *************************************************************************/
void nlsrequesttermination(nlsstate &state, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine restarts the solver from a new point. All optimization parameters are left unchanged. This function allows one to solve multiple optimization problems (which must have the same number of dimensions) without object reallocation penalty. INPUT PARAMETERS: State - optimizer X - new starting point. -- ALGLIB -- Copyright 30.07.2010 by Bochkanov Sergey *************************************************************************/
void nlsrestartfrom(nlsstate &state, const real_1d_array &x, const xparams _xparams = alglib::xdefault);
/************************************************************************* Nonlinear least squares solver results INPUT PARAMETERS: State - algorithm state OUTPUT PARAMETERS: X - array[N], solution Rep - optimization report; includes termination codes and additional information. Termination codes are returned in the rep.terminationtype field, its possible values are listed below, see comments for this structure for more info. The termination code is a sum of a basic code (success or failure) and one/several additional codes. Additional codes are returned only for successful termination. The following basic codes can be returned: * -8 optimizer detected NAN/INF values in the target or nonlinear constraints and failed to recover * -3 box constraints are inconsistent * 2 relative step is no more than EpsX. * 5 MaxIts steps were taken * 7 stopping conditions are too stringent, further improvement is impossible * 8 terminated by user who called nlsrequesttermination(). X contains the point which was "current accepted" when the termination request was submitted. The following additional codes can be returned (added to a basic code): * +800 if during algorithm execution the solver encountered NAN/INF values in the target or constraints but managed to recover by reducing trust region radius, the solver returns one of the SUCCESS codes but adds +800 to the code. -- ALGLIB -- Copyright 15.10.2023 by Bochkanov Sergey *************************************************************************/
void nlsresults(const nlsstate &state, real_1d_array &x, nlsreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* Buffered implementation of NLSResults(), which uses pre-allocated buffer to store X[]. If buffer size is too small, it resizes buffer. It is intended to be used in the inner cycles of performance critical algorithms where array reallocation penalty is too large to be ignored. -- ALGLIB -- Copyright 10.03.2009 by Bochkanov Sergey *************************************************************************/
void nlsresultsbuf(const nlsstate &state, real_1d_array &x, nlsreport &rep, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets the derivative-free NLS optimization algorithm to the 2PS (2-Point Stencil) algorithm. This solver is recommended for the following cases: * an expensive target function is minimized by the commercial ALGLIB with callback parallelism activated (see ALGLIB Reference Manual for more information about parallel callbacks) * an inexpensive target function is minimized by any ALGLIB edition (free or commercial) This function works only with solvers created with nlscreatedfo(), i.e. in the derivative-free mode. See the end of this comment for more information about the algorithm. INPUT PARAMETERS: State - solver; must be created with nlscreatedfo() call - passing an object initialized with another constructor function will result in an exception. NNoisyRestarts - number of restarts performed to combat noise in the target (see below, section 'RESTARTS', for a detailed discussion): * 0 means that no restarts are performed, the solver stops as soon as stopping criteria are met. Recommended for noise-free tasks. * >0 means that when the stopping criteria are met, the solver will perform a restart: increase the trust radius and resample points. It often helps to solve problems with random or deterministic noise. ALGORITHM DESCRIPTION AND DISCUSSION The 2PS algorithm is a derivative-free model-based nonlinear least squares solver which builds local models by evaluating the target at N additional points around the current one, with geometry similar to the 2-point finite difference stencil. Similarly to the Levenberg-Marquardt algorithm, the solver shows quadratic convergence despite the fact that it builds linear models. 
When compared with the DFO-LSA solver, the 2PS algorithm has the following distinctive properties: * the 2PS algorithm performs more target function evaluations per iteration (at least N+1 instead of the 1-2 usually performed by DFO-LSA) * 2PS requires several times fewer iterations than DFO-LSA because each iteration extracts and utilizes more information about the target. This difference tends to grow as N increases * contrary to that, DFO-LSA is much better at reusing previously computed points. Thus, DFO-LSA needs several times fewer target evaluations than 2PS, usually about 3-4 times fewer (this ratio seems to be more or less constant independently of N). The summary is that: * for expensive targets 2PS provides better parallelism potential than DFO-LSA because the former issues many simultaneous target evaluation requests which can be easily parallelized. It is possible for 2PS to outperform DFO-LSA by parallelism alone, despite the fact that the latter needs 3-4 times fewer target function evaluations. * for inexpensive targets 2PS may win because it needs many times fewer iterations, and thus the overhead associated with working set updates is also many times smaller. RESTARTS Restarts are a strategy used to deal with random and deterministic noise in the target/constraints. Noise in the objective function can be random, arising from measurement or simulation uncertainty, or deterministic, resulting from complex underlying phenomena like numerical errors or branches in the target. Its influence is especially high at the last stages of the optimization, when all computations are performed with small values of a trust radius. Restarts allow the optimization algorithm to be robust against both types of noise by temporarily increasing the trust radius in order to capture the global structure of the target and avoid being trapped by noise-produced local features. A restart is usually performed when the stopping criteria are triggered. 
Instead of stopping, the solver increases the trust radius to its initial value and tries to rebuild a model. If you decide to optimize with restarts, it is recommended to perform a small number of restarts, up to 5. Generally, restarts do not allow one to completely solve the problem of noise, but it is still possible to achieve some additional progress. -- ALGLIB -- Copyright 15.10.2023 by Bochkanov Sergey *************************************************************************/
void nlssetalgo2ps(nlsstate &state, const ae_int_t nnoisyrestarts, const xparams _xparams = alglib::xdefault); void nlssetalgo2ps(nlsstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This function sets the derivative-free NLS optimization algorithm to the DFO-LSA algorithm, an ALGLIB implementation (with several modifications) of the original DFO-LS algorithm by Cartis, C., Fiala, J., Marteau, B. and Roberts, L. ('Improving the Flexibility and Robustness of Model-Based Derivative-Free Optimization Solvers', 2019). The A in DFO-LSA stands for ALGLIB, in order to distinguish our slightly modified implementation from the original algorithm. This solver is recommended for the following case: an expensive target function is minimized without parallelism being used (either free ALGLIB is used, or a commercial one is used but the target callback is non-reentrant, i.e. it can not be simultaneously called from multiple threads). This function works only with solvers created with nlscreatedfo(), i.e. in the derivative-free mode. See the end of this comment for more information about the algorithm. INPUT PARAMETERS: State - solver; must be created with nlscreatedfo() call - passing an object initialized with another constructor function will result in an exception. NNoisyRestarts - number of restarts performed to combat noise in the target (see below, section 'RESTARTS', for a detailed discussion): * 0 means that no restarts are performed, the solver stops as soon as stopping criteria are met. Recommended for noise-free tasks. * >0 means that when the stopping criteria are met, the solver will perform a restart: increase the trust radius and resample points. It often helps to solve problems with random or deterministic noise. ALGORITHM DESCRIPTION AND DISCUSSION The DFO-LSA algorithm is a derivative-free model-based NLS solver which builds local models by remembering N+1 previously computed target values and updating them as optimization progresses. Similarly to the Levenberg-Marquardt algorithm, the solver shows quadratic convergence despite the fact that it builds linear models. 
Our implementation generally follows the same lines as the original DFO-LS, with several modifications to trust radius update strategies, stability fixes (unlike the original DFO-LS, our implementation can handle and recover from the target breaking down due to infeasible arguments) and other minor implementation details. When compared with the 2PS solver, the DFO-LSA algorithm has the following distinctive properties: * the 2PS algorithm performs more target function evaluations per iteration (at least N+1 instead of the 1-2 usually performed by DFO-LSA) * 2PS requires several times fewer iterations than DFO-LSA because each iteration extracts and utilizes more information about the target. This difference tends to grow as N increases * contrary to that, DFO-LSA is much better at reusing previously computed points. Thus, DFO-LSA needs several times fewer target evaluations than 2PS, usually about 3-4 times fewer (this ratio seems to be more or less constant independently of N). The summary is that: * for expensive targets DFO-LSA is much more efficient than 2PS because it reuses previously computed target values as much as possible. * however, DFO-LSA has little parallelism potential because (unlike 2PS) it does not evaluate the target at several points simultaneously and independently * additionally, because DFO-LSA performs many times more iterations than 2PS, iteration overhead (working set updates and matrix inversions) is an issue here. For inexpensive targets it is possible for DFO-LSA to be outperformed by 2PS merely because of the linear algebra cost. RESTARTS Restarts are a strategy used to deal with random and deterministic noise in the target/constraints. Noise in the objective function can be random, arising from measurement or simulation uncertainty, or deterministic, resulting from complex underlying phenomena like numerical errors or branches in the target. 
Its influence is especially high at the last stages of the optimization, when all computations are performed with small values of a trust radius. Restarts allow the optimization algorithm to be robust against both types of noise by temporarily increasing the trust radius in order to capture the global structure of the target and avoid being trapped by noise-produced local features. A restart is usually performed when the stopping criteria are triggered. Instead of stopping, the solver increases the trust radius to its initial value and tries to rebuild a model. If you decide to optimize with restarts, it is recommended to perform a small number of restarts, up to 5. Generally, restarts do not allow one to completely solve the problem of noise, but it is still possible to achieve some additional progress. -- ALGLIB -- Copyright 15.10.2023 by Bochkanov Sergey *************************************************************************/
void nlssetalgodfolsa(nlsstate &state, const ae_int_t nnoisyrestarts, const xparams _xparams = alglib::xdefault); void nlssetalgodfolsa(nlsstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This function sets box constraints Box constraints are inactive by default (after initial creation). They are preserved until explicitly turned off with another SetBC() call. INPUT PARAMETERS: State - structure stores algorithm state BndL - lower bounds, array[N]. If some (all) variables are unbounded, you may specify very small number or -INF (the latter is recommended). BndU - upper bounds, array[N]. If some (all) variables are unbounded, you may specify very large number or +INF (the latter is recommended). NOTE 1: it is possible to specify BndL[i]=BndU[i]. In this case I-th variable will be "frozen" at X[i]=BndL[i]=BndU[i]. NOTE 2: unless explicitly mentioned in the specific NLS algorithm description, the following holds: * box constraints are always satisfied exactly * the target is NOT evaluated outside of the box-constrained area -- ALGLIB -- Copyright 15.10.2023 by Bochkanov Sergey *************************************************************************/
void nlssetbc(nlsstate &state, const real_1d_array &bndl, const real_1d_array &bndu, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This function sets stopping conditions INPUT PARAMETERS: State - structure which stores algorithm state EpsX - stop when the scaled trust region radius is smaller than EpsX. MaxIts - maximum number of iterations. If MaxIts=0, the number of iterations is unlimited. Passing EpsX=0 and MaxIts=0 (simultaneously) will lead to automatic stopping criterion selection (small EpsX). -- ALGLIB -- Copyright 15.10.2023 by Bochkanov Sergey *************************************************************************/
void nlssetcond(nlsstate &state, const double epsx, const ae_int_t maxits, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This function sets variable scales ALGLIB optimizers use scaling matrices to test stopping conditions (step size and gradient are scaled before comparison with tolerances). The scale of the I-th variable is a translation invariant measure of: a) "how large" the variable is b) how large the step should be to make significant changes in the function Generally, scale is NOT considered to be a form of preconditioner. But derivative-free optimizers often use the scaling matrix both in the stopping condition tests and as a preconditioner. Proper scaling is very important for algorithm performance. It is less important for the quality of results, but still has some influence (it is easier to converge when variables are properly scaled, so premature stopping is possible when very badly scaled variables are combined with relaxed stopping conditions). INPUT PARAMETERS: State - structure which stores algorithm state S - array[N], non-zero scaling coefficients S[i] may be negative, sign doesn't matter. -- ALGLIB -- Copyright 15.10.2023 by Bochkanov Sergey *************************************************************************/
void nlssetscale(nlsstate &state, const real_1d_array &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function turns on/off reporting. INPUT PARAMETERS: State - structure which stores algorithm state NeedXRep- whether iteration reports are needed or not If NeedXRep is True, algorithm will call rep() callback function if it is provided to NLSOptimize(). -- ALGLIB -- Copyright 15.10.2023 by Bochkanov Sergey *************************************************************************/
void nlssetxrep(nlsstate &state, const bool needxrep, const xparams _xparams = alglib::xdefault);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "optimization.h"

using namespace alglib;
void  function1_fvec(const real_1d_array &x, real_1d_array &fi, void *ptr)
{
    //
    // this callback calculates
    // f0(x0,x1) = 10*(x0+3)^2,
    // f1(x0,x1) = (x1-3)^2
    //
    fi[0] = 10*pow(x[0]+3,2);
    fi[1] = pow(x[1]-3,2);
}
int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates minimization of F(x0,x1) = f0^2+f1^2, where 
        //
        //     f0(x0,x1) = 10*(x0+3)^2
        //     f1(x0,x1) = (x1-3)^2
        //
        // subject to box constraints
        //
        //     -1 <= x0 <= +1
        //     -1 <= x1 <= +1
        //
        // using DFO mode of the NLS optimizer.
        //
        // IMPORTANT: the  NLS  optimizer   supports   parallel  model  evaluation
        //            ('callback parallelism'). This feature, which  is present in
        //            commercial ALGLIB editions, greatly accelerates optimization
        //            when  using  a  solver  which  issues  batch  requests, i.e.
        //            multiple requests  for  target values, which can be computed
        //            independently by different threads.
        //
        //            Callback parallelism is usually  beneficial when  processing
        //            a  batch  request  requires  more than several milliseconds.
        //            This particular  example,  of  course,  is  not  suited  for
        //            callback parallelism.
        //
        //            It  also  requires  the  solver  which  issues  requests  in
        //            convenient batches, e.g. 2PS solver.
        //
        //            See ALGLIB Reference Manual, 'Working with commercial version'
        //            section,  and  comments  on  nlsoptimize() function for more
        //            information.
        //
        real_1d_array x = "[0,0]";
        real_1d_array s = "[1,1]";
        real_1d_array bndl = "[-1,-1]";
        real_1d_array bndu = "[+1,+1]";
        double epsx = 0.0000001;
        ae_int_t maxits = 0;
        nlsstate state;
        nlsreport rep;

        //
        // Create optimizer, tell it to:
        // * use derivative-free mode
        // * use unit scale for all variables (s is a unit vector)
        // * stop after short enough step (less than epsx)
        //
        nlscreatedfo(2, x, state);
        nlssetcond(state, epsx, maxits);
        nlssetscale(state, s);
        nlssetbc(state, bndl, bndu);

        //
        // Choose a derivative-free nonlinear least squares algorithm. ALGLIB
        // supports the following solvers:
        //
        // * DFO-LSA  - a modified version of DFO-LS (Cartis, Fiala, Marteau,
        //   Roberts), with "A" standing for ALGLIB in order to distinguish it
        //   from the original version. This algorithm needs the fewest
        //   function evaluations, but it has relatively high iteration
        //   overhead and no callback parallelism potential (it issues target
        //   evaluation requests one by one, so they cannot be parallelized).
        //   Recommended for expensive targets with no parallelism support.
        //
        // * 2PS (two-point stencil) - an easily parallelized algorithm
        //   developed by ALGLIB Project. It needs about 3x-4x more target
        //   evaluations than DFO-LSA (the ratio has no strong dependence on
        //   the problem size), however it issues target evaluation requests
        //   in large batches, so they can be computed in parallel. Additionally
        //   it has low iteration overhead, so it can be better suited than
        //   DFO-LSA for problems with cheap targets.
        //
        // Both solvers demonstrate quadratic convergence similarly to the
        // Levenberg-Marquardt method.
        //
        // The summary is:
        // * expensive target, no parallelism => DFO-LSA 
        // * expensive target, parallel callbacks => 2PS
        // * inexpensive target => most likely 2PS, maybe DFO-LSA
        //
        // The code below sets the algorithm to be DFO-LSA, then switches
        // it to 2PS.
        //
        nlssetalgodfolsa(state);
        nlssetalgo2ps(state);

        //
        // Solve the problem.
        //
        // The code below does not use parallelism. If you want to activate
        // callback parallelism, use commercial edition of ALGLIB and pass
        // alglib::parallelcallbacks as an additional parameter to nlsoptimize().
        //
        // Callback parallelism is intended for expensive problems where one
        // batch (~N target evaluations) takes tens to hundreds of milliseconds
        // to compute.
        //
        alglib::nlsoptimize(state, function1_fvec);

        //
        // Test optimization results
        //
        nlsresults(state, x, rep);
        printf("%s\n", x.tostring(2).c_str()); // EXPECTED: [-1,+1]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

bivariatenormalcdf
bivariatenormalpdf
errorfunction
errorfunctionc
inverf
invnormalcdf
invnormaldistribution
normalcdf
normaldistribution
normalpdf
/*************************************************************************
Bivariate normal CDF

Returns the area under the bivariate Gaussian PDF with correlation
parameter equal to Rho, integrated from minus infinity to (x,y):

    bvn(x,y,rho) = 1/(2pi*sqrt(1-rho^2)) *
                   INTEGRAL[-INF..x] INTEGRAL[-INF..y] f(u,v,rho) du dv

where

    f(u,v,rho) = exp( -(u^2 - 2*rho*u*v + v^2) / (2*(1-rho^2)) )

with -1<rho<+1 and arbitrary x, y.

This subroutine uses the high-precision approximation scheme proposed by
Alan Genz in "Numerical Computation of Rectangular Bivariate and
Trivariate Normal and t Probabilities", which computes the CDF with
absolute error roughly equal to 1e-14.

This function won't fail as long as Rho is in the (-1,+1) range.

  -- ALGLIB --
     Copyright 15.11.2019 by Bochkanov Sergey
*************************************************************************/
double bivariatenormalcdf(const double x, const double y, const double rho, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Bivariate normal PDF

Returns the probability density function of the bivariate Gaussian with
correlation parameter equal to Rho:

    f(x,y,rho) = 1/(2pi*sqrt(1-rho^2)) *
                 exp( -(x^2 - 2*rho*x*y + y^2) / (2*(1-rho^2)) )

with -1<rho<+1 and arbitrary x, y.

This function won't fail as long as Rho is in the (-1,+1) range.

  -- ALGLIB --
     Copyright 15.11.2019 by Bochkanov Sergey
*************************************************************************/
double bivariatenormalpdf(const double x, const double y, const double rho, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Error function

The integral is

    erf(x) = 2/sqrt(pi) * INTEGRAL[0..x] exp(-t^2) dt.

For 0 <= |x| < 1, erf(x) = x * P4(x**2)/Q5(x**2); otherwise
erf(x) = 1 - erfc(x).

ACCURACY:

                     Relative error:
   arithmetic   domain     # trials      peak         rms
      IEEE      0,1         30000       3.7e-16     1.0e-16

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1987, 1988, 1992, 2000 by Stephen L. Moshier
*************************************************************************/
double errorfunction(const double x, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Complementary error function

    1 - erf(x) = erfc(x) = 2/sqrt(pi) * INTEGRAL[x..INF] exp(-t^2) dt

For small x, erfc(x) = 1 - erf(x); otherwise rational approximations are
computed.

ACCURACY:

                     Relative error:
   arithmetic   domain     # trials      peak         rms
      IEEE      0,26.6417   30000       5.7e-14     1.5e-14

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1987, 1988, 1992, 2000 by Stephen L. Moshier
*************************************************************************/
double errorfunctionc(const double x, const xparams _xparams = alglib::xdefault);
/************************************************************************* Inverse of the error function Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1988, 1992, 2000 by Stephen L. Moshier *************************************************************************/
double inverf(const double e, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Inverse of Normal CDF

Returns the argument, x, for which the area under the Gaussian probability
density function (integrated from minus infinity to x) is equal to y.

For small arguments 0 < y < exp(-2), the program computes

    z = sqrt( -2.0 * log(y) );

then the approximation is x = z - log(z)/z - (1/z) P(1/z) / Q(1/z). There
are two rational functions P/Q, one for 0 < y < exp(-32) and the other for
y up to exp(-2). For larger arguments, w = y - 0.5, and

    x/sqrt(2pi) = w + w^3 * R(w^2)/S(w^2).

ACCURACY:

                     Relative error:
   arithmetic   domain        # trials      peak         rms
      IEEE     0.125, 1        20000       7.2e-16     1.3e-16
      IEEE     3e-308, 0.135   50000       4.6e-16     9.8e-17

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1987, 1988, 1992, 2000 by Stephen L. Moshier
*************************************************************************/
double invnormalcdf(const double y0, const xparams _xparams = alglib::xdefault);
/************************************************************************* Same as invnormalcdf(), deprecated name *************************************************************************/
double invnormaldistribution(const double y0, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Normal distribution CDF

Returns the area under the Gaussian probability density function,
integrated from minus infinity to x:

    ndtr(x) = 1/sqrt(2pi) * INTEGRAL[-INF..x] exp(-t^2/2) dt
            = ( 1 + erf(z) ) / 2
            = erfc(-z) / 2

where z = x/sqrt(2). Computation is via the functions erf and erfc.

ACCURACY:

                     Relative error:
   arithmetic   domain     # trials      peak         rms
      IEEE     -13,0        30000       3.4e-14     6.7e-15

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1987, 1988, 1992, 2000 by Stephen L. Moshier
*************************************************************************/
double normalcdf(const double x, const xparams _xparams = alglib::xdefault);
/************************************************************************* Same as normalcdf(), obsolete name. *************************************************************************/
double normaldistribution(const double x, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Normal distribution PDF

Returns the Gaussian probability density function:

    f(x) = 1/sqrt(2pi) * exp(-x^2/2)

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1987, 1988, 1992, 2000 by Stephen L. Moshier
*************************************************************************/
double normalpdf(const double x, const xparams _xparams = alglib::xdefault);
normestimatorstate
normestimatorcreate
normestimatorestimatesparse
normestimatorresults
normestimatorsetseed
/*************************************************************************
This object stores the state of the iterative norm estimation algorithm.

You should use ALGLIB functions to work with this object.
*************************************************************************/
class normestimatorstate { public: normestimatorstate(); normestimatorstate(const normestimatorstate &rhs); normestimatorstate& operator=(const normestimatorstate &rhs); virtual ~normestimatorstate(); };
/*************************************************************************
This procedure initializes the matrix norm estimator.

USAGE:
1. User initializes algorithm state with a NormEstimatorCreate() call
2. User calls NormEstimatorEstimateSparse() (or NormEstimatorIteration())
3. User calls NormEstimatorResults() to get the solution.

INPUT PARAMETERS:
    M       -   number of rows in the matrix being estimated, M>0
    N       -   number of columns in the matrix being estimated, N>0
    NStart  -   number of random starting vectors,
                recommended value - at least 5.
    NIts    -   number of iterations to do with the best starting vector,
                recommended value - at least 5.

OUTPUT PARAMETERS:
    State   -   structure which stores algorithm state

NOTE: this algorithm is effectively deterministic, i.e. it always returns
the same result when repeatedly called for the same matrix. In fact, the
algorithm uses randomized starting vectors, but the internal random number
generator always generates the same sequence of random values (it is a
feature, not a bug).

The algorithm can be made non-deterministic with a NormEstimatorSetSeed(0)
call.

  -- ALGLIB --
     Copyright 06.12.2011 by Bochkanov Sergey
*************************************************************************/
void normestimatorcreate(const ae_int_t m, const ae_int_t n, const ae_int_t nstart, const ae_int_t nits, normestimatorstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function estimates the norm of the sparse M*N matrix A.

INPUT PARAMETERS:
    State   -   norm estimator state, must be initialized with a call to
                NormEstimatorCreate()
    A       -   sparse M*N matrix, must be converted to CRS format prior
                to calling this function.

After this function is over you can call NormEstimatorResults() to get an
estimate of norm(A).

  -- ALGLIB --
     Copyright 06.12.2011 by Bochkanov Sergey
*************************************************************************/
void normestimatorestimatesparse(normestimatorstate &state, const sparsematrix &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Matrix norm estimation results

INPUT PARAMETERS:
    State   -   algorithm state

OUTPUT PARAMETERS:
    Nrm     -   estimate of the matrix norm, Nrm>=0

  -- ALGLIB --
     Copyright 06.12.2011 by Bochkanov Sergey
*************************************************************************/
void normestimatorresults(const normestimatorstate &state, double &nrm, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function changes the seed value used by the algorithm.

In some cases we need deterministic processing, i.e. subsequent calls must
return equal results; in other cases we need a non-deterministic algorithm
which returns different results for the same matrix on every pass.

Setting a zero seed will lead to a non-deterministic algorithm, while a
non-zero value will make the algorithm deterministic.

INPUT PARAMETERS:
    State   -   norm estimator state, must be initialized with a call to
                NormEstimatorCreate()
    SeedVal -   seed value, >=0. Zero value = non-deterministic algo.

  -- ALGLIB --
     Copyright 06.12.2011 by Bochkanov Sergey
*************************************************************************/
void normestimatorsetseed(normestimatorstate &state, const ae_int_t seedval, const xparams _xparams = alglib::xdefault);
odesolverreport
odesolverstate
odesolveriteration
odesolverresults
odesolverrkck
odesolversolve
odesolver_d1 Solving y'=-y with ODE solver
/************************************************************************* *************************************************************************/
class odesolverreport { public: odesolverreport(); odesolverreport(const odesolverreport &rhs); odesolverreport& operator=(const odesolverreport &rhs); virtual ~odesolverreport(); ae_int_t nfev; ae_int_t terminationtype; };
/************************************************************************* *************************************************************************/
class odesolverstate { public: odesolverstate(); odesolverstate(const odesolverstate &rhs); odesolverstate& operator=(const odesolverstate &rhs); virtual ~odesolverstate(); };
/*************************************************************************
This function provides reverse communication interface.

Reverse communication interface is not documented or recommended to use.
See below for functions which provide better documented API.
*************************************************************************/
bool odesolveriteration(odesolverstate &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
ODE solver results

Called after OdeSolverIteration returned False.

INPUT PARAMETERS:
    State   -   algorithm state (used by OdeSolverIteration).

OUTPUT PARAMETERS:
    M       -   number of tabulated values, M>=1
    XTbl    -   array[0..M-1], values of X
    YTbl    -   array[0..M-1,0..N-1], values of Y in X[i]
    Rep     -   solver report:
                * Rep.TerminationType completion code:
                    * -2    X is not ordered by ascending/descending or
                            there are non-distinct X[], i.e. X[i]=X[i+1]
                    * -1    incorrect parameters were specified
                    *  1    task has been solved
                * Rep.NFEV contains number of function calculations

  -- ALGLIB --
     Copyright 01.09.2009 by Bochkanov Sergey
*************************************************************************/
void odesolverresults(const odesolverstate &state, ae_int_t &m, real_1d_array &xtbl, real_2d_array &ytbl, odesolverreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Cash-Karp adaptive ODE solver.

This subroutine solves ODE Y'=f(Y,x) with initial conditions Y(xs)=Ys
(here Y may be a single variable or a vector of N variables).

INPUT PARAMETERS:
    Y       -   initial conditions, array[0..N-1].
                contains values of Y[] at X[0]
    N       -   system size
    X       -   points at which Y should be tabulated, array[0..M-1].
                Integration starts at X[0] and ends at X[M-1]; intermediate
                values at X[i] are returned too.
                SHOULD BE ORDERED BY ASCENDING OR BY DESCENDING!
    M       -   number of intermediate points + first point + last point:
                * M>2 means that you need both Y(X[M-1]) and M-2 values at
                  intermediate points
                * M=2 means that you want just to integrate from X[0] to
                  X[1] and are not interested in intermediate values.
                * M=1 means that you don't want to integrate :)
                  it is a degenerate case, but it will be handled correctly.
                * M<1 means error
    Eps     -   tolerance (absolute/relative error on each step will be
                less than Eps). When passing:
                * Eps>0, it means desired ABSOLUTE error
                * Eps<0, it means desired RELATIVE error. Relative errors
                  are calculated with respect to the maximum values of Y
                  seen so far. Be careful with this criterion when starting
                  from Y[] that are close to zero.
    H       -   initial step length; it will be adjusted automatically
                after the first step. If H=0, the step will be selected
                automatically (usually it will be equal to 0.001 of
                min(x[i]-x[j])).

OUTPUT PARAMETERS
    State   -   structure which stores algorithm state between subsequent
                calls of OdeSolverIteration. Used for reverse
                communication. This structure should be passed to the
                OdeSolverIteration subroutine.

SEE ALSO
    AutoGKSmoothW, AutoGKSingular, AutoGKIteration, AutoGKResults.

  -- ALGLIB --
     Copyright 01.09.2009 by Bochkanov Sergey
*************************************************************************/
void odesolverrkck(const real_1d_array &y, const ae_int_t n, const real_1d_array &x, const ae_int_t m, const double eps, const double h, odesolverstate &state, const xparams _xparams = alglib::xdefault);
void odesolverrkck(const real_1d_array &y, const real_1d_array &x, const double eps, const double h, odesolverstate &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This function is used to start iterations of the ODE solver.

It accepts the following parameters:
    diff    -   callback which calculates dy/dx for given y and x
    ptr     -   optional pointer which is passed to diff; can be NULL

  -- ALGLIB --
     Copyright 01.09.2009 by Bochkanov Sergey
*************************************************************************/
void odesolversolve(odesolverstate &state, void (*diff)(const real_1d_array &y, double x, real_1d_array &dy, void *ptr), void *ptr = NULL, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "diffequations.h"

using namespace alglib;
void ode_function_1_diff(const real_1d_array &y, double x, real_1d_array &dy, void *ptr) 
{
    // this callback calculates f(y[],x)=-y[0]
    dy[0] = -y[0];
}
int main(int argc, char **argv)
{
    try
    {
        real_1d_array y = "[1]";
        real_1d_array x = "[0, 1, 2, 3]";
        double eps = 0.00001;
        double h = 0;
        odesolverstate s;
        ae_int_t m;
        real_1d_array xtbl;
        real_2d_array ytbl;
        odesolverreport rep;
        odesolverrkck(y, x, eps, h, s);
        alglib::odesolversolve(s, ode_function_1_diff);
        odesolverresults(s, m, xtbl, ytbl, rep);
        printf("%d\n", int(m)); // EXPECTED: 4
        printf("%s\n", xtbl.tostring(2).c_str()); // EXPECTED: [0, 1, 2, 3]
        printf("%s\n", ytbl.tostring(2).c_str()); // EXPECTED: [[1], [0.367], [0.135], [0.050]]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

optguardnonc0report
optguardnonc1test0report
optguardnonc1test1report
optguardreport
/*************************************************************************
This structure is used for detailed reporting about a suspected C0
continuity violation.

=== WHAT IS TESTED =======================================================

The C0 test studies function values (not the gradient!) obtained during
line searches and monitors an estimate of the Lipschitz constant. Sudden
spikes usually indicate that a discontinuity was detected.

=== WHAT IS REPORTED =====================================================

Actually, the report retrieval function returns TWO report structures:

* one for the most suspicious point found so far (the one with the highest
  change in the function value), the so-called "strongest" report
* another one for the most detailed line search (more function evaluations
  = easier to understand what's going on) which triggered the test #0
  criteria, the so-called "longest" report

In both cases the following fields are returned:

* positive - is TRUE when the test flagged a suspicious point; FALSE if
  the test did not notice anything (in the latter case the fields below
  are empty).
* fidx - is an index of the function (0 for the target function, 1 or
  higher for nonlinear constraints) which is suspected of being "non-C0"
* x0[], d[] - arrays of length N which store the initial point and the
  direction for the line search (d[] can be normalized, but does not have
  to be)
* stp[], f[] - arrays of length CNT which store step lengths and function
  values at these points; f[i] is evaluated at x0+stp[i]*d.
* stpidxa, stpidxb - we suspect that the function violates C0 continuity
  between steps #stpidxa and #stpidxb (usually we have stpidxb=stpidxa+3,
  with the most likely position of the violation between stpidxa+1 and
  stpidxa+2).
* inneriter, outeriter - inner and outer iteration indexes (can be -1 if
  no iteration information was specified)

You can plot the function values stored in the stp[] and f[] arrays and
study the behavior of your function with your own eyes, just to be sure
that the test correctly reported a C0 violation.

  -- ALGLIB --
     Copyright 19.11.2018 by Bochkanov Sergey
*************************************************************************/
class optguardnonc0report { public: optguardnonc0report(); optguardnonc0report(const optguardnonc0report &rhs); optguardnonc0report& operator=(const optguardnonc0report &rhs); virtual ~optguardnonc0report(); bool positive; ae_int_t fidx; real_1d_array x0; real_1d_array d; ae_int_t n; real_1d_array stp; real_1d_array f; ae_int_t cnt; ae_int_t stpidxa; ae_int_t stpidxb; ae_int_t inneriter; ae_int_t outeriter; };
/*************************************************************************
This structure is used for detailed reporting about a suspected C1
continuity violation as flagged by C1 test #0 (OptGuard has several tests
for C1 continuity, this report is used by #0).

=== WHAT IS TESTED =======================================================

C1 test #0 studies function values (not the gradient!) obtained during
line searches and monitors the behavior of the directional derivative
estimate. This test is less powerful than test #1, but it does not depend
on gradient values and thus is more robust against artifacts introduced by
numerical differentiation.

=== WHAT IS REPORTED =====================================================

Actually, the report retrieval function returns TWO report structures:

* one for the most suspicious point found so far (the one with the highest
  change in the directional derivative), the so-called "strongest" report
* another one for the most detailed line search (more function evaluations
  = easier to understand what's going on) which triggered the test #0
  criteria, the so-called "longest" report

In both cases the following fields are returned:

* positive - is TRUE when the test flagged a suspicious point; FALSE if
  the test did not notice anything (in the latter case the fields below
  are empty).
* fidx - is an index of the function (0 for the target function, 1 or
  higher for nonlinear constraints) which is suspected of being "non-C1"
* x0[], d[] - arrays of length N which store the initial point and the
  direction for the line search (d[] can be normalized, but does not have
  to be)
* stp[], f[] - arrays of length CNT which store step lengths and function
  values at these points; f[i] is evaluated at x0+stp[i]*d.
* stpidxa, stpidxb - we suspect that the function violates C1 continuity
  between steps #stpidxa and #stpidxb (usually we have stpidxb=stpidxa+3,
  with the most likely position of the violation between stpidxa+1 and
  stpidxa+2).
* inneriter, outeriter - inner and outer iteration indexes (can be -1 if
  no iteration information was specified)

You can plot the function values stored in the stp[] and f[] arrays and
study the behavior of your function with your own eyes, just to be sure
that the test correctly reported a C1 violation.

  -- ALGLIB --
     Copyright 19.11.2018 by Bochkanov Sergey
*************************************************************************/
class optguardnonc1test0report { public: optguardnonc1test0report(); optguardnonc1test0report(const optguardnonc1test0report &rhs); optguardnonc1test0report& operator=(const optguardnonc1test0report &rhs); virtual ~optguardnonc1test0report(); bool positive; ae_int_t fidx; real_1d_array x0; real_1d_array d; ae_int_t n; real_1d_array stp; real_1d_array f; ae_int_t cnt; ae_int_t stpidxa; ae_int_t stpidxb; ae_int_t inneriter; ae_int_t outeriter; };
/*************************************************************************
This structure is used for detailed reporting about a suspected C1
continuity violation as flagged by C1 test #1 (OptGuard has several tests
for C1 continuity, this report is used by #1).

=== WHAT IS TESTED =======================================================

C1 test #1 studies individual components of the gradient as recorded
during line searches. Upon discovering a discontinuity in the gradient
this test records the specific component which was suspected (or the one
with the highest indication of discontinuity if multiple components are
suspected).

When a precise analytic gradient is provided this test is more powerful
than test #0, which works with function values and ignores the
user-provided gradient. However, test #0 becomes more powerful when
numerical differentiation is employed (in such cases test #1 detects
higher levels of numerical noise and becomes too conservative).

This test also tells the specific components of the gradient which violate
C1 continuity, which makes it more informative than #0, which just tells
that continuity is violated.

=== WHAT IS REPORTED =====================================================

Actually, the report retrieval function returns TWO report structures:

* one for the most suspicious point found so far (the one with the highest
  change in the directional derivative), the so-called "strongest" report
* another one for the most detailed line search (more function evaluations
  = easier to understand what's going on) which triggered the test #1
  criteria, the so-called "longest" report

In both cases the following fields are returned:

* positive - is TRUE when the test flagged a suspicious point; FALSE if
  the test did not notice anything (in the latter case the fields below
  are empty).
* fidx - is an index of the function (0 for the target function, 1 or
  higher for nonlinear constraints) which is suspected of being "non-C1"
* vidx - is an index of the variable in [0,N) with a nonsmooth derivative
* x0[], d[] - arrays of length N which store the initial point and the
  direction for the line search (d[] can be normalized, but does not have
  to be)
* stp[], g[] - arrays of length CNT which store step lengths and gradient
  values at these points; g[i] is evaluated at x0+stp[i]*d and contains
  the vidx-th component of the gradient.
* stpidxa, stpidxb - we suspect that the function violates C1 continuity
  between steps #stpidxa and #stpidxb (usually we have stpidxb=stpidxa+3,
  with the most likely position of the violation between stpidxa+1 and
  stpidxa+2).
* inneriter, outeriter - inner and outer iteration indexes (can be -1 if
  no iteration information was specified)

You can plot the gradient values stored in the stp[] and g[] arrays and
study the behavior of your function with your own eyes, just to be sure
that the test correctly reported a C1 violation.

  -- ALGLIB --
     Copyright 19.11.2018 by Bochkanov Sergey
*************************************************************************/
class optguardnonc1test1report { public: optguardnonc1test1report(); optguardnonc1test1report(const optguardnonc1test1report &rhs); optguardnonc1test1report& operator=(const optguardnonc1test1report &rhs); virtual ~optguardnonc1test1report(); bool positive; ae_int_t fidx; ae_int_t vidx; real_1d_array x0; real_1d_array d; ae_int_t n; real_1d_array stp; real_1d_array g; ae_int_t cnt; ae_int_t stpidxa; ae_int_t stpidxb; ae_int_t inneriter; ae_int_t outeriter; };
/*************************************************************************
This structure is used to store an OptGuard report, i.e. a report on the
properties of the nonlinear function being optimized with ALGLIB.

After you tell your optimizer to activate OptGuard, this technology starts
to silently monitor function values and gradients/Jacobians being passed
all around during your optimization session. Depending on the specific set
of checks enabled, OptGuard may perform additional function evaluations
(say, about 3*N evaluations if you want to check the analytic gradient for
errors).

Upon discovering that something strange happens (function values and/or
gradient components change too sharply and/or unexpectedly) OptGuard sets
one of the "suspicion flags" (without interrupting the optimization
session). After optimization is done, you can examine the OptGuard report.

The following report fields can be set:
* nonc0suspected
* nonc1suspected
* badgradsuspected

=== WHAT CAN BE DETECTED WITH THE OptGuard INTEGRITY CHECKER =============

The following types of errors in your target function (constraints) can
be caught:
a) discontinuous functions ("non-C0" part of the report)
b) functions with a discontinuous derivative ("non-C1" part of the report)
c) errors in the analytic gradient provided by the user

These types of errors result in the optimizer stopping well before
reaching the solution (most often - right after encountering a
discontinuity).

Type A errors are usually coding errors in the implementation of the
target function. Most "normal" problems involve continuous functions, and
anyway you can't reliably optimize a discontinuous function.

Type B errors are either coding errors or (in case the code itself is
correct) evidence of the fact that your problem is an "incorrect" one.
Most optimizers (except for the ones provided by the MINNS subpackage) do
not support nonsmooth problems.

Type C errors are coding errors which often prevent the optimizer from
making even one step, or result in the optimizer stopping too early, as
soon as the actual descent direction becomes too different from the one
suggested by the user-supplied gradient.

=== WHAT IS REPORTED =====================================================

The following set of report fields deals with discontinuous target
functions, ones not belonging to the C0 continuity class:

* nonc0suspected - is a flag which is set upon discovering some indication
  of discontinuity. If this flag is false, the rest of the "non-C0" fields
  should be ignored
* nonc0fidx - is an index of the function (0 for the target function, 1 or
  higher for nonlinear constraints) which is suspected of being "non-C0"
* nonc0lipschitzc - a Lipschitz constant for a function which was
  suspected of being non-continuous.
* nonc0test0positive - set to indicate the specific test which detected
  the continuity violation (test #0)

The following set of report fields deals with a discontinuous
gradient/Jacobian, i.e. with functions violating C1 continuity:

* nonc1suspected - is a flag which is set upon discovering some indication
  of discontinuity. If this flag is false, the rest of the "non-C1" fields
  should be ignored
* nonc1fidx - is an index of the function (0 for the target function, 1 or
  higher for nonlinear constraints) which is suspected of being "non-C1"
* nonc1lipschitzc - a Lipschitz constant for a function gradient which was
  suspected of being non-smooth.
* nonc1test0positive - set to indicate the specific test which detected
  the continuity violation (test #0)
* nonc1test1positive - set to indicate the specific test which detected
  the continuity violation (test #1)

The following set of report fields deals with errors in the gradient:

* badgradsuspected - is a flag which is set upon discovering an error in
  the analytic gradient supplied by the user
* badgradfidx - index of the function with the bad gradient (0 for the
  target function, 1 or higher for nonlinear constraints)
* badgradvidx - index of the variable
* badgradxbase - location where the Jacobian is tested
* the following matrices store the user-supplied Jacobian and its
  numerical differentiation version (which is assumed to be free from
  coding errors), both of them computed near the initial point:
  * badgraduser, an array[K,N], analytic Jacobian supplied by the user
  * badgradnum, an array[K,N], numeric Jacobian computed by ALGLIB

Here K is the total number of nonlinear functions (target + nonlinear
constraints) and N is the number of variables. The element of
badgraduser[] with index [badgradfidx,badgradvidx] is assumed to be wrong.

A more detailed error log can be obtained from the optimizer by explicitly
requesting reports for tests C0.0, C1.0, C1.1.

  -- ALGLIB --
     Copyright 19.11.2018 by Bochkanov Sergey
*************************************************************************/
class optguardreport { public: optguardreport(); optguardreport(const optguardreport &rhs); optguardreport& operator=(const optguardreport &rhs); virtual ~optguardreport(); bool nonc0suspected; bool nonc0test0positive; ae_int_t nonc0fidx; double nonc0lipschitzc; bool nonc1suspected; bool nonc1test0positive; bool nonc1test1positive; ae_int_t nonc1fidx; double nonc1lipschitzc; bool badgradsuspected; ae_int_t badgradfidx; ae_int_t badgradvidx; real_1d_array badgradxbase; real_2d_array badgraduser; real_2d_array badgradnum; };
lptestproblem
qpxproblem
lptestproblemcreate
lptestproblemgetm
lptestproblemgetn
lptestproblemgettargetf
lptestproblemhasknowntarget
lptestproblemserialize
lptestproblemsetbc
lptestproblemsetcost
lptestproblemsetlc2
lptestproblemsetscale
lptestproblemunserialize
qpxproblemaddqc2
qpxproblemcreate
qpxproblemgetbc
qpxproblemgetinitialpoint
qpxproblemgetlc2
qpxproblemgetlinearterm
qpxproblemgetmcc
qpxproblemgetmlc
qpxproblemgetmqc
qpxproblemgetn
qpxproblemgetorigin
qpxproblemgetqc2i
qpxproblemgetquadraticterm
qpxproblemgetscale
qpxproblemgettotalconstraints
qpxproblemhasinitialpoint
qpxproblemhasorigin
qpxproblemhasquadraticterm
qpxproblemhasscale
qpxproblemisquadraticobjective
qpxproblemsetbc
qpxproblemsetinitialpoint
qpxproblemsetlc2
qpxproblemsetlinearterm
qpxproblemsetorigin
qpxproblemsetquadraticterm
qpxproblemsetscale
xdbgminlpcreatefromtestproblem
/************************************************************************* This is a test problem class intended for internal performance tests. Never use it directly in your projects. *************************************************************************/
class lptestproblem { public: lptestproblem(); lptestproblem(const lptestproblem &rhs); lptestproblem& operator=(const lptestproblem &rhs); virtual ~lptestproblem(); };
/************************************************************************* A general QP problem (a linear/quadratic target subject to a mix of box, linear, quadratic and conic constraints). -- ALGLIB -- Copyright 20.07.2021 by Bochkanov Sergey *************************************************************************/
class qpxproblem { public: qpxproblem(); qpxproblem(const qpxproblem &rhs); qpxproblem& operator=(const qpxproblem &rhs); virtual ~qpxproblem(); };
/************************************************************************* Initialize test LP problem. This function is intended for internal use by ALGLIB. -- ALGLIB -- Copyright 20.07.2021 by Bochkanov Sergey *************************************************************************/
void lptestproblemcreate(const ae_int_t n, const bool hasknowntarget, const double targetf, lptestproblem &p, const xparams _xparams = alglib::xdefault);
/************************************************************************* Query test problem info This function is intended for internal use by ALGLIB. -- ALGLIB -- Copyright 20.07.2021 by Bochkanov Sergey *************************************************************************/
ae_int_t lptestproblemgetm(lptestproblem &p, const xparams _xparams = alglib::xdefault);
/************************************************************************* Query test problem info This function is intended for internal use by ALGLIB. -- ALGLIB -- Copyright 20.07.2021 by Bochkanov Sergey *************************************************************************/
ae_int_t lptestproblemgetn(lptestproblem &p, const xparams _xparams = alglib::xdefault);
/************************************************************************* Query test problem info This function is intended for internal use by ALGLIB. -- ALGLIB -- Copyright 20.07.2021 by Bochkanov Sergey *************************************************************************/
double lptestproblemgettargetf(lptestproblem &p, const xparams _xparams = alglib::xdefault);
/************************************************************************* Query test problem info This function is intended for internal use by ALGLIB. -- ALGLIB -- Copyright 20.07.2021 by Bochkanov Sergey *************************************************************************/
bool lptestproblemhasknowntarget(lptestproblem &p, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function serializes data structure to string/stream. Important properties of s_out: * it contains alphanumeric characters, dots, underscores, minus signs * these symbols are grouped into words, which are separated by spaces and Windows-style (CR+LF) newlines * although serializer uses spaces and CR+LF as separators, you can replace any separator character by arbitrary combination of spaces, tabs, Windows or Unix newlines. It allows flexible reformatting of the string in case you want to include it into a text or XML file. But you should not insert separators into the middle of the "words" nor should you change the case of letters. * s_out can be freely moved between 32-bit and 64-bit systems, little and big endian machines, and so on. You can serialize structure on 32-bit machine and unserialize it on 64-bit one (or vice versa), or serialize it on SPARC and unserialize on x86. You can also serialize it in C++ version of ALGLIB and unserialize it in C# one, and vice versa. *************************************************************************/
void lptestproblemserialize(const lptestproblem &obj, std::string &s_out); void lptestproblemserialize(const lptestproblem &obj, std::ostream &s_out);
/************************************************************************* Set box constraints for test LP problem This function is intended for internal use by ALGLIB. -- ALGLIB -- Copyright 20.07.2021 by Bochkanov Sergey *************************************************************************/
void lptestproblemsetbc(lptestproblem &p, const real_1d_array &bndl, const real_1d_array &bndu, const xparams _xparams = alglib::xdefault);
/************************************************************************* Set cost for test LP problem This function is intended for internal use by ALGLIB. -- ALGLIB -- Copyright 20.07.2021 by Bochkanov Sergey *************************************************************************/
void lptestproblemsetcost(lptestproblem &p, const real_1d_array &c, const xparams _xparams = alglib::xdefault);
/************************************************************************* Set linear constraints for test LP problem This function is intended for internal use by ALGLIB. -- ALGLIB -- Copyright 20.07.2021 by Bochkanov Sergey *************************************************************************/
void lptestproblemsetlc2(lptestproblem &p, const sparsematrix &a, const real_1d_array &al, const real_1d_array &au, const ae_int_t m, const xparams _xparams = alglib::xdefault);
/************************************************************************* Set scale for test LP problem This function is intended for internal use by ALGLIB. -- ALGLIB -- Copyright 20.07.2021 by Bochkanov Sergey *************************************************************************/
void lptestproblemsetscale(lptestproblem &p, const real_1d_array &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function unserializes data structure from string/stream. *************************************************************************/
void lptestproblemunserialize(const std::string &s_in, lptestproblem &obj); void lptestproblemunserialize(const std::istream &s_in, lptestproblem &obj);
/************************************************************************* Append two-sided quadratic constraint, same format as minqpaddqc2() -- ALGLIB -- Copyright 19.08.2024 by Bochkanov Sergey *************************************************************************/
void qpxproblemaddqc2(qpxproblem &p, const sparsematrix &q, const bool isupper, const real_1d_array &b, const double cl, const double cu, const bool applyorigin, const xparams _xparams = alglib::xdefault);
/************************************************************************* Initialize QPX problem. -- ALGLIB -- Copyright 25.08.2024 by Bochkanov Sergey *************************************************************************/
void qpxproblemcreate(const ae_int_t n, qpxproblem &p, const xparams _xparams = alglib::xdefault);
/************************************************************************* Get box constraints -- ALGLIB -- Copyright 20.08.2024 by Bochkanov Sergey *************************************************************************/
void qpxproblemgetbc(qpxproblem &p, real_1d_array &bndl, real_1d_array &bndu, const xparams _xparams = alglib::xdefault);
/************************************************************************* Get initial point -- ALGLIB -- Copyright 25.08.2024 by Bochkanov Sergey *************************************************************************/
void qpxproblemgetinitialpoint(qpxproblem &p, real_1d_array &x0, const xparams _xparams = alglib::xdefault);
/************************************************************************* Get linear constraints -- ALGLIB -- Copyright 20.08.2024 by Bochkanov Sergey *************************************************************************/
void qpxproblemgetlc2(qpxproblem &p, sparsematrix &a, real_1d_array &al, real_1d_array &au, ae_int_t &m, const xparams _xparams = alglib::xdefault);
/************************************************************************* Get linear term -- ALGLIB -- Copyright 25.08.2024 by Bochkanov Sergey *************************************************************************/
void qpxproblemgetlinearterm(qpxproblem &p, real_1d_array &c, const xparams _xparams = alglib::xdefault);
/************************************************************************* Get conic constraints count -- ALGLIB -- Copyright 25.08.2024 by Bochkanov Sergey *************************************************************************/
ae_int_t qpxproblemgetmcc(qpxproblem &p, const xparams _xparams = alglib::xdefault);
/************************************************************************* Get linear constraints count -- ALGLIB -- Copyright 25.08.2024 by Bochkanov Sergey *************************************************************************/
ae_int_t qpxproblemgetmlc(qpxproblem &p, const xparams _xparams = alglib::xdefault);
/************************************************************************* Get quadratic constraints count -- ALGLIB -- Copyright 25.08.2024 by Bochkanov Sergey *************************************************************************/
ae_int_t qpxproblemgetmqc(qpxproblem &p, const xparams _xparams = alglib::xdefault);
/************************************************************************* Get variables count -- ALGLIB -- Copyright 25.08.2024 by Bochkanov Sergey *************************************************************************/
ae_int_t qpxproblemgetn(qpxproblem &p, const xparams _xparams = alglib::xdefault);
/************************************************************************* Get origin -- ALGLIB -- Copyright 25.08.2024 by Bochkanov Sergey *************************************************************************/
void qpxproblemgetorigin(qpxproblem &p, real_1d_array &xorigin, const xparams _xparams = alglib::xdefault);
/************************************************************************* Get IDX-th two-sided quadratic constraint, same format as minqpaddqc2(), except for the fact that it always returns isUpper=False, even if the original matrix was an upper triangular one. NOTE: this function is not optimized for big matrices. Whilst still having O(max(N,Nonzeros)) running time, it may be somewhat slow due to dynamic structures being used internally. -- ALGLIB -- Copyright 19.08.2024 by Bochkanov Sergey *************************************************************************/
void qpxproblemgetqc2i(qpxproblem &p, const ae_int_t idx, sparsematrix &q, bool &isupper, real_1d_array &b, double &cl, double &cu, bool &applyorigin, const xparams _xparams = alglib::xdefault);
/************************************************************************* Get quadratic term, returns zero matrix if no quadratic term is present -- ALGLIB -- Copyright 25.08.2024 by Bochkanov Sergey *************************************************************************/
void qpxproblemgetquadraticterm(qpxproblem &p, sparsematrix &q, bool &isupper, const xparams _xparams = alglib::xdefault);
/************************************************************************* Get scale -- ALGLIB -- Copyright 25.08.2024 by Bochkanov Sergey *************************************************************************/
void qpxproblemgetscale(qpxproblem &p, real_1d_array &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* Get total constraints count -- ALGLIB -- Copyright 25.08.2024 by Bochkanov Sergey *************************************************************************/
ae_int_t qpxproblemgettotalconstraints(qpxproblem &p, const xparams _xparams = alglib::xdefault);
/************************************************************************* Get initial point presence -- ALGLIB -- Copyright 25.08.2024 by Bochkanov Sergey *************************************************************************/
bool qpxproblemhasinitialpoint(qpxproblem &p, const xparams _xparams = alglib::xdefault);
/************************************************************************* Get origin presence -- ALGLIB -- Copyright 25.08.2024 by Bochkanov Sergey *************************************************************************/
bool qpxproblemhasorigin(qpxproblem &p, const xparams _xparams = alglib::xdefault);
/************************************************************************* Returns False if no quadratic term was specified, or quadratic term is numerically zero. -- ALGLIB -- Copyright 25.08.2024 by Bochkanov Sergey *************************************************************************/
bool qpxproblemhasquadraticterm(qpxproblem &p, const xparams _xparams = alglib::xdefault);
/************************************************************************* Get scale presence -- ALGLIB -- Copyright 25.08.2024 by Bochkanov Sergey *************************************************************************/
bool qpxproblemhasscale(qpxproblem &p, const xparams _xparams = alglib::xdefault);
/************************************************************************* Returns objective type: True for zero/linear/constant. Present version does not return False. -- ALGLIB -- Copyright 25.08.2024 by Bochkanov Sergey *************************************************************************/
bool qpxproblemisquadraticobjective(qpxproblem &p, const xparams _xparams = alglib::xdefault);
/************************************************************************* Set box constraints -- ALGLIB -- Copyright 20.08.2024 by Bochkanov Sergey *************************************************************************/
void qpxproblemsetbc(qpxproblem &p, const real_1d_array &bndl, const real_1d_array &bndu, const xparams _xparams = alglib::xdefault);
/************************************************************************* Set initial point -- ALGLIB -- Copyright 25.08.2024 by Bochkanov Sergey *************************************************************************/
void qpxproblemsetinitialpoint(qpxproblem &p, const real_1d_array &x0, const xparams _xparams = alglib::xdefault);
/************************************************************************* Set linear constraints -- ALGLIB -- Copyright 20.08.2024 by Bochkanov Sergey *************************************************************************/
void qpxproblemsetlc2(qpxproblem &p, const sparsematrix &a, const real_1d_array &al, const real_1d_array &au, const ae_int_t m, const xparams _xparams = alglib::xdefault);
/************************************************************************* Set linear term -- ALGLIB -- Copyright 25.08.2024 by Bochkanov Sergey *************************************************************************/
void qpxproblemsetlinearterm(qpxproblem &p, const real_1d_array &c, const xparams _xparams = alglib::xdefault);
/************************************************************************* Set origin -- ALGLIB -- Copyright 25.08.2024 by Bochkanov Sergey *************************************************************************/
void qpxproblemsetorigin(qpxproblem &p, const real_1d_array &xorigin, const xparams _xparams = alglib::xdefault);
/************************************************************************* Set quadratic term; Q can be in any sparse matrix format. Only one triangle (lower or upper) is referenced by this function. -- ALGLIB -- Copyright 25.08.2024 by Bochkanov Sergey *************************************************************************/
void qpxproblemsetquadraticterm(qpxproblem &p, const sparsematrix &q, const bool isupper, const xparams _xparams = alglib::xdefault);
/************************************************************************* Set scale -- ALGLIB -- Copyright 25.08.2024 by Bochkanov Sergey *************************************************************************/
void qpxproblemsetscale(qpxproblem &p, const real_1d_array &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This is an internal function intended to be used only by ALGLIB itself. Although for technical reasons it is made publicly available (and has its own manual entry), you should never call it. -- ALGLIB -- Copyright 11.01.2011 by Bochkanov Sergey *************************************************************************/
void xdbgminlpcreatefromtestproblem(const lptestproblem &p, minlpstate &state, const xparams _xparams = alglib::xdefault);
cmatrixlq
cmatrixlqunpackl
cmatrixlqunpackq
cmatrixqr
cmatrixqrunpackq
cmatrixqrunpackr
hmatrixtd
hmatrixtdunpackq
rmatrixbd
rmatrixbdmultiplybyp
rmatrixbdmultiplybyq
rmatrixbdunpackdiagonals
rmatrixbdunpackpt
rmatrixbdunpackq
rmatrixhessenberg
rmatrixhessenbergunpackh
rmatrixhessenbergunpackq
rmatrixlq
rmatrixlqunpackl
rmatrixlqunpackq
rmatrixqr
rmatrixqrunpackq
rmatrixqrunpackr
smatrixtd
smatrixtdunpackq
/************************************************************************* LQ decomposition of a rectangular complex matrix of size MxN Input parameters: A - matrix A whose indexes range within [0..M-1, 0..N-1] M - number of rows in matrix A. N - number of columns in matrix A. Output parameters: A - matrices Q and L in compact form Tau - array of scalar factors which are used to form matrix Q. Array whose indexes range within [0..Min(M,N)-1] Matrix A is represented as A = LQ, where Q is a unitary matrix of size MxM and L is a lower triangular (or lower trapezoidal) matrix of size MxN. ! FREE EDITION OF ALGLIB: ! ! Free Edition of ALGLIB supports following important features for this ! function: ! * C++ version: x64 SIMD support using C++ intrinsics ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics ! ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB ! Reference Manual in order to find out how to activate SIMD support ! in ALGLIB. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University September 30, 1994 *************************************************************************/
void cmatrixlq(complex_2d_array &a, const ae_int_t m, const ae_int_t n, complex_1d_array &tau, const xparams _xparams = alglib::xdefault);
/************************************************************************* Unpacking of matrix L from the LQ decomposition of a matrix A Input parameters: A - matrices Q and L in compact form. Output of CMatrixLQ subroutine. M - number of rows in given matrix A. M>=0. N - number of columns in given matrix A. N>=0. Output parameters: L - matrix L, array[0..M-1, 0..N-1]. -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/
void cmatrixlqunpackl(const complex_2d_array &a, const ae_int_t m, const ae_int_t n, complex_2d_array &l, const xparams _xparams = alglib::xdefault);
/************************************************************************* Partial unpacking of matrix Q from LQ decomposition of a complex matrix A. Input parameters: A - matrices Q and L in compact form. Output of CMatrixLQ subroutine. M - number of rows in matrix A. M>=0. N - number of columns in matrix A. N>=0. Tau - scalar factors which are used to form Q. Output of CMatrixLQ subroutine. QRows - required number of rows in matrix Q. N>=QRows>=0. Output parameters: Q - first QRows rows of matrix Q. Array whose index ranges within [0..QRows-1, 0..N-1]. If QRows=0, array isn't changed. ! FREE EDITION OF ALGLIB: ! ! Free Edition of ALGLIB supports following important features for this ! function: ! * C++ version: x64 SIMD support using C++ intrinsics ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics ! ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB ! Reference Manual in order to find out how to activate SIMD support ! in ALGLIB. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/
void cmatrixlqunpackq(const complex_2d_array &a, const ae_int_t m, const ae_int_t n, const complex_1d_array &tau, const ae_int_t qrows, complex_2d_array &q, const xparams _xparams = alglib::xdefault);
/************************************************************************* QR decomposition of a rectangular complex matrix of size MxN Input parameters: A - matrix A whose indexes range within [0..M-1, 0..N-1] M - number of rows in matrix A. N - number of columns in matrix A. Output parameters: A - matrices Q and R in compact form Tau - array of scalar factors which are used to form matrix Q. Array whose indexes range within [0..Min(M,N)-1] Matrix A is represented as A = QR, where Q is a unitary matrix of size MxM and R is an upper triangular (or upper trapezoidal) matrix of size MxN. ! FREE EDITION OF ALGLIB: ! ! Free Edition of ALGLIB supports following important features for this ! function: ! * C++ version: x64 SIMD support using C++ intrinsics ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics ! ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB ! Reference Manual in order to find out how to activate SIMD support ! in ALGLIB. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University September 30, 1994 *************************************************************************/
void cmatrixqr(complex_2d_array &a, const ae_int_t m, const ae_int_t n, complex_1d_array &tau, const xparams _xparams = alglib::xdefault);
/************************************************************************* Partial unpacking of matrix Q from QR decomposition of a complex matrix A. Input parameters: A - matrices Q and R in compact form. Output of CMatrixQR subroutine . M - number of rows in matrix A. M>=0. N - number of columns in matrix A. N>=0. Tau - scalar factors which are used to form Q. Output of CMatrixQR subroutine . QColumns - required number of columns in matrix Q. M>=QColumns>=0. Output parameters: Q - first QColumns columns of matrix Q. Array whose index ranges within [0..M-1, 0..QColumns-1]. If QColumns=0, array isn't changed. ! FREE EDITION OF ALGLIB: ! ! Free Edition of ALGLIB supports following important features for this ! function: ! * C++ version: x64 SIMD support using C++ intrinsics ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics ! ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB ! Reference Manual in order to find out how to activate SIMD support ! in ALGLIB. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/
void cmatrixqrunpackq(const complex_2d_array &a, const ae_int_t m, const ae_int_t n, const complex_1d_array &tau, const ae_int_t qcolumns, complex_2d_array &q, const xparams _xparams = alglib::xdefault);
/************************************************************************* Unpacking of matrix R from the QR decomposition of a matrix A Input parameters: A - matrices Q and R in compact form. Output of CMatrixQR subroutine. M - number of rows in given matrix A. M>=0. N - number of columns in given matrix A. N>=0. Output parameters: R - matrix R, array[0..M-1, 0..N-1]. -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/
void cmatrixqrunpackr(const complex_2d_array &a, const ae_int_t m, const ae_int_t n, complex_2d_array &r, const xparams _xparams = alglib::xdefault);
/************************************************************************* Reduction of a Hermitian matrix which is given by its higher or lower triangular part to a real tridiagonal matrix using unitary similarity transformation: Q'*A*Q = T. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. Input parameters: A - matrix to be transformed array with elements [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - storage format. If IsUpper = True, then matrix A is given by its upper triangle, and the lower triangle is not used and not modified by the algorithm, and vice versa if IsUpper = False. Output parameters: A - matrices T and Q in compact form (see below) Tau - array of factors which form the matrices H(i) array with elements [0..N-2]. D - main diagonal of real symmetric matrix T. array with elements [0..N-1]. E - secondary diagonal of real symmetric matrix T. array with elements [0..N-2]. If IsUpper=True, the matrix Q is represented as a product of elementary reflectors Q = H(n-2) . . . H(1) H(0). Each H(i) has the form H(i) = I - tau * v * v' where tau is a complex scalar, and v is a complex vector with v(i+1:n-1) = 0, v(i) = 1; v(0:i-1) is stored on exit in A(0:i-1,i+1), and tau in TAU(i). If IsUpper=False, the matrix Q is represented as a product of elementary reflectors Q = H(0) H(1) . . . H(n-2). Each H(i) has the form H(i) = I - tau * v * v' where tau is a complex scalar, and v is a complex vector with v(0:i) = 0, v(i+1) = 1; v(i+2:n-1) is stored on exit in A(i+2:n-1,i), and tau in TAU(i). 
The contents of A on exit are illustrated by the following examples with n = 5:

    if UPLO = 'U':                     if UPLO = 'L':

    ( d   e   v1  v2  v3 )             ( d                  )
    (     d   e   v2  v3 )             ( e   d              )
    (         d   e   v3 )             ( v0  e   d          )
    (             d   e  )             ( v0  v1  e   d      )
    (                 d  )             ( v0  v1  v2  e   d  )

where d and e denote diagonal and off-diagonal elements of T, and vi denotes an element of the vector defining H(i). -- LAPACK routine (version 3.0) -- Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd., Courant Institute, Argonne National Lab, and Rice University October 31, 1992 *************************************************************************/
void hmatrixtd(complex_2d_array &a, const ae_int_t n, const bool isupper, complex_1d_array &tau, real_1d_array &d, real_1d_array &e, const xparams _xparams = alglib::xdefault);
/************************************************************************* Unpacking matrix Q which reduces a Hermitian matrix to a real tridiagonal form. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. Input parameters: A - the result of a HMatrixTD subroutine N - size of matrix A. IsUpper - storage format (a parameter of HMatrixTD subroutine) Tau - the result of a HMatrixTD subroutine Output parameters: Q - transformation matrix. array with elements [0..N-1, 0..N-1]. -- ALGLIB -- Copyright 2005-2010 by Bochkanov Sergey *************************************************************************/
void hmatrixtdunpackq(const complex_2d_array &a, const ae_int_t n, const bool isupper, const complex_1d_array &tau, complex_2d_array &q, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Reduction of a rectangular matrix to bidiagonal form

The algorithm reduces the rectangular matrix A to bidiagonal form by
orthogonal transformations P and Q: A = Q*B*(P^T).

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes following important improvements
! of this function:
! * high-performance native backend with same C# interface (C# version)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend you to read 'Working with commercial version' section of
! ALGLIB Reference Manual in order to find out how to use performance-
! related features provided by commercial edition of ALGLIB.

Input parameters:
    A       -   source matrix. array[0..M-1, 0..N-1]
    M       -   number of rows in matrix A.
    N       -   number of columns in matrix A.

Output parameters:
    A       -   matrices Q, B, P in compact form (see below).
    TauQ    -   scalar factors which are used to form matrix Q.
    TauP    -   scalar factors which are used to form matrix P.

The main diagonal and one of the secondary diagonals of matrix A are
replaced with bidiagonal matrix B. Other elements contain elementary
reflections which form MxM matrix Q and NxN matrix P, respectively.

If M>=N, B is the upper bidiagonal MxN matrix and is stored in the
corresponding elements of matrix A. Matrix Q is represented as a product
of elementary reflections Q = H(0)*H(1)*...*H(n-1), where H(i) =
1-tau*v*v'. Here tau is a scalar which is stored in TauQ[i], and vector v
has the following structure: v(0:i-1)=0, v(i)=1, v(i+1:m-1) is stored in
elements A(i+1:m-1,i). Matrix P is as follows: P = G(0)*G(1)*...*G(n-2),
where G(i) = 1 - tau*u*u'. Tau is stored in TauP[i], u(0:i)=0, u(i+1)=1,
u(i+2:n-1) is stored in elements A(i,i+2:n-1).

If M<N, B is the lower bidiagonal MxN matrix and is stored in the
corresponding elements of matrix A. Q = H(0)*H(1)*...*H(m-2), where
H(i) = 1 - tau*v*v', tau is stored in TauQ, v(0:i)=0, v(i+1)=1,
v(i+2:m-1) is stored in elements A(i+2:m-1,i). P = G(0)*G(1)*...*G(m-1),
G(i) = 1-tau*u*u', tau is stored in TauP, u(0:i-1)=0, u(i)=1, u(i+1:n-1)
is stored in A(i,i+1:n-1).

EXAMPLE:

m=6, n=5 (m > n):               m=5, n=6 (m < n):

(  d   e   u1  u1  u1 )         (  d   u1  u1  u1  u1  u1 )
(  v1  d   e   u2  u2 )         (  e   d   u2  u2  u2  u2 )
(  v1  v2  d   e   u3 )         (  v1  e   d   u3  u3  u3 )
(  v1  v2  v3  d   e  )         (  v1  v2  e   d   u4  u4 )
(  v1  v2  v3  v4  d  )         (  v1  v2  v3  e   d   u5 )
(  v1  v2  v3  v4  v5 )

Here vi and ui are vectors which form H(i) and G(i), and d and e are the
diagonal and off-diagonal elements of matrix B.

  -- LAPACK routine (version 3.0) --
     Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd.,
     Courant Institute, Argonne National Lab, and Rice University
     September 30, 1994.
     Sergey Bochkanov, ALGLIB project, translation from FORTRAN to
     pseudocode, 2007-2010.
*************************************************************************/
void rmatrixbd(real_2d_array &a, const ae_int_t m, const ae_int_t n, real_1d_array &tauq, real_1d_array &taup, const xparams _xparams = alglib::xdefault);
/************************************************************************* Multiplication by matrix P which reduces matrix A to bidiagonal form. The algorithm allows pre- or post-multiply by P or P'. Input parameters: QP - matrices Q and P in compact form. Output of RMatrixBD subroutine. M - number of rows in matrix A. N - number of columns in matrix A. TAUP - scalar factors which are used to form P. Output of RMatrixBD subroutine. Z - multiplied matrix. Array whose indexes range within [0..ZRows-1,0..ZColumns-1]. ZRows - number of rows in matrix Z. If FromTheRight=False, ZRows=N, otherwise ZRows can be arbitrary. ZColumns - number of columns in matrix Z. If FromTheRight=True, ZColumns=N, otherwise ZColumns can be arbitrary. FromTheRight - pre- or post-multiply. DoTranspose - multiply by P or P'. Output parameters: Z - product of Z and P. Array whose indexes range within [0..ZRows-1,0..ZColumns-1]. If ZRows=0 or ZColumns=0, the array is not modified. -- ALGLIB -- 2005-2010 Bochkanov Sergey *************************************************************************/
void rmatrixbdmultiplybyp(const real_2d_array &qp, const ae_int_t m, const ae_int_t n, const real_1d_array &taup, real_2d_array &z, const ae_int_t zrows, const ae_int_t zcolumns, const bool fromtheright, const bool dotranspose, const xparams _xparams = alglib::xdefault);
/************************************************************************* Multiplication by matrix Q which reduces matrix A to bidiagonal form. The algorithm allows pre- or post-multiply by Q or Q'. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. Input parameters: QP - matrices Q and P in compact form. Output of ToBidiagonal subroutine. M - number of rows in matrix A. N - number of columns in matrix A. TAUQ - scalar factors which are used to form Q. Output of ToBidiagonal subroutine. Z - multiplied matrix. array[0..ZRows-1,0..ZColumns-1] ZRows - number of rows in matrix Z. If FromTheRight=False, ZRows=M, otherwise ZRows can be arbitrary. ZColumns - number of columns in matrix Z. If FromTheRight=True, ZColumns=M, otherwise ZColumns can be arbitrary. FromTheRight - pre- or post-multiply. DoTranspose - multiply by Q or Q'. Output parameters: Z - product of Z and Q. Array[0..ZRows-1,0..ZColumns-1] If ZRows=0 or ZColumns=0, the array is not modified. -- ALGLIB -- 2005-2010 Bochkanov Sergey *************************************************************************/
void rmatrixbdmultiplybyq(const real_2d_array &qp, const ae_int_t m, const ae_int_t n, const real_1d_array &tauq, real_2d_array &z, const ae_int_t zrows, const ae_int_t zcolumns, const bool fromtheright, const bool dotranspose, const xparams _xparams = alglib::xdefault);
/************************************************************************* Unpacking of the main and secondary diagonals of bidiagonal decomposition of matrix A. Input parameters: B - output of RMatrixBD subroutine. M - number of rows in matrix B. N - number of columns in matrix B. Output parameters: IsUpper - True, if the matrix is upper bidiagonal. otherwise IsUpper is False. D - the main diagonal. Array whose index ranges within [0..Min(M,N)-1]. E - the secondary diagonal (upper or lower, depending on the value of IsUpper). Array index ranges within [0..Min(M,N)-1], the last element is not used. -- ALGLIB -- 2005-2010 Bochkanov Sergey *************************************************************************/
void rmatrixbdunpackdiagonals(const real_2d_array &b, const ae_int_t m, const ae_int_t n, bool &isupper, real_1d_array &d, real_1d_array &e, const xparams _xparams = alglib::xdefault);
/************************************************************************* Unpacking matrix P which reduces matrix A to bidiagonal form. The subroutine returns transposed matrix P. Input parameters: QP - matrices Q and P in compact form. Output of ToBidiagonal subroutine. M - number of rows in matrix A. N - number of columns in matrix A. TAUP - scalar factors which are used to form P. Output of ToBidiagonal subroutine. PTRows - required number of rows of matrix P^T. N >= PTRows >= 0. Output parameters: PT - first PTRows columns of matrix P^T Array[0..PTRows-1, 0..N-1] If PTRows=0, the array is not modified. -- ALGLIB -- 2005-2010 Bochkanov Sergey *************************************************************************/
void rmatrixbdunpackpt(const real_2d_array &qp, const ae_int_t m, const ae_int_t n, const real_1d_array &taup, const ae_int_t ptrows, real_2d_array &pt, const xparams _xparams = alglib::xdefault);
/************************************************************************* Unpacking matrix Q which reduces a matrix to bidiagonal form. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. Input parameters: QP - matrices Q and P in compact form. Output of ToBidiagonal subroutine. M - number of rows in matrix A. N - number of columns in matrix A. TAUQ - scalar factors which are used to form Q. Output of ToBidiagonal subroutine. QColumns - required number of columns in matrix Q. M>=QColumns>=0. Output parameters: Q - first QColumns columns of matrix Q. Array[0..M-1, 0..QColumns-1] If QColumns=0, the array is not modified. -- ALGLIB -- 2005-2010 Bochkanov Sergey *************************************************************************/
void rmatrixbdunpackq(const real_2d_array &qp, const ae_int_t m, const ae_int_t n, const real_1d_array &tauq, const ae_int_t qcolumns, real_2d_array &q, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Reduction of a square matrix to upper Hessenberg form: Q'*A*Q = H, where Q
is an orthogonal matrix, H - Hessenberg matrix.

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes following important improvements
! of this function:
! * high-performance native backend with same C# interface (C# version)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend you to read 'Working with commercial version' section of
! ALGLIB Reference Manual in order to find out how to use performance-
! related features provided by commercial edition of ALGLIB.

Input parameters:
    A       -   matrix A with elements [0..N-1, 0..N-1]
    N       -   size of matrix A.

Output parameters:
    A       -   matrices Q and P in compact form (see below).
    Tau     -   array of scalar factors which are used to form matrix Q.
                Array whose index ranges within [0..N-2]

Matrix H is located on the main diagonal, on the lower secondary diagonal
and above the main diagonal of matrix A. The elements which are used to
form matrix Q are situated in array Tau and below the lower secondary
diagonal of matrix A as follows:

Matrix Q is represented as a product of elementary reflections

Q = H(0)*H(1)*...*H(n-2),

where each H(i) is given by

H(i) = 1 - tau * v * (v^T)

where tau is a scalar stored in Tau[I]; v is a real vector, so that
v(0:i) = 0, v(i+1) = 1, v(i+2:n-1) stored in A(i+2:n-1,i).

  -- LAPACK routine (version 3.0) --
     Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd.,
     Courant Institute, Argonne National Lab, and Rice University
     October 31, 1992
*************************************************************************/
void rmatrixhessenberg(real_2d_array &a, const ae_int_t n, real_1d_array &tau, const xparams _xparams = alglib::xdefault);
/************************************************************************* Unpacking matrix H (the result of matrix A reduction to upper Hessenberg form) Input parameters: A - output of RMatrixHessenberg subroutine. N - size of matrix A. Output parameters: H - matrix H. Array whose indexes range within [0..N-1, 0..N-1]. -- ALGLIB -- 2005-2010 Bochkanov Sergey *************************************************************************/
void rmatrixhessenbergunpackh(const real_2d_array &a, const ae_int_t n, real_2d_array &h, const xparams _xparams = alglib::xdefault);
/************************************************************************* Unpacking matrix Q which reduces matrix A to upper Hessenberg form ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. Input parameters: A - output of RMatrixHessenberg subroutine. N - size of matrix A. Tau - scalar factors which are used to form Q. Output of RMatrixHessenberg subroutine. Output parameters: Q - matrix Q. Array whose indexes range within [0..N-1, 0..N-1]. -- ALGLIB -- 2005-2010 Bochkanov Sergey *************************************************************************/
void rmatrixhessenbergunpackq(const real_2d_array &a, const ae_int_t n, const real_1d_array &tau, real_2d_array &q, const xparams _xparams = alglib::xdefault);
/*************************************************************************
LQ decomposition of a rectangular matrix of size MxN

Input parameters:
    A   -   matrix A whose indexes range within [0..M-1, 0..N-1].
    M   -   number of rows in matrix A.
    N   -   number of columns in matrix A.

Output parameters:
    A   -   matrices L and Q in compact form (see below)
    Tau -   array of scalar factors which are used to form matrix Q.
            Array whose index ranges within [0..Min(M,N)-1].

Matrix A is represented as A = LQ, where Q is an orthogonal matrix of
size MxM, L - lower triangular (or lower trapezoid) matrix of size M x N.

The elements of matrix L are located on and below the main diagonal of
matrix A. The elements which are located in Tau array and above the main
diagonal of matrix A are used to form matrix Q as follows:

Matrix Q is represented as a product of elementary reflections

Q = H(k-1)*H(k-2)*...*H(1)*H(0),

where k = min(m,n), and each H(i) is of the form

H(i) = 1 - tau * v * (v^T)

where tau is a scalar stored in Tau[I]; v - real vector, so that
v(0:i-1)=0, v(i) = 1, v(i+1:n-1) stored in A(i,i+1:n-1).

! FREE EDITION OF ALGLIB:
!
! Free Edition of ALGLIB supports following important features for this
! function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual in order to find out how to activate SIMD support
! in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes following important improvements
! of this function:
! * high-performance native backend with same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend you to read 'Working with commercial version' section of
! ALGLIB Reference Manual in order to find out how to use performance-
! related features provided by commercial edition of ALGLIB.

  -- ALGLIB routine --
     17.02.2010
     Bochkanov Sergey
*************************************************************************/
void rmatrixlq(real_2d_array &a, const ae_int_t m, const ae_int_t n, real_1d_array &tau, const xparams _xparams = alglib::xdefault);
/************************************************************************* Unpacking of matrix L from the LQ decomposition of a matrix A Input parameters: A - matrices Q and L in compact form. Output of RMatrixLQ subroutine. M - number of rows in given matrix A. M>=0. N - number of columns in given matrix A. N>=0. Output parameters: L - matrix L, array[0..M-1, 0..N-1]. -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/
void rmatrixlqunpackl(const real_2d_array &a, const ae_int_t m, const ae_int_t n, real_2d_array &l, const xparams _xparams = alglib::xdefault);
/************************************************************************* Partial unpacking of matrix Q from the LQ decomposition of a matrix A Input parameters: A - matrices L and Q in compact form. Output of RMatrixLQ subroutine. M - number of rows in given matrix A. M>=0. N - number of columns in given matrix A. N>=0. Tau - scalar factors which are used to form Q. Output of the RMatrixLQ subroutine. QRows - required number of rows in matrix Q. N>=QRows>=0. Output parameters: Q - first QRows rows of matrix Q. Array whose indexes range within [0..QRows-1, 0..N-1]. If QRows=0, the array remains unchanged. ! FREE EDITION OF ALGLIB: ! ! Free Edition of ALGLIB supports following important features for this ! function: ! * C++ version: x64 SIMD support using C++ intrinsics ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics ! ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB ! Reference Manual in order to find out how to activate SIMD support ! in ALGLIB. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/
void rmatrixlqunpackq(const real_2d_array &a, const ae_int_t m, const ae_int_t n, const real_1d_array &tau, const ae_int_t qrows, real_2d_array &q, const xparams _xparams = alglib::xdefault);
/*************************************************************************
QR decomposition of a rectangular matrix of size MxN

Input parameters:
    A   -   matrix A whose indexes range within [0..M-1, 0..N-1].
    M   -   number of rows in matrix A.
    N   -   number of columns in matrix A.

Output parameters:
    A   -   matrices Q and R in compact form (see below).
    Tau -   array of scalar factors which are used to form matrix Q.
            Array whose index ranges within [0..Min(M-1,N-1)].

Matrix A is represented as A = QR, where Q is an orthogonal matrix of
size MxM, R - upper triangular (or upper trapezoid) matrix of size M x N.

The elements of matrix R are located on and above the main diagonal of
matrix A. The elements which are located in Tau array and below the main
diagonal of matrix A are used to form matrix Q as follows:

Matrix Q is represented as a product of elementary reflections

Q = H(0)*H(1)*...*H(k-1),

where k = min(m,n), and each H(i) is in the form

H(i) = 1 - tau * v * (v^T)

where tau is a scalar stored in Tau[I]; v - real vector, so that
v(0:i-1) = 0, v(i) = 1, v(i+1:m-1) stored in A(i+1:m-1,i).

! FREE EDITION OF ALGLIB:
!
! Free Edition of ALGLIB supports following important features for this
! function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual in order to find out how to activate SIMD support
! in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes following important improvements
! of this function:
! * high-performance native backend with same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend you to read 'Working with commercial version' section of
! ALGLIB Reference Manual in order to find out how to use performance-
! related features provided by commercial edition of ALGLIB.

  -- ALGLIB routine --
     17.02.2010
     Bochkanov Sergey
*************************************************************************/
void rmatrixqr(real_2d_array &a, const ae_int_t m, const ae_int_t n, real_1d_array &tau, const xparams _xparams = alglib::xdefault);
/************************************************************************* Partial unpacking of matrix Q from the QR decomposition of a matrix A Input parameters: A - matrices Q and R in compact form. Output of RMatrixQR subroutine. M - number of rows in given matrix A. M>=0. N - number of columns in given matrix A. N>=0. Tau - scalar factors which are used to form Q. Output of the RMatrixQR subroutine. QColumns - required number of columns of matrix Q. M>=QColumns>=0. Output parameters: Q - first QColumns columns of matrix Q. Array whose indexes range within [0..M-1, 0..QColumns-1]. If QColumns=0, the array remains unchanged. ! FREE EDITION OF ALGLIB: ! ! Free Edition of ALGLIB supports following important features for this ! function: ! * C++ version: x64 SIMD support using C++ intrinsics ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics ! ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB ! Reference Manual in order to find out how to activate SIMD support ! in ALGLIB. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/
void rmatrixqrunpackq(const real_2d_array &a, const ae_int_t m, const ae_int_t n, const real_1d_array &tau, const ae_int_t qcolumns, real_2d_array &q, const xparams _xparams = alglib::xdefault);
/************************************************************************* Unpacking of matrix R from the QR decomposition of a matrix A Input parameters: A - matrices Q and R in compact form. Output of RMatrixQR subroutine. M - number of rows in given matrix A. M>=0. N - number of columns in given matrix A. N>=0. Output parameters: R - matrix R, array[0..M-1, 0..N-1]. -- ALGLIB routine -- 17.02.2010 Bochkanov Sergey *************************************************************************/
void rmatrixqrunpackr(const real_2d_array &a, const ae_int_t m, const ae_int_t n, real_2d_array &r, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Reduction of a symmetric matrix which is given by its higher or lower
triangular part to a tridiagonal matrix using orthogonal similarity
transformation: Q'*A*Q=T.

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes following important improvements
! of this function:
! * high-performance native backend with same C# interface (C# version)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend you to read 'Working with commercial version' section of
! ALGLIB Reference Manual in order to find out how to use performance-
! related features provided by commercial edition of ALGLIB.

Input parameters:
    A       -   matrix to be transformed
                array with elements [0..N-1, 0..N-1].
    N       -   size of matrix A.
    IsUpper -   storage format. If IsUpper = True, then matrix A is given
                by its upper triangle, and the lower triangle is not used
                and not modified by the algorithm, and vice versa
                if IsUpper = False.

Output parameters:
    A       -   matrices T and Q in compact form (see below)
    Tau     -   array of factors which are forming matrices H(i)
                array with elements [0..N-2].
    D       -   main diagonal of symmetric matrix T.
                array with elements [0..N-1].
    E       -   secondary diagonal of symmetric matrix T.
                array with elements [0..N-2].

If IsUpper=True, the matrix Q is represented as a product of elementary
reflectors

   Q = H(n-2) . . . H(1) H(0).

Each H(i) has the form

   H(i) = I - tau * v * v'

where tau is a real scalar, and v is a real vector with v(i+1:n-1) = 0,
v(i) = 1, v(0:i-1) is stored on exit in A(0:i-1,i+1), and tau in TAU(i).

If IsUpper=False, the matrix Q is represented as a product of elementary
reflectors

   Q = H(0) H(1) . . . H(n-2).

Each H(i) has the form

   H(i) = I - tau * v * v'

where tau is a real scalar, and v is a real vector with v(0:i) = 0,
v(i+1) = 1, v(i+2:n-1) is stored on exit in A(i+2:n-1,i), and tau in
TAU(i).

The contents of A on exit are illustrated by the following examples with
n = 5:

if UPLO = 'U':                       if UPLO = 'L':

(  d   e   v1  v2  v3 )              (  d                  )
(      d   e   v2  v3 )              (  e   d              )
(          d   e   v3 )              (  v0  e   d          )
(              d   e  )              (  v0  v1  e   d      )
(                  d  )              (  v0  v1  v2  e   d  )

where d and e denote diagonal and off-diagonal elements of T, and vi
denotes an element of the vector defining H(i).

  -- LAPACK routine (version 3.0) --
     Univ. of Tennessee, Univ. of California Berkeley, NAG Ltd.,
     Courant Institute, Argonne National Lab, and Rice University
     October 31, 1992
*************************************************************************/
void smatrixtd(real_2d_array &a, const ae_int_t n, const bool isupper, real_1d_array &tau, real_1d_array &d, real_1d_array &e, const xparams _xparams = alglib::xdefault);
/************************************************************************* Unpacking matrix Q which reduces symmetric matrix to a tridiagonal form. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. Input parameters: A - the result of a SMatrixTD subroutine N - size of matrix A. IsUpper - storage format (a parameter of SMatrixTD subroutine) Tau - the result of a SMatrixTD subroutine Output parameters: Q - transformation matrix. array with elements [0..N-1, 0..N-1]. -- ALGLIB -- Copyright 2005-2010 by Bochkanov Sergey *************************************************************************/
void smatrixtdunpackq(const real_2d_array &a, const ae_int_t n, const bool isupper, const real_1d_array &tau, real_2d_array &q, const xparams _xparams = alglib::xdefault);
pspline2interpolant
pspline3interpolant
parametricrdpfixed
pspline2arclength
pspline2build
pspline2buildperiodic
pspline2calc
pspline2diff
pspline2diff2
pspline2parametervalues
pspline2tangent
pspline3arclength
pspline3build
pspline3buildperiodic
pspline3calc
pspline3diff
pspline3diff2
pspline3parametervalues
pspline3tangent
parametric_rdp Parametric Ramer-Douglas-Peucker approximation
/*************************************************************************
Parametric spline interpolant: 2-dimensional curve.

You should not try to access its members directly - use PSpline2XXXXXXXX()
functions instead.
*************************************************************************/
class pspline2interpolant { public: pspline2interpolant(); pspline2interpolant(const pspline2interpolant &rhs); pspline2interpolant& operator=(const pspline2interpolant &rhs); virtual ~pspline2interpolant(); };
/*************************************************************************
Parametric spline interpolant: 3-dimensional curve.

You should not try to access its members directly - use PSpline3XXXXXXXX()
functions instead.
*************************************************************************/
class pspline3interpolant { public: pspline3interpolant(); pspline3interpolant(const pspline3interpolant &rhs); pspline3interpolant& operator=(const pspline3interpolant &rhs); virtual ~pspline3interpolant(); };
/*************************************************************************
This subroutine fits piecewise linear curve to points with Ramer-Douglas-
Peucker algorithm. This function performs PARAMETRIC fit, i.e. it can be
used to fit curves like circles.

On input it accepts dataset which describes parametric multidimensional
curve X(t), with X being vector, and t taking values in [0,N), where N is
a number of points in dataset. As a result, it returns reduced dataset X2,
which can be used to build parametric curve X2(t), which approximates
X(t) with desired precision (or has specified number of sections).

INPUT PARAMETERS:
    X       -   array of multidimensional points:
                * at least N elements, leading N elements are used if
                  more than N elements were specified
                * order of points is IMPORTANT because it is parametric
                  fit
                * each row of array is one point which has D coordinates
    N       -   number of elements in X
    D       -   number of dimensions (elements per row of X)
    StopM   -   stopping condition - desired number of sections:
                * at most M sections are generated by this function
                * less than M sections can be generated if we have N<M
                  (or some X are non-distinct).
                * zero StopM means that algorithm does not stop after
                  achieving some pre-specified section count
    StopEps -   stopping condition - desired precision:
                * algorithm stops after error in each section is at most
                  Eps
                * zero Eps means that algorithm does not stop after
                  achieving some pre-specified precision

OUTPUT PARAMETERS:
    X2      -   array of corner points for piecewise approximation, has
                length NSections+1 or zero (for NSections=0).
    Idx2    -   array of indexes (parameter values):
                * has length NSections+1 or zero (for NSections=0).
                * each element of Idx2 corresponds to same-numbered
                  element of X2
                * each element of Idx2 is index of corresponding element
                  of X2 at original array X, i.e. I-th row of X2 is
                  Idx2[I]-th row of X.
                * elements of Idx2 can be treated as parameter values
                  which should be used when building new parametric curve
                * Idx2[0]=0, Idx2[NSections]=N-1
    NSections-  number of sections found by algorithm, NSections<=M,
                NSections can be zero for degenerate datasets (N<=1 or
                all X[] are non-distinct).

NOTE: algorithm stops after:
      a) dividing curve into StopM sections
      b) achieving required precision StopEps
      c) dividing curve into N-1 sections
      If both StopM and StopEps are non-zero, algorithm is stopped by the
      FIRST criterion which is satisfied. In case both StopM and StopEps
      are zero, algorithm stops because of (c).

  -- ALGLIB --
     Copyright 02.10.2014 by Bochkanov Sergey
*************************************************************************/
void parametricrdpfixed(const real_2d_array &x, const ae_int_t n, const ae_int_t d, const ae_int_t stopm, const double stopeps, real_2d_array &x2, integer_1d_array &idx2, ae_int_t &nsections, const xparams _xparams = alglib::xdefault);


/************************************************************************* This function calculates arc length, i.e. length of curve between t=a and t=b. INPUT PARAMETERS: P - parametric spline interpolant A,B - parameter values corresponding to arc ends: * B>A will result in positive length returned * B<A will result in negative length returned RESULT: length of arc starting at T=A and ending at T=B. -- ALGLIB PROJECT -- Copyright 30.05.2010 by Bochkanov Sergey *************************************************************************/
double pspline2arclength(const pspline2interpolant &p, const double a, const double b, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function builds non-periodic 2-dimensional parametric spline which
starts at (X[0],Y[0]) and ends at (X[N-1],Y[N-1]).

INPUT PARAMETERS:
    XY  -   points, array[0..N-1,0..1].
            XY[I,0:1] corresponds to the Ith point.
            Order of points is important!
    N   -   points count, N>=5 for Akima splines, N>=2 for other types of
            splines.
    ST  -   spline type:
            * 0     Akima spline
            * 1     parabolically terminated Catmull-Rom spline
                    (Tension=0)
            * 2     parabolically terminated cubic spline
    PT  -   parameterization type:
            * 0     uniform
            * 1     chord length
            * 2     centripetal

OUTPUT PARAMETERS:
    P   -   parametric spline interpolant

NOTES:
* this function assumes that all consecutive points are distinct, i.e.
  (x0,y0)<>(x1,y1), (x1,y1)<>(x2,y2), (x2,y2)<>(x3,y3) and so on.
  However, non-consecutive points may coincide, i.e. we can have
  (x0,y0)=(x2,y2).

  -- ALGLIB PROJECT --
     Copyright 28.05.2010 by Bochkanov Sergey
*************************************************************************/
void pspline2build(const real_2d_array &xy, const ae_int_t n, const ae_int_t st, const ae_int_t pt, pspline2interpolant &p, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function builds periodic 2-dimensional parametric spline which
starts at (X[0],Y[0]), goes through all points to (X[N-1],Y[N-1]) and
then back to (X[0],Y[0]).

INPUT PARAMETERS:
    XY  -   points, array[0..N-1,0..1].
            XY[I,0:1] corresponds to the Ith point.
            XY[N-1,0:1] must be different from XY[0,0:1].
            Order of points is important!
    N   -   points count, N>=3.
    ST  -   spline type:
            * 1     Catmull-Rom spline (Tension=0) with cyclic boundary
                    conditions
            * 2     cubic spline with cyclic boundary conditions
    PT  -   parameterization type:
            * 0     uniform
            * 1     chord length
            * 2     centripetal

OUTPUT PARAMETERS:
    P   -   parametric spline interpolant

NOTES:
* this function assumes that all consecutive points are distinct, i.e.
  (x0,y0)<>(x1,y1), (x1,y1)<>(x2,y2), (x2,y2)<>(x3,y3) and so on.
  However, non-consecutive points may coincide, i.e. we can have
  (x0,y0)=(x2,y2).
* the last point of the sequence is NOT equal to the first point. You
  shouldn't make the curve "explicitly periodic" by making them equal.

  -- ALGLIB PROJECT --
     Copyright 28.05.2010 by Bochkanov Sergey
*************************************************************************/
void pspline2buildperiodic(const real_2d_array &xy, const ae_int_t n, const ae_int_t st, const ae_int_t pt, pspline2interpolant &p, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function calculates the value of the parametric spline for a given
value of parameter T.

INPUT PARAMETERS:
    P   -   parametric spline interpolant
    T   -   point:
            * T in [0,1] corresponds to interval spanned by points
            * for non-periodic splines T<0 (or T>1) correspond to parts of
              the curve before the first (after the last) point
            * for periodic splines T<0 (or T>1) are projected into [0,1]
              by making T=T-floor(T).

OUTPUT PARAMETERS:
    X   -   X-position
    Y   -   Y-position

  -- ALGLIB PROJECT --
     Copyright 28.05.2010 by Bochkanov Sergey
*************************************************************************/
void pspline2calc(const pspline2interpolant &p, const double t, double &x, double &y, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function calculates derivative, i.e. it returns (dX/dT,dY/dT).

INPUT PARAMETERS:
    P   -   parametric spline interpolant
    T   -   point:
            * T in [0,1] corresponds to interval spanned by points
            * for non-periodic splines T<0 (or T>1) correspond to parts of
              the curve before the first (after the last) point
            * for periodic splines T<0 (or T>1) are projected into [0,1]
              by making T=T-floor(T).

OUTPUT PARAMETERS:
    X   -   X-value
    DX  -   X-derivative
    Y   -   Y-value
    DY  -   Y-derivative

  -- ALGLIB PROJECT --
     Copyright 28.05.2010 by Bochkanov Sergey
*************************************************************************/
void pspline2diff(const pspline2interpolant &p, const double t, double &x, double &dx, double &y, double &dy, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function calculates first and second derivative with respect to T.

INPUT PARAMETERS:
    P   -   parametric spline interpolant
    T   -   point:
            * T in [0,1] corresponds to interval spanned by points
            * for non-periodic splines T<0 (or T>1) correspond to parts of
              the curve before the first (after the last) point
            * for periodic splines T<0 (or T>1) are projected into [0,1]
              by making T=T-floor(T).

OUTPUT PARAMETERS:
    X   -   X-value
    DX  -   derivative
    D2X -   second derivative
    Y   -   Y-value
    DY  -   derivative
    D2Y -   second derivative

  -- ALGLIB PROJECT --
     Copyright 28.05.2010 by Bochkanov Sergey
*************************************************************************/
void pspline2diff2(const pspline2interpolant &p, const double t, double &x, double &dx, double &d2x, double &y, double &dy, double &d2y, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function returns the vector of parameter values corresponding to the
points.

I.e. for P created from (X[0],Y[0])...(X[N-1],Y[N-1]) and U=TValues(P) we
have
    (X[0],Y[0]) = PSpline2Calc(P,U[0]),
    (X[1],Y[1]) = PSpline2Calc(P,U[1]),
    (X[2],Y[2]) = PSpline2Calc(P,U[2]),
    ...

INPUT PARAMETERS:
    P   -   parametric spline interpolant

OUTPUT PARAMETERS:
    N   -   array size
    T   -   array[0..N-1]

NOTES:
* for non-periodic splines U[0]=0, U[0]<U[1]<...<U[N-1], U[N-1]=1
* for periodic splines     U[0]=0, U[0]<U[1]<...<U[N-1], U[N-1]<1

  -- ALGLIB PROJECT --
     Copyright 28.05.2010 by Bochkanov Sergey
*************************************************************************/
void pspline2parametervalues(const pspline2interpolant &p, ae_int_t &n, real_1d_array &t, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function calculates tangent vector for a given value of parameter T.

INPUT PARAMETERS:
    P   -   parametric spline interpolant
    T   -   point:
            * T in [0,1] corresponds to interval spanned by points
            * for non-periodic splines T<0 (or T>1) correspond to parts of
              the curve before the first (after the last) point
            * for periodic splines T<0 (or T>1) are projected into [0,1]
              by making T=T-floor(T).

OUTPUT PARAMETERS:
    X   -   X-component of tangent vector (normalized)
    Y   -   Y-component of tangent vector (normalized)

NOTE:
    X^2+Y^2 is either 1 (for non-zero tangent vector) or 0.

  -- ALGLIB PROJECT --
     Copyright 28.05.2010 by Bochkanov Sergey
*************************************************************************/
void pspline2tangent(const pspline2interpolant &p, const double t, double &x, double &y, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function calculates arc length, i.e. length of curve between t=a and
t=b.

INPUT PARAMETERS:
    P   -   parametric spline interpolant
    A,B -   parameter values corresponding to arc ends:
            * B>A will result in positive length returned
            * B<A will result in negative length returned

RESULT:
    length of arc starting at T=A and ending at T=B.

  -- ALGLIB PROJECT --
     Copyright 30.05.2010 by Bochkanov Sergey
*************************************************************************/
double pspline3arclength(const pspline3interpolant &p, const double a, const double b, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function builds non-periodic 3-dimensional parametric spline which
starts at (X[0],Y[0],Z[0]) and ends at (X[N-1],Y[N-1],Z[N-1]).

Same as PSpline2Build() function, but for 3D, so we won't duplicate its
description here.

  -- ALGLIB PROJECT --
     Copyright 28.05.2010 by Bochkanov Sergey
*************************************************************************/
void pspline3build(const real_2d_array &xy, const ae_int_t n, const ae_int_t st, const ae_int_t pt, pspline3interpolant &p, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function builds periodic 3-dimensional parametric spline which starts
at (X[0],Y[0],Z[0]), goes through all points to (X[N-1],Y[N-1],Z[N-1]) and
then back to (X[0],Y[0],Z[0]).

Same as PSpline2BuildPeriodic() function, but for 3D, so we won't
duplicate its description here.

  -- ALGLIB PROJECT --
     Copyright 28.05.2010 by Bochkanov Sergey
*************************************************************************/
void pspline3buildperiodic(const real_2d_array &xy, const ae_int_t n, const ae_int_t st, const ae_int_t pt, pspline3interpolant &p, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function calculates the value of the parametric spline for a given
value of parameter T.

INPUT PARAMETERS:
    P   -   parametric spline interpolant
    T   -   point:
            * T in [0,1] corresponds to interval spanned by points
            * for non-periodic splines T<0 (or T>1) correspond to parts of
              the curve before the first (after the last) point
            * for periodic splines T<0 (or T>1) are projected into [0,1]
              by making T=T-floor(T).

OUTPUT PARAMETERS:
    X   -   X-position
    Y   -   Y-position
    Z   -   Z-position

  -- ALGLIB PROJECT --
     Copyright 28.05.2010 by Bochkanov Sergey
*************************************************************************/
void pspline3calc(const pspline3interpolant &p, const double t, double &x, double &y, double &z, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function calculates derivative, i.e. it returns (dX/dT,dY/dT,dZ/dT).

INPUT PARAMETERS:
    P   -   parametric spline interpolant
    T   -   point:
            * T in [0,1] corresponds to interval spanned by points
            * for non-periodic splines T<0 (or T>1) correspond to parts of
              the curve before the first (after the last) point
            * for periodic splines T<0 (or T>1) are projected into [0,1]
              by making T=T-floor(T).

OUTPUT PARAMETERS:
    X   -   X-value
    DX  -   X-derivative
    Y   -   Y-value
    DY  -   Y-derivative
    Z   -   Z-value
    DZ  -   Z-derivative

  -- ALGLIB PROJECT --
     Copyright 28.05.2010 by Bochkanov Sergey
*************************************************************************/
void pspline3diff(const pspline3interpolant &p, const double t, double &x, double &dx, double &y, double &dy, double &z, double &dz, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function calculates first and second derivative with respect to T.

INPUT PARAMETERS:
    P   -   parametric spline interpolant
    T   -   point:
            * T in [0,1] corresponds to interval spanned by points
            * for non-periodic splines T<0 (or T>1) correspond to parts of
              the curve before the first (after the last) point
            * for periodic splines T<0 (or T>1) are projected into [0,1]
              by making T=T-floor(T).

OUTPUT PARAMETERS:
    X   -   X-value
    DX  -   derivative
    D2X -   second derivative
    Y   -   Y-value
    DY  -   derivative
    D2Y -   second derivative
    Z   -   Z-value
    DZ  -   derivative
    D2Z -   second derivative

  -- ALGLIB PROJECT --
     Copyright 28.05.2010 by Bochkanov Sergey
*************************************************************************/
void pspline3diff2(const pspline3interpolant &p, const double t, double &x, double &dx, double &d2x, double &y, double &dy, double &d2y, double &z, double &dz, double &d2z, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function returns the vector of parameter values corresponding to the
points.

Same as PSpline2ParameterValues(), but for 3D.

  -- ALGLIB PROJECT --
     Copyright 28.05.2010 by Bochkanov Sergey
*************************************************************************/
void pspline3parametervalues(const pspline3interpolant &p, ae_int_t &n, real_1d_array &t, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function calculates tangent vector for a given value of parameter T.

INPUT PARAMETERS:
    P   -   parametric spline interpolant
    T   -   point:
            * T in [0,1] corresponds to interval spanned by points
            * for non-periodic splines T<0 (or T>1) correspond to parts of
              the curve before the first (after the last) point
            * for periodic splines T<0 (or T>1) are projected into [0,1]
              by making T=T-floor(T).

OUTPUT PARAMETERS:
    X   -   X-component of tangent vector (normalized)
    Y   -   Y-component of tangent vector (normalized)
    Z   -   Z-component of tangent vector (normalized)

NOTE:
    X^2+Y^2+Z^2 is either 1 (for non-zero tangent vector) or 0.

  -- ALGLIB PROJECT --
     Copyright 28.05.2010 by Bochkanov Sergey
*************************************************************************/
void pspline3tangent(const pspline3interpolant &p, const double t, double &x, double &y, double &z, const xparams _xparams = alglib::xdefault);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // We use the RDP algorithm to approximate a parametric 2D curve given
        // by its locations at t=0,1,2,3 (see below), which form a piecewise
        // linear trajectory through D-dimensional space (2D in our example).
        // 
        //     |
        //     |
        //     -     *     *     X2................X3
        //     |                .
        //     |               .
        //     -     *     *  .  *     *     *     *
        //     |             .
        //     |            .
        //     -     *     X1    *     *     *     *
        //     |      .....
        //     |  ....
        //     X0----|-----|-----|-----|-----|-----|---
        //
        ae_int_t npoints = 4;
        ae_int_t ndimensions = 2;
        real_2d_array x = "[[0,0],[2,1],[3,3],[6,3]]";

        //
        // A parametric curve is approximated by another parametric curve with
        // fewer points, which lets us work with a "compressed" representation
        // that needs less memory. In our example (points deviating by less
        // than 0.8 may be dropped) the approximation has just two sequential
        // sections: one connecting X0 with X2, another connecting X2 with X3.
        // 
        //     |
        //     |
        //     -     *     *     X2................X3
        //     |               . 
        //     |             .  
        //     -     *     .     *     *     *     *
        //     |         .    
        //     |       .     
        //     -     .     X1    *     *     *     *
        //     |   .       
        //     | .    
        //     X0----|-----|-----|-----|-----|-----|---
        //
        //
        real_2d_array y;
        integer_1d_array idxy;
        ae_int_t nsections;
        ae_int_t limitcnt = 0;
        double limiteps = 0.8;
        parametricrdpfixed(x, npoints, ndimensions, limitcnt, limiteps, y, idxy, nsections);
        printf("%d\n", int(nsections)); // EXPECTED: 2
        printf("%s\n", idxy.tostring().c_str()); // EXPECTED: [0,2,3]
    }
    catch(const alglib::ap_error &alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

pcabuildbasis
pcatruncatedsubspace
pcatruncatedsubspacesparse
/*************************************************************************
Principal components analysis

This function builds orthogonal basis where first axis corresponds to
direction with maximum variance, second axis maximizes variance in the
subspace orthogonal to the first axis, and so on.

This function builds FULL basis, i.e. returns N vectors corresponding to
ALL directions, no matter how informative. If you need just a few (say,
10 or 50) of the most important directions, you may find it faster to use
one of the reduced versions:
* pcatruncatedsubspace() - for subspace iteration based method

It should be noted that, unlike LDA, PCA does not use class labels.

INPUT PARAMETERS:
    X       -   dataset, array[NPoints,NVars].
                matrix contains ONLY INDEPENDENT VARIABLES.
    NPoints -   dataset size, NPoints>=0
    NVars   -   number of independent variables, NVars>=1

OUTPUT PARAMETERS:
    S2      -   array[NVars]. variance values corresponding to basis
                vectors.
    V       -   array[NVars,NVars] matrix, whose columns store basis
                vectors.

  ! FREE EDITION OF ALGLIB:
  !
  ! Free Edition of ALGLIB supports following important features for this
  ! function:
  ! * C++ version: x64 SIMD support using C++ intrinsics
  ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
  !
  ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB
  ! Reference Manual in order to find out how to activate SIMD support
  ! in ALGLIB.

  ! COMMERCIAL EDITION OF ALGLIB:
  !
  ! Commercial Edition of ALGLIB includes following important improvements
  ! of this function:
  ! * high-performance native backend with same C# interface (C# version)
  ! * multithreading support (C++ and C# versions)
  ! * hardware vendor (Intel) implementations of linear algebra primitives
  !   (C++ and C# versions, x86/x64 platform)
  !
  ! We recommend you to read 'Working with commercial version' section of
  ! ALGLIB Reference Manual in order to find out how to use performance-
  ! related features provided by commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 25.08.2008 by Bochkanov Sergey
*************************************************************************/
void pcabuildbasis(const real_2d_array &x, const ae_int_t npoints, const ae_int_t nvars, real_1d_array &s2, real_2d_array &v, const xparams _xparams = alglib::xdefault);
void pcabuildbasis(const real_2d_array &x, real_1d_array &s2, real_2d_array &v, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Principal components analysis

This function performs truncated PCA, i.e. returns just a few most
important directions.

Internally it uses iterative eigensolver which is very efficient when
only a minor fraction of full basis is required. Thus, if you need full
basis, it is better to use pcabuildbasis() function.

It should be noted that, unlike LDA, PCA does not use class labels.

INPUT PARAMETERS:
    X       -   dataset, array[0..NPoints-1,0..NVars-1].
                matrix contains ONLY INDEPENDENT VARIABLES.
    NPoints -   dataset size, NPoints>=0
    NVars   -   number of independent variables, NVars>=1
    NNeeded -   number of requested components, in [1,NVars] range;
                this function is efficient only for NNeeded<<NVars.
    Eps     -   desired precision of vectors returned; underlying solver
                will stop iterations as soon as absolute error in
                corresponding singular values reduces to roughly
                eps*MAX(lambda[]), with lambda[] being array of eigen
                values.
                Zero value means that algorithm performs number of
                iterations specified by maxits parameter, without paying
                attention to precision.
    MaxIts  -   number of iterations performed by subspace iteration
                method. Zero value means that no limit on iteration count
                is placed (eps-based stopping condition is used).

OUTPUT PARAMETERS:
    S2      -   array[NNeeded]. Variance values corresponding to basis
                vectors.
    V       -   array[NVars,NNeeded] matrix, whose columns store basis
                vectors.

NOTE: passing eps=0 and maxits=0 results in small eps being selected as
stopping condition. Exact value of automatically selected eps is version-
dependent.

  ! FREE EDITION OF ALGLIB:
  !
  ! Free Edition of ALGLIB supports following important features for this
  ! function:
  ! * C++ version: x64 SIMD support using C++ intrinsics
  ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
  !
  ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB
  ! Reference Manual in order to find out how to activate SIMD support
  ! in ALGLIB.

  ! COMMERCIAL EDITION OF ALGLIB:
  !
  ! Commercial Edition of ALGLIB includes following important improvements
  ! of this function:
  ! * high-performance native backend with same C# interface (C# version)
  ! * multithreading support (C++ and C# versions)
  ! * hardware vendor (Intel) implementations of linear algebra primitives
  !   (C++ and C# versions, x86/x64 platform)
  !
  ! We recommend you to read 'Working with commercial version' section of
  ! ALGLIB Reference Manual in order to find out how to use performance-
  ! related features provided by commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 10.01.2017 by Bochkanov Sergey
*************************************************************************/
void pcatruncatedsubspace(const real_2d_array &x, const ae_int_t npoints, const ae_int_t nvars, const ae_int_t nneeded, const double eps, const ae_int_t maxits, real_1d_array &s2, real_2d_array &v, const xparams _xparams = alglib::xdefault);
void pcatruncatedsubspace(const real_2d_array &x, const ae_int_t nneeded, const double eps, const ae_int_t maxits, real_1d_array &s2, real_2d_array &v, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Sparse truncated principal components analysis

This function performs sparse truncated PCA, i.e. returns just a few most
important principal components for a sparse input X.

Internally it uses iterative eigensolver which is very efficient when
only a minor fraction of full basis is required.

It should be noted that, unlike LDA, PCA does not use class labels.

INPUT PARAMETERS:
    X       -   sparse dataset, sparse npoints*nvars matrix. It is
                recommended to use CRS sparse storage format; non-CRS
                input will be internally converted to CRS.
                Matrix contains ONLY INDEPENDENT VARIABLES, and must be
                EXACTLY npoints*nvars.
    NPoints -   dataset size, NPoints>=0
    NVars   -   number of independent variables, NVars>=1
    NNeeded -   number of requested components, in [1,NVars] range;
                this function is efficient only for NNeeded<<NVars.
    Eps     -   desired precision of vectors returned; underlying solver
                will stop iterations as soon as absolute error in
                corresponding singular values reduces to roughly
                eps*MAX(lambda[]), with lambda[] being array of eigen
                values.
                Zero value means that algorithm performs number of
                iterations specified by maxits parameter, without paying
                attention to precision.
    MaxIts  -   number of iterations performed by subspace iteration
                method. Zero value means that no limit on iteration count
                is placed (eps-based stopping condition is used).

OUTPUT PARAMETERS:
    S2      -   array[NNeeded]. Variance values corresponding to basis
                vectors.
    V       -   array[NVars,NNeeded] matrix, whose columns store basis
                vectors.

NOTE: passing eps=0 and maxits=0 results in small eps being selected as
a stopping condition. Exact value of automatically selected eps is
version-dependent.

NOTE: zero MaxIts is silently replaced by some reasonable value which
prevents eternal loops (possible when inputs are degenerate and too
stringent stopping criteria are specified). In current version it is
50+2*NVars.

  ! FREE EDITION OF ALGLIB:
  !
  ! Free Edition of ALGLIB supports following important features for this
  ! function:
  ! * C++ version: x64 SIMD support using C++ intrinsics
  ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
  !
  ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB
  ! Reference Manual in order to find out how to activate SIMD support
  ! in ALGLIB.

  ! COMMERCIAL EDITION OF ALGLIB:
  !
  ! Commercial Edition of ALGLIB includes following important improvements
  ! of this function:
  ! * high-performance native backend with same C# interface (C# version)
  ! * multithreading support (C++ and C# versions)
  ! * hardware vendor (Intel) implementations of linear algebra primitives
  !   (C++ and C# versions, x86/x64 platform)
  !
  ! We recommend you to read 'Working with commercial version' section of
  ! ALGLIB Reference Manual in order to find out how to use performance-
  ! related features provided by commercial edition of ALGLIB.

  -- ALGLIB --
     Copyright 10.01.2017 by Bochkanov Sergey
*************************************************************************/
void pcatruncatedsubspacesparse(const sparsematrix &x, const ae_int_t npoints, const ae_int_t nvars, const ae_int_t nneeded, const double eps, const ae_int_t maxits, real_1d_array &s2, real_2d_array &v, const xparams _xparams = alglib::xdefault);
invpoissondistribution
poissoncdistribution
poissondistribution
/*************************************************************************
Inverse Poisson distribution

Finds the Poisson variable x such that the integral from 0 to x of the
Poisson density is equal to the given probability y.

This is accomplished using the inverse gamma integral function and the
relation

    m = igami( k+1, y ).

ACCURACY:

See inverse incomplete gamma function

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier
*************************************************************************/
double invpoissondistribution(const ae_int_t k, const double y, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Complemented Poisson distribution

Returns the sum of the terms k+1 to infinity of the Poisson distribution:

          inf.       j
           --    -m  m
           >    e    --
           --        j!
         j=k+1

The terms are not summed directly; instead the incomplete gamma integral
is employed, according to the formula

    y = pdtrc( k, m ) = igam( k+1, m ).

The arguments must both be positive.

ACCURACY:

See incomplete gamma function

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier
*************************************************************************/
double poissoncdistribution(const ae_int_t k, const double m, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Poisson distribution

Returns the sum of the first k+1 terms of the Poisson distribution:

           k         j
           --    -m  m
           >    e    --
           --        j!
          j=0

The terms are not summed directly; instead the incomplete gamma integral
is employed, according to the relation

    y = pdtr( k, m ) = igamc( k+1, m ).

The arguments must both be positive.

ACCURACY:

See incomplete gamma function

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier
*************************************************************************/
double poissondistribution(const ae_int_t k, const double m, const xparams _xparams = alglib::xdefault);
polynomialbar2cheb
polynomialbar2pow
polynomialbuild
polynomialbuildcheb1
polynomialbuildcheb2
polynomialbuildeqdist
polynomialcalccheb1
polynomialcalccheb2
polynomialcalceqdist
polynomialcheb2bar
polynomialpow2bar
polint_d_calcdiff Interpolation and differentiation using barycentric representation
polint_d_conv Conversion between power basis and barycentric representation
polint_d_spec Polynomial interpolation on special grids (equidistant, Chebyshev I/II)
/*************************************************************************
Conversion from barycentric representation to Chebyshev basis.
This function has O(N^2) complexity.

INPUT PARAMETERS:
    P   -   polynomial in barycentric form
    A,B -   base interval for Chebyshev polynomials (see below)
            A<>B

OUTPUT PARAMETERS:
    T   -   coefficients of Chebyshev representation;
            P(x) = sum { T[i]*Ti(2*(x-A)/(B-A)-1), i=0..N-1 },
            where Ti is the I-th Chebyshev polynomial.

NOTES:
    The barycentric interpolant passed as P may be either a polynomial
    obtained from polynomial interpolation/fitting, or a rational function
    which is NOT a polynomial. We can't distinguish between these two
    cases, and this algorithm just tries to work assuming that P IS a
    polynomial. If not, the algorithm will return results, but they won't
    have any meaning.

  -- ALGLIB --
     Copyright 30.09.2010 by Bochkanov Sergey
*************************************************************************/
void polynomialbar2cheb(const barycentricinterpolant &p, const double a, const double b, real_1d_array &t, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Conversion from barycentric representation to power basis.
This function has O(N^2) complexity.

INPUT PARAMETERS:
    P   -   polynomial in barycentric form
    C   -   offset (see below); 0.0 is used as default value.
    S   -   scale (see below); 1.0 is used as default value. S<>0.

OUTPUT PARAMETERS:
    A   -   coefficients, P(x) = sum { A[i]*((X-C)/S)^i, i=0..N-1 }
    N   -   number of coefficients (polynomial degree plus 1)

NOTES:
1. this function accepts offset and scale, which can be set to improve
   numerical properties of polynomial. For example, if P was obtained as
   result of interpolation on [-1,+1], you can set C=0 and S=1 and
   represent P as sum of 1, x, x^2, x^3 and so on. In most cases it is
   exactly what you need.
   However, if your interpolation model was built on [999,1001], you will
   see significant growth of numerical errors when using {1, x, x^2, x^3}
   as basis. Representing P as sum of 1, (x-1000), (x-1000)^2, (x-1000)^3
   will be a better option. Such representation can be obtained by using
   1000.0 as offset C and 1.0 as scale S.
2. power basis is ill-conditioned and tricks described above can't solve
   this problem completely. This function will return coefficients in any
   case, but for N>8 they will become unreliable. However, N's less than
   5 are pretty safe.
3. the barycentric interpolant passed as P may be either a polynomial
   obtained from polynomial interpolation/fitting, or a rational function
   which is NOT a polynomial. We can't distinguish between these two
   cases, and this algorithm just tries to work assuming that P IS a
   polynomial. If not, the algorithm will return results, but they won't
   have any meaning.

  -- ALGLIB --
     Copyright 30.09.2010 by Bochkanov Sergey
*************************************************************************/
void polynomialbar2pow(const barycentricinterpolant &p, const double c, const double s, real_1d_array &a, const xparams _xparams = alglib::xdefault);
void polynomialbar2pow(const barycentricinterpolant &p, real_1d_array &a, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Lagrange interpolant: generation of the model on the general grid.
This function has O(N^2) complexity.

INPUT PARAMETERS:
    X   -   abscissas, array[0..N-1]
    Y   -   function values, array[0..N-1]
    N   -   number of points, N>=1

OUTPUT PARAMETERS:
    P   -   barycentric model which represents Lagrange interpolant
            (see ratint unit info and BarycentricCalc() description for
            more information).

  -- ALGLIB --
     Copyright 02.12.2009 by Bochkanov Sergey
*************************************************************************/
void polynomialbuild(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, barycentricinterpolant &p, const xparams _xparams = alglib::xdefault);
void polynomialbuild(const real_1d_array &x, const real_1d_array &y, barycentricinterpolant &p, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Lagrange interpolant on Chebyshev grid (first kind).
This function has O(N) complexity.

INPUT PARAMETERS:
    A   -   left boundary of [A,B]
    B   -   right boundary of [A,B]
    Y   -   function values at the nodes, array[0..N-1],
            Y[I] = Y(0.5*(B+A) + 0.5*(B-A)*Cos(PI*(2*i+1)/(2*n)))
    N   -   number of points, N>=1
            for N=1 a constant model is constructed.

OUTPUT PARAMETERS:
    P   -   barycentric model which represents Lagrange interpolant
            (see ratint unit info and BarycentricCalc() description for
            more information).

  -- ALGLIB --
     Copyright 03.12.2009 by Bochkanov Sergey
*************************************************************************/
void polynomialbuildcheb1(const double a, const double b, const real_1d_array &y, const ae_int_t n, barycentricinterpolant &p, const xparams _xparams = alglib::xdefault);
void polynomialbuildcheb1(const double a, const double b, const real_1d_array &y, barycentricinterpolant &p, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Lagrange interpolant on Chebyshev grid (second kind).
This function has O(N) complexity.

INPUT PARAMETERS:
    A   -   left boundary of [A,B]
    B   -   right boundary of [A,B]
    Y   -   function values at the nodes, array[0..N-1],
            Y[I] = Y(0.5*(B+A) + 0.5*(B-A)*Cos(PI*i/(n-1)))
    N   -   number of points, N>=1
            for N=1 a constant model is constructed.

OUTPUT PARAMETERS:
    P   -   barycentric model which represents Lagrange interpolant
            (see ratint unit info and BarycentricCalc() description for
            more information).

  -- ALGLIB --
     Copyright 03.12.2009 by Bochkanov Sergey
*************************************************************************/
void polynomialbuildcheb2(const double a, const double b, const real_1d_array &y, const ae_int_t n, barycentricinterpolant &p, const xparams _xparams = alglib::xdefault);
void polynomialbuildcheb2(const double a, const double b, const real_1d_array &y, barycentricinterpolant &p, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Lagrange interpolant: generation of the model on equidistant grid.
This function has O(N) complexity.

INPUT PARAMETERS:
    A   -   left boundary of [A,B]
    B   -   right boundary of [A,B]
    Y   -   function values at the nodes, array[0..N-1]
    N   -   number of points, N>=1
            for N=1 a constant model is constructed.

OUTPUT PARAMETERS:
    P   -   barycentric model which represents Lagrange interpolant
            (see ratint unit info and BarycentricCalc() description for
            more information).

  -- ALGLIB --
     Copyright 03.12.2009 by Bochkanov Sergey
*************************************************************************/
void polynomialbuildeqdist(const double a, const double b, const real_1d_array &y, const ae_int_t n, barycentricinterpolant &p, const xparams _xparams = alglib::xdefault);
void polynomialbuildeqdist(const double a, const double b, const real_1d_array &y, barycentricinterpolant &p, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Fast polynomial interpolation function on Chebyshev points (first kind)
with O(N) complexity.

INPUT PARAMETERS:
    A   -   left boundary of [A,B]
    B   -   right boundary of [A,B]
    F   -   function values, array[0..N-1]
    N   -   number of points on Chebyshev grid (first kind),
            X[i] = 0.5*(B+A) + 0.5*(B-A)*Cos(PI*(2*i+1)/(2*n))
            for N=1 a constant model is constructed.
    T   -   position where P(x) is calculated

RESULT:
    value of the Lagrange interpolant at T

IMPORTANT:
    this function provides a fast interface which is not overflow-safe,
    nor is it very precise.
    the best option is to use PolynomialBuildCheb1()/BarycentricCalc()
    subroutines unless you are pretty sure that your data will not result
    in overflow.

  -- ALGLIB --
     Copyright 02.12.2009 by Bochkanov Sergey
*************************************************************************/
double polynomialcalccheb1(const double a, const double b, const real_1d_array &f, const ae_int_t n, const double t, const xparams _xparams = alglib::xdefault); double polynomialcalccheb1(const double a, const double b, const real_1d_array &f, const double t, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* Fast polynomial interpolation function on Chebyshev points (second kind) with O(N) complexity. INPUT PARAMETERS: A - left boundary of [A,B] B - right boundary of [A,B] F - function values, array[0..N-1] N - number of points on Chebyshev grid (second kind), X[i] = 0.5*(B+A) + 0.5*(B-A)*Cos(PI*i/(n-1)) for N=1 a constant model is constructed. T - position where P(x) is calculated RESULT value of the Lagrange interpolant at T IMPORTANT: this function provides a fast interface which is neither overflow-safe nor very precise. The best option is to use the PolynomialBuildCheb2()/BarycentricCalc() subroutines unless you are pretty sure that your data will not result in overflow. -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/
double polynomialcalccheb2(const double a, const double b, const real_1d_array &f, const ae_int_t n, const double t, const xparams _xparams = alglib::xdefault); double polynomialcalccheb2(const double a, const double b, const real_1d_array &f, const double t, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* Fast equidistant polynomial interpolation function with O(N) complexity INPUT PARAMETERS: A - left boundary of [A,B] B - right boundary of [A,B] F - function values, array[0..N-1] N - number of points on equidistant grid, N>=1 for N=1 a constant model is constructed. T - position where P(x) is calculated RESULT value of the Lagrange interpolant at T IMPORTANT: this function provides a fast interface which is neither overflow-safe nor very precise. The best option is to use the PolynomialBuildEqDist()/BarycentricCalc() subroutines unless you are pretty sure that your data will not result in overflow. -- ALGLIB -- Copyright 02.12.2009 by Bochkanov Sergey *************************************************************************/
double polynomialcalceqdist(const double a, const double b, const real_1d_array &f, const ae_int_t n, const double t, const xparams _xparams = alglib::xdefault); double polynomialcalceqdist(const double a, const double b, const real_1d_array &f, const double t, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* Conversion from Chebyshev basis to barycentric representation. This function has O(N^2) complexity. INPUT PARAMETERS: T - coefficients of Chebyshev representation; P(x) = sum { T[i]*Ti(2*(x-A)/(B-A)-1), i=0..N }, where Ti - I-th Chebyshev polynomial. N - number of coefficients: * if given, only leading N elements of T are used * if not given, automatically determined from size of T A,B - base interval for Chebyshev polynomials (see above) A<B OUTPUT PARAMETERS P - polynomial in barycentric form -- ALGLIB -- Copyright 30.09.2010 by Bochkanov Sergey *************************************************************************/
void polynomialcheb2bar(const real_1d_array &t, const ae_int_t n, const double a, const double b, barycentricinterpolant &p, const xparams _xparams = alglib::xdefault); void polynomialcheb2bar(const real_1d_array &t, const double a, const double b, barycentricinterpolant &p, const xparams _xparams = alglib::xdefault);
/************************************************************************* Conversion from power basis to barycentric representation. This function has O(N^2) complexity. INPUT PARAMETERS: A - coefficients, P(x) = sum { A[i]*((X-C)/S)^i, i=0..N-1 } N - number of coefficients (polynomial degree plus 1) * if given, only leading N elements of A are used * if not given, automatically determined from size of A C - offset (see below); 0.0 is used as default value. S - scale (see below); 1.0 is used as default value. S<>0. OUTPUT PARAMETERS P - polynomial in barycentric form NOTES: 1. this function accepts offset and scale, which can be set to improve numerical properties of polynomial. For example, if you interpolate on [-1,+1], you can set C=0 and S=1 and convert from sum of 1, x, x^2, x^3 and so on. In most cases it is exactly what you need. However, if your interpolation model was built on [999,1001], you will see significant growth of numerical errors when using {1, x, x^2, x^3} as input basis. Converting from sum of 1, (x-1000), (x-1000)^2, (x-1000)^3 will be a better option (you have to specify 1000.0 as offset C and 1.0 as scale S). 2. power basis is ill-conditioned and tricks described above can't solve this problem completely. This function will return barycentric model in any case, but for N>8 accuracy will degrade. However, N's less than 5 are pretty safe. -- ALGLIB -- Copyright 30.09.2010 by Bochkanov Sergey *************************************************************************/
void polynomialpow2bar(const real_1d_array &a, const ae_int_t n, const double c, const double s, barycentricinterpolant &p, const xparams _xparams = alglib::xdefault); void polynomialpow2bar(const real_1d_array &a, barycentricinterpolant &p, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // Here we demonstrate polynomial interpolation and differentiation
        // of y=x^2-x sampled at [0,1,2]. Barycentric representation of polynomial is used.
        //
        real_1d_array x = "[0,1,2]";
        real_1d_array y = "[0,0,2]";
        double t = -1;
        double v;
        double dv;
        double d2v;
        barycentricinterpolant p;

        // barycentric model is created
        polynomialbuild(x, y, p);

        // barycentric interpolation is demonstrated
        v = barycentriccalc(p, t);
        printf("%.4f\n", double(v)); // EXPECTED: 2.0

        // barycentric differentiation is demonstrated
        barycentricdiff1(p, t, v, dv);
        printf("%.4f\n", double(v)); // EXPECTED: 2.0
        printf("%.4f\n", double(dv)); // EXPECTED: -3.0

        // second derivatives with barycentric representation
        barycentricdiff1(p, t, v, dv);
        printf("%.4f\n", double(v)); // EXPECTED: 2.0
        printf("%.4f\n", double(dv)); // EXPECTED: -3.0
        barycentricdiff2(p, t, v, dv, d2v);
        printf("%.4f\n", double(v)); // EXPECTED: 2.0
        printf("%.4f\n", double(dv)); // EXPECTED: -3.0
        printf("%.4f\n", double(d2v)); // EXPECTED: 2.0
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // Here we demonstrate conversion of y=x^2-x
        // between power basis and barycentric representation.
        //
        real_1d_array a = "[0,-1,+1]";
        double t = 2;
        real_1d_array a2;
        double v;
        barycentricinterpolant p;

        //
        // a=[0,-1,+1] is decomposition of y=x^2-x in the power basis:
        //
        //     y = 0 - 1*x + 1*x^2
        //
        // We convert it to the barycentric form.
        //
        polynomialpow2bar(a, p);

        // now we have barycentric interpolation; we can use it for interpolation
        v = barycentriccalc(p, t);
        printf("%.2f\n", double(v)); // EXPECTED: 2.0

        // we can also convert back from barycentric representation to power basis
        polynomialbar2pow(p, a2);
        printf("%s\n", a2.tostring(2).c_str()); // EXPECTED: [0,-1,+1]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // Temporaries:
        // * values of y=x^2-x sampled at three special grids:
        //   * equidistant grid spanning [0,2],    x[i] = 2*i/(N-1), i=0..N-1
        //   * Chebyshev-I grid spanning [-1,+1],  x[i] = Cos(PI*(2*i+1)/(2*n)), i=0..N-1
        //   * Chebyshev-II grid spanning [-1,+1], x[i] = Cos(PI*i/(n-1)), i=0..N-1
        // * barycentric interpolants for these three grids
        // * vectors to store coefficients of quadratic representation
        //
        real_1d_array y_eqdist = "[0,0,2]";
        real_1d_array y_cheb1 = "[-0.116025,0.000000,1.616025]";
        real_1d_array y_cheb2 = "[0,0,2]";
        barycentricinterpolant p_eqdist;
        barycentricinterpolant p_cheb1;
        barycentricinterpolant p_cheb2;
        real_1d_array a_eqdist;
        real_1d_array a_cheb1;
        real_1d_array a_cheb2;

        //
        // First, we demonstrate construction of barycentric interpolants on
        // special grids. We unpack power representation to ensure that
        // interpolant was built correctly.
        //
        // In all three cases we should get same quadratic function.
        //
        polynomialbuildeqdist(0.0, 2.0, y_eqdist, p_eqdist);
        polynomialbar2pow(p_eqdist, a_eqdist);
        printf("%s\n", a_eqdist.tostring(4).c_str()); // EXPECTED: [0,-1,+1]

        polynomialbuildcheb1(-1, +1, y_cheb1, p_cheb1);
        polynomialbar2pow(p_cheb1, a_cheb1);
        printf("%s\n", a_cheb1.tostring(4).c_str()); // EXPECTED: [0,-1,+1]

        polynomialbuildcheb2(-1, +1, y_cheb2, p_cheb2);
        polynomialbar2pow(p_cheb2, a_cheb2);
        printf("%s\n", a_cheb2.tostring(4).c_str()); // EXPECTED: [0,-1,+1]

        //
        // Now we demonstrate polynomial interpolation without construction 
        // of the barycentricinterpolant structure.
        //
        // We calculate interpolant value at x=-2.
        // In all three cases we should get same f=6
        //
        double t = -2;
        double v;
        v = polynomialcalceqdist(0.0, 2.0, y_eqdist, t);
        printf("%.4f\n", double(v)); // EXPECTED: 6.0

        v = polynomialcalccheb1(-1, +1, y_cheb1, t);
        printf("%.4f\n", double(v)); // EXPECTED: 6.0

        v = polynomialcalccheb2(-1, +1, y_cheb2, t);
        printf("%.4f\n", double(v)); // EXPECTED: 6.0
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

polynomialsolverreport
polynomialsolve
/************************************************************************* *************************************************************************/
class polynomialsolverreport { public: polynomialsolverreport(); polynomialsolverreport(const polynomialsolverreport &rhs); polynomialsolverreport& operator=(const polynomialsolverreport &rhs); virtual ~polynomialsolverreport(); double maxerr; };
/************************************************************************* Polynomial root finding. This function returns all roots of the polynomial P(x) = a0 + a1*x + a2*x^2 + ... + an*x^n Both real and complex roots are returned (see below). INPUT PARAMETERS: A - array[N+1], polynomial coefficients: * A[0] is constant term * A[N] is a coefficient of X^N N - polynomial degree OUTPUT PARAMETERS: X - array of complex roots: * for isolated real root, X[I] is strictly real: IMAGE(X[I])=0 * complex roots are always returned in pairs - roots occupy positions I and I+1, with: * X[I+1]=Conj(X[I]) * IMAGE(X[I]) > 0 * IMAGE(X[I+1]) = -IMAGE(X[I]) < 0 * multiple real roots may have non-zero imaginary part due to roundoff errors. There is no reliable way to distinguish real root of multiplicity 2 from two complex roots in the presence of roundoff errors. Rep - report, additional information, following fields are set: * Rep.MaxErr - max( |P(xi)| ) for i=0..N-1. This field allows you to quickly estimate "quality" of the roots being returned. NOTE: this function uses companion matrix method to find roots. In case the internal EVD solver fails to find eigenvalues, an exception is generated. NOTE: roots are not "polished" and no matrix balancing is performed for them. -- ALGLIB -- Copyright 24.02.2014 by Bochkanov Sergey *************************************************************************/
void polynomialsolve(const real_1d_array &a, const ae_int_t n, complex_1d_array &x, polynomialsolverreport &rep, const xparams _xparams = alglib::xdefault);
psi
/************************************************************************* Psi (digamma) function psi(x) = d/dx ln(Gamma(x)) is the logarithmic derivative of the gamma function. For integer x = n, psi(n) = -EUL + sum(1/k, k=1..n-1); this formula is used for 0 < n <= 10. If x is negative, it is transformed to a positive argument by the reflection formula psi(1-x) = psi(x) + pi*cot(pi*x). For general positive x, the argument is made greater than 10 using the recurrence psi(x+1) = psi(x) + 1/x, and then the asymptotic expansion psi(x) ~ log(x) - 1/(2x) - sum(B2k/(2k*x^2k), k=1..inf) is applied, where the B2k are Bernoulli numbers. ACCURACY: Relative error (except absolute when |psi| < 1): arithmetic domain # trials peak rms IEEE 0,30 30000 1.3e-15 1.4e-16 IEEE -30,0 40000 1.5e-15 2.2e-16 Cephes Math Library Release 2.8: June, 2000 Copyright 1984, 1987, 1992, 2000 by Stephen L. Moshier *************************************************************************/
double psi(const double x, const xparams _xparams = alglib::xdefault);
barycentricinterpolant
barycentricbuildfloaterhormann
barycentricbuildxyw
barycentriccalc
barycentricdiff1
barycentricdiff2
barycentriclintransx
barycentriclintransy
barycentricunpack
/************************************************************************* Barycentric interpolant. *************************************************************************/
class barycentricinterpolant { public: barycentricinterpolant(); barycentricinterpolant(const barycentricinterpolant &rhs); barycentricinterpolant& operator=(const barycentricinterpolant &rhs); virtual ~barycentricinterpolant(); };
/************************************************************************* Rational interpolant without poles The subroutine constructs the rational interpolating function without real poles (see 'Barycentric rational interpolation with no poles and high rates of approximation', Michael S. Floater and Kai Hormann, for more information on this subject). Input parameters: X - interpolation nodes, array[0..N-1]. Y - function values, array[0..N-1]. N - number of nodes, N>0. D - order of the interpolation scheme, 0 <= D <= N-1. D<0 will cause an error. D>=N will be replaced with D=N-1. If you don't know what D to choose, use a small value, about 3-5. Output parameters: B - barycentric interpolant. Note: this algorithm always succeeds and calculates the weights with close to machine precision. -- ALGLIB PROJECT -- Copyright 17.06.2007 by Bochkanov Sergey *************************************************************************/
void barycentricbuildfloaterhormann(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, const ae_int_t d, barycentricinterpolant &b, const xparams _xparams = alglib::xdefault);
/************************************************************************* Rational interpolant from X/Y/W arrays F(t) = SUM(i=0,n-1,w[i]*f[i]/(t-x[i])) / SUM(i=0,n-1,w[i]/(t-x[i])) INPUT PARAMETERS: X - interpolation nodes, array[0..N-1] F - function values, array[0..N-1] W - barycentric weights, array[0..N-1] N - nodes count, N>0 OUTPUT PARAMETERS: B - barycentric interpolant built from (X, Y, W) -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/
void barycentricbuildxyw(const real_1d_array &x, const real_1d_array &y, const real_1d_array &w, const ae_int_t n, barycentricinterpolant &b, const xparams _xparams = alglib::xdefault);
/************************************************************************* Rational interpolation using barycentric formula F(t) = SUM(i=0,n-1,w[i]*f[i]/(t-x[i])) / SUM(i=0,n-1,w[i]/(t-x[i])) Input parameters: B - barycentric interpolant built with one of model building subroutines. T - interpolation point Result: barycentric interpolant F(t) -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/
double barycentriccalc(const barycentricinterpolant &b, const double t, const xparams _xparams = alglib::xdefault);
/************************************************************************* Differentiation of barycentric interpolant: first derivative. Algorithm used in this subroutine is very robust and should not fail until provided with values too close to MaxRealNumber (usually MaxRealNumber/N or greater will overflow). INPUT PARAMETERS: B - barycentric interpolant built with one of model building subroutines. T - interpolation point OUTPUT PARAMETERS: F - barycentric interpolant at T DF - first derivative NOTE -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/
void barycentricdiff1(const barycentricinterpolant &b, const double t, double &f, double &df, const xparams _xparams = alglib::xdefault);
/************************************************************************* Differentiation of barycentric interpolant: first/second derivatives. INPUT PARAMETERS: B - barycentric interpolant built with one of model building subroutines. T - interpolation point OUTPUT PARAMETERS: F - barycentric interpolant at T DF - first derivative D2F - second derivative NOTE: this algorithm may fail due to overflow/underflow if used on data whose values are close to MaxRealNumber or MinRealNumber. Use more robust BarycentricDiff1() subroutine in such cases. -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/
void barycentricdiff2(const barycentricinterpolant &b, const double t, double &f, double &df, double &d2f, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine performs linear transformation of the argument. INPUT PARAMETERS: B - rational interpolant in barycentric form CA, CB - transformation coefficients: x = CA*t + CB OUTPUT PARAMETERS: B - transformed interpolant with X replaced by T -- ALGLIB PROJECT -- Copyright 19.08.2009 by Bochkanov Sergey *************************************************************************/
void barycentriclintransx(barycentricinterpolant &b, const double ca, const double cb, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine performs linear transformation of the barycentric interpolant. INPUT PARAMETERS: B - rational interpolant in barycentric form CA, CB - transformation coefficients: B2(x) = CA*B(x) + CB OUTPUT PARAMETERS: B - transformed interpolant -- ALGLIB PROJECT -- Copyright 19.08.2009 by Bochkanov Sergey *************************************************************************/
void barycentriclintransy(barycentricinterpolant &b, const double ca, const double cb, const xparams _xparams = alglib::xdefault);
/************************************************************************* Extracts X/Y/W arrays from rational interpolant INPUT PARAMETERS: B - barycentric interpolant OUTPUT PARAMETERS: N - nodes count, N>0 X - interpolation nodes, array[0..N-1] F - function values, array[0..N-1] W - barycentric weights, array[0..N-1] -- ALGLIB -- Copyright 17.08.2009 by Bochkanov Sergey *************************************************************************/
void barycentricunpack(const barycentricinterpolant &b, ae_int_t &n, real_1d_array &x, real_1d_array &y, real_1d_array &w, const xparams _xparams = alglib::xdefault);
rbfcalcbuffer
rbfmodel
rbfreport
rbfbuildmodel
rbfcalc
rbfcalc1
rbfcalc2
rbfcalc3
rbfcalcbuf
rbfcreate
rbfcreatecalcbuffer
rbfdiff
rbfdiff1
rbfdiff2
rbfdiff3
rbfdiffbuf
rbffastcalc
rbfgetmodelversion
rbfgridcalc2
rbfgridcalc2v
rbfgridcalc2vsubset
rbfgridcalc3v
rbfgridcalc3vsubset
rbfhess
rbfhessbuf
rbfpeekprogress
rbfrequesttermination
rbfserialize
rbfsetalgobiharmonic
rbfsetalgohierarchical
rbfsetalgomultilayer
rbfsetalgomultiquadricauto
rbfsetalgomultiquadricmanual
rbfsetalgoqnn
rbfsetalgothinplatespline
rbfsetconstterm
rbfsetfastevaltol
rbfsetlinterm
rbfsetpoints
rbfsetpointsandscales
rbfsetv2bf
rbfsetv2its
rbfsetv2supportr
rbfsetv3tol
rbfsetzeroterm
rbftscalcbuf
rbftsdiffbuf
rbftshessbuf
rbfunpack
rbfunserialize
rbf_d_hrbf Simple model built with HRBF algorithm
rbf_d_polterm RBF models - working with polynomial term
rbf_d_serialize Serialization/unserialization
rbf_d_vector Working with vector functions
/************************************************************************* Buffer object which is used to perform RBF model calculation in the multithreaded mode (multiple threads working with same RBF object). This object should be created with RBFCreateCalcBuffer(). *************************************************************************/
class rbfcalcbuffer { public: rbfcalcbuffer(); rbfcalcbuffer(const rbfcalcbuffer &rhs); rbfcalcbuffer& operator=(const rbfcalcbuffer &rhs); virtual ~rbfcalcbuffer(); };
/************************************************************************* RBF model. Never try to directly work with fields of this object - always use ALGLIB functions to use this object. *************************************************************************/
class rbfmodel { public: rbfmodel(); rbfmodel(const rbfmodel &rhs); rbfmodel& operator=(const rbfmodel &rhs); virtual ~rbfmodel(); };
/************************************************************************* RBF solution report: * TerminationType - termination type, positive values - success, non-positive - failure. Fields which are set by modern RBF solvers (hierarchical): * RMSError - root-mean-square error; NAN for old solvers (ML, QNN) * MaxError - maximum error; NAN for old solvers (ML, QNN) *************************************************************************/
class rbfreport { public: rbfreport(); rbfreport(const rbfreport &rhs); rbfreport& operator=(const rbfreport &rhs); virtual ~rbfreport(); double rmserror; double maxerror; ae_int_t arows; ae_int_t acols; ae_int_t annz; ae_int_t iterationscount; ae_int_t nmv; ae_int_t terminationtype; };
/************************************************************************* This function builds RBF model and returns report (contains some information which can be used for evaluation of the algorithm properties). Call to this function modifies RBF model by calculating its centers/radii/ weights and saving them into RBFModel structure. Initially RBFModel contains zero coefficients, but after call to this function we will have coefficients which were calculated in order to fit our dataset. After you call this function you can call RBFCalc(), RBFGridCalc() and other model calculation functions. INPUT PARAMETERS: S - RBF model, initialized by RBFCreate() call Rep - report: * Rep.TerminationType: * -5 - non-distinct basis function centers were detected, interpolation aborted; only QNN returns this error code, other algorithms can handle non-distinct nodes. * -4 - nonconvergence of the internal SVD solver * -3 - incorrect model construction algorithm was chosen: QNN or RBF-ML, combined with one of the incompatible features: * NX=1 or NX>3 * points with per-dimension scales. * 1 - successful termination * 8 - a termination request was submitted via rbfrequesttermination() function. Fields which are set only by modern RBF solvers (hierarchical or nonnegative; older solvers like QNN and ML initialize these fields by NANs): * rep.rmserror - root-mean-square error at nodes * rep.maxerror - maximum error at nodes Fields which are used for debugging purposes: * Rep.IterationsCount - iterations count of the LSQR solver * Rep.NMV - number of matrix-vector products * Rep.ARows - rows count for the system matrix * Rep.ACols - columns count for the system matrix * Rep.ANNZ - number of significantly non-zero elements (elements above some algorithm-determined threshold) NOTE: failure to build model will leave current state of the structure unchanged. -- ALGLIB -- Copyright 13.12.2011 by Bochkanov Sergey *************************************************************************/
void rbfbuildmodel(rbfmodel &s, rbfreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  

/************************************************************************* This function calculates values of the RBF model at the given point. This is a general function which can be used for arbitrary NX (dimension of the space of arguments) and NY (dimension of the function itself). However, when you have NY=1, you may find it more convenient to use rbfcalc2() or rbfcalc3(). IMPORTANT: THIS FUNCTION IS THREAD-UNSAFE. It uses fields of rbfmodel as temporary arrays, i.e. it is impossible to perform parallel evaluation on the same rbfmodel object (parallel calls of this function for independent rbfmodel objects are safe). If you want to perform parallel model evaluation from multiple threads, use rbftscalcbuf() with per-thread buffer object. This function returns 0.0 when model is not initialized. INPUT PARAMETERS: S - RBF model X - coordinates, array[NX]. X may have more than NX elements, in this case only leading NX will be used. OUTPUT PARAMETERS: Y - function value, array[NY]. Y is out-parameter and reallocated after call to this function. In case you want to reuse previously allocated Y, you may use RBFCalcBuf(), which reallocates Y only when it is too small. -- ALGLIB -- Copyright 13.12.2011 by Bochkanov Sergey *************************************************************************/
void rbfcalc(rbfmodel &s, const real_1d_array &x, real_1d_array &y, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This function calculates values of the 1-dimensional RBF model with scalar output (NY=1) at the given point. IMPORTANT: this function works only with modern (hierarchical) RBFs. It can not be used with legacy (version 1) RBFs because older RBF code does not support 1-dimensional models. IMPORTANT: THIS FUNCTION IS THREAD-UNSAFE. It uses fields of rbfmodel as temporary arrays, i.e. it is impossible to perform parallel evaluation on the same rbfmodel object (parallel calls of this function for independent rbfmodel objects are safe). If you want to perform parallel model evaluation from multiple threads, use rbftscalcbuf() with per-thread buffer object. This function returns 0.0 when: * the model is not initialized * NX<>1 * NY<>1 INPUT PARAMETERS: S - RBF model X0 - X-coordinate, finite number RESULT: value of the model or 0.0 (as defined above) -- ALGLIB -- Copyright 13.12.2011 by Bochkanov Sergey *************************************************************************/
double rbfcalc1(rbfmodel &s, const double x0, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function calculates values of the 2-dimensional RBF model with scalar output (NY=1) at the given point. IMPORTANT: THIS FUNCTION IS THREAD-UNSAFE. It uses fields of rbfmodel as temporary arrays, i.e. it is impossible to perform parallel evaluation on the same rbfmodel object (parallel calls of this function for independent rbfmodel objects are safe). If you want to perform parallel model evaluation from multiple threads, use rbftscalcbuf() with per-thread buffer object. This function returns 0.0 when: * model is not initialized * NX<>2 * NY<>1 INPUT PARAMETERS: S - RBF model X0 - first coordinate, finite number X1 - second coordinate, finite number RESULT: value of the model or 0.0 (as defined above) -- ALGLIB -- Copyright 13.12.2011 by Bochkanov Sergey *************************************************************************/
double rbfcalc2(rbfmodel &s, const double x0, const double x1, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* This function calculates values of the 3-dimensional RBF model with scalar output (NY=1) at the given point. IMPORTANT: THIS FUNCTION IS THREAD-UNSAFE. It uses fields of rbfmodel as temporary arrays, i.e. it is impossible to perform parallel evaluation on the same rbfmodel object (parallel calls of this function for independent rbfmodel objects are safe). If you want to perform parallel model evaluation from multiple threads, use rbftscalcbuf() with per-thread buffer object. This function returns 0.0 when: * model is not initialized * NX<>3 * NY<>1 INPUT PARAMETERS: S - RBF model X0 - first coordinate, finite number X1 - second coordinate, finite number X2 - third coordinate, finite number RESULT: value of the model or 0.0 (as defined above) -- ALGLIB -- Copyright 13.12.2011 by Bochkanov Sergey *************************************************************************/
double rbfcalc3(rbfmodel &s, const double x0, const double x1, const double x2, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function calculates values of the RBF model at the given point. Same as rbfcalc(), but does not reallocate Y when it is large enough to store function values. IMPORTANT: THIS FUNCTION IS THREAD-UNSAFE. It uses fields of rbfmodel as temporary arrays, i.e. it is impossible to perform parallel evaluation on the same rbfmodel object (parallel calls of this function for independent rbfmodel objects are safe). If you want to perform parallel model evaluation from multiple threads, use rbftscalcbuf() with per-thread buffer object. INPUT PARAMETERS: S - RBF model X - coordinates, array[NX]. X may have more than NX elements, in this case only leading NX will be used. Y - possibly preallocated array OUTPUT PARAMETERS: Y - function value, array[NY]. Y is not reallocated when it is larger than NY. -- ALGLIB -- Copyright 13.12.2011 by Bochkanov Sergey *************************************************************************/
void rbfcalcbuf(rbfmodel &s, const real_1d_array &x, real_1d_array &y, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function creates RBF model for a scalar (NY=1) or vector (NY>1) function in a NX-dimensional space (NX>=1). Newly created model is empty. It can be used for interpolation right after creation, but it just returns zeros. You have to add points to the model, tune interpolation settings, and then call model construction function rbfbuildmodel() which will update model according to your specification. USAGE: 1. User creates model with rbfcreate() 2. User adds dataset with rbfsetpoints() or rbfsetpointsandscales() 3. User selects RBF solver by calling: * rbfsetalgohierarchical() - for a HRBF solver, a hierarchical large- scale Gaussian RBFs (works well for uniformly distributed point clouds, but may fail when the data are non-uniform; use other solvers below in such cases) * rbfsetalgothinplatespline() - for a large-scale DDM-RBF solver with thin plate spline basis function being used * rbfsetalgobiharmonic() - for a large-scale DDM-RBF solver with biharmonic basis function being used * rbfsetalgomultiquadricauto() - for a large-scale DDM-RBF solver with multiquadric basis function being used (automatic selection of the scale parameter Alpha) * rbfsetalgomultiquadricmanual() - for a large-scale DDM-RBF solver with multiquadric basis function being used (manual selection of the scale parameter Alpha) 4. (OPTIONAL) User chooses polynomial term by calling: * rbfsetlinterm() to set linear term (default) * rbfsetconstterm() to set constant term * rbfsetzeroterm() to set zero term 5. User calls rbfbuildmodel() function which rebuilds model according to the specification INPUT PARAMETERS: NX - dimension of the space, NX>=1 NY - function dimension, NY>=1 OUTPUT PARAMETERS: S - RBF model (initially equal to zero) NOTE 1: memory requirements. RBF models require an amount of memory which is proportional to the number of data points. 
Some additional memory is allocated during model construction, but most of this memory is freed after the model coefficients are calculated. Amount of this additional memory depends on model construction algorithm being used. -- ALGLIB -- Copyright 13.12.2011, 20.06.2016 by Bochkanov Sergey *************************************************************************/
void rbfcreate(const ae_int_t nx, const ae_int_t ny, rbfmodel &s, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  [4]  

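The five-step usage sequence above can be sketched as follows. This is a minimal sketch assuming the ALGLIB headers are available; the dataset, the hierarchical solver parameters (base radius, number of layers, regularization) and the evaluation point are purely illustrative:

```cpp
#include "interpolation.h"   // ALGLIB RBF interface
#include <cstdio>

int main()
{
    // 1. Create an empty model: 2-dimensional space, scalar function
    alglib::rbfmodel model;
    alglib::rbfcreate(2, 1, model);

    // 2. Add a dataset: each row is (x0, x1, f(x0,x1))
    alglib::real_2d_array xy = "[[0,0,1],[1,0,2],[0,1,3],[1,1,5]]";
    alglib::rbfsetpoints(model, xy);

    // 3. Select a solver: hierarchical Gaussian RBFs with base radius
    //    1.0, 3 layers, no regularization (illustrative values)
    alglib::rbfsetalgohierarchical(model, 1.0, 3, 0.0);

    // 4. (skipped) the default linear polynomial term is used
    // 5. Build the model
    alglib::rbfreport rep;
    alglib::rbfbuildmodel(model, rep);

    // Evaluate the built model at an arbitrary point
    double y = alglib::rbfcalc2(model, 0.5, 0.5);
    std::printf("f(0.5,0.5) = %.4f\n", y);
    return 0;
}
```

Note that rbfcalc2() is the convenience evaluator for NX=2, NY=1 models; for other dimensions rbfcalc() with a real_1d_array argument is used instead.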
/*************************************************************************
This function creates a buffer structure which can be used to perform
parallel RBF model evaluations (with one RBF model instance being used
from multiple threads, as long as different threads use different
instances of the buffer).

This buffer object can be used with the rbftscalcbuf() function (here
"ts" stands for "thread-safe", and the "buf" suffix denotes a function
which reuses previously allocated output space).

A buffer creation function (this function) is also thread-safe, i.e. you
may safely create multiple buffers for the same RBF model from multiple
threads.

NOTE: the buffer object is just a collection of several preallocated
      dynamic arrays and precomputed values. If you delete its "parent"
      RBF model while the buffer is still alive, nothing bad will happen
      (no dangling pointers or resource leaks). The buffer will simply
      become useless.

How to use it:
* create an RBF model structure with rbfcreate()
* load data, tune parameters
* call rbfbuildmodel()
* call rbfcreatecalcbuffer(), once per thread working with the RBF model
  (you should call this function only AFTER the call to rbfbuildmodel(),
  see below for more information)
* call rbftscalcbuf() from different threads, with each thread working
  with its own copy of the buffer object
* it is recommended to reuse the buffer as much as possible because
  buffer creation involves allocation of several large dynamic arrays;
  it is a huge waste of resources to use it just once

INPUT PARAMETERS:
    S       -   RBF model

OUTPUT PARAMETERS:
    Buf     -   external buffer

IMPORTANT: the buffer object should be used only with the RBF model
           which was used to initialize it. Any attempt to use the
           buffer with a different object is dangerous - you may get a
           memory violation error because the sizes of the internal
           arrays do not fit the dimensions of the RBF structure.

IMPORTANT: you should call this function only for a model which was
           built with rbfbuildmodel(), after a successful invocation of
           rbfbuildmodel(). The sizes of some internal structures are
           determined only after the model is built, so a buffer object
           created before the model construction stage will be useless
           (and any attempt to use it will result in an exception).

  -- ALGLIB --
     Copyright 02.04.2016 by Sergey Bochkanov
*************************************************************************/
void rbfcreatecalcbuffer(const rbfmodel &s, rbfcalcbuffer &buf, const xparams _xparams = alglib::xdefault);
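The per-thread pattern described above might look like this. A hedged sketch assuming ALGLIB headers and an already built model; the thread count, loop count and evaluation points are illustrative:

```cpp
#include "interpolation.h"   // ALGLIB RBF interface
#include <thread>
#include <vector>

// Each worker owns its buffer; the model itself is shared read-only.
void worker(const alglib::rbfmodel &model, double x0, double x1)
{
    alglib::rbfcalcbuffer buf;
    alglib::rbfcreatecalcbuffer(model, buf);   // AFTER rbfbuildmodel()

    alglib::real_1d_array x = "[0,0]", y;
    for (int i = 0; i < 1000; i++)             // reuse buf across calls
    {
        x[0] = x0; x[1] = x1;
        alglib::rbftscalcbuf(model, buf, x, y);
    }
}

void evaluate_in_parallel(const alglib::rbfmodel &model)
{
    std::vector<std::thread> pool;
    for (int t = 0; t < 4; t++)
        pool.emplace_back(worker, std::cref(model), 0.1 * t, 0.2 * t);
    for (auto &th : pool)
        th.join();
}
```

Creating the buffer once per thread and reusing it inside the loop follows the recommendation above: the allocation cost is paid once, not per evaluation.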
/*************************************************************************
This function calculates values of the RBF model and its derivatives at
the given point.

This is a general function which can be used for arbitrary NX (dimension
of the space of arguments) and NY (dimension of the function itself).
However, if you have NX=3 and NY=1, you may find it more convenient to
use rbfdiff3().

IMPORTANT: THIS FUNCTION IS THREAD-UNSAFE. It uses fields of rbfmodel as
           temporary arrays, i.e. it is impossible to perform parallel
           evaluation on the same rbfmodel object (parallel calls of
           this function for independent rbfmodel objects are safe).

           If you want to perform parallel model evaluation from
           multiple threads, use rbftsdiffbuf() with a per-thread buffer
           object.

This function returns 0.0 in Y and/or DY in the following cases:
* the model is not initialized (Y=0, DY=0)
* the gradient is undefined at the trial point. Some basis functions
  have discontinuous derivatives at the interpolation nodes:
  * biharmonic splines f=r have no Hessian and no gradient at the nodes
  In these cases only DY is set to zero (Y is still returned)

INPUT PARAMETERS:
    S   -   RBF model
    X   -   coordinates, array[NX]. X may have more than NX elements, in
            this case only the leading NX will be used.

OUTPUT PARAMETERS:
    Y   -   function value, array[NY]. Y is an out-parameter and is
            reallocated by this function. In case you want to reuse a
            previously allocated Y, you may use rbfdiffbuf(), which
            reallocates Y only when it is too small.
    DY  -   derivatives, array[NY*NX]:
            * DY[I*NX+J] with 0<=I<NY and 0<=J<NX stores the derivative
              of function component I with respect to input J
            * for NY=1 it is simply the NX-dimensional gradient of the
              scalar NX-dimensional function
            DY is an out-parameter and is reallocated by this function.
            In case you want to reuse a previously allocated DY, you may
            use rbfdiffbuf(), which reallocates DY only when it is too
            small to store the result.

  -- ALGLIB --
     Copyright 13.12.2021 by Bochkanov Sergey
*************************************************************************/
void rbfdiff(rbfmodel &s, const real_1d_array &x, real_1d_array &y, real_1d_array &dy, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function calculates the value and derivatives of the 1-dimensional
RBF model with scalar output (NY=1) at the given point.

IMPORTANT: THIS FUNCTION IS THREAD-UNSAFE. It uses fields of rbfmodel as
           temporary arrays, i.e. it is impossible to perform parallel
           evaluation on the same rbfmodel object (parallel calls of
           this function for independent rbfmodel objects are safe).

           If you want to perform parallel model evaluation from
           multiple threads, use rbftscalcbuf() with a per-thread buffer
           object.

This function returns 0.0 in Y and/or DY in the following cases:
* the model is not initialized (Y=0, DY=0)
* NX<>1 or NY<>1 (Y=0, DY=0)
* the gradient is undefined at the trial point. Some basis functions
  have discontinuous derivatives at the interpolation nodes:
  * biharmonic splines f=r have no Hessian and no gradient at the nodes
  In these cases only DY is set to zero (Y is still returned)

INPUT PARAMETERS:
    S   -   RBF model
    X0  -   first coordinate, finite number

OUTPUT PARAMETERS:
    Y   -   value of the model or 0.0 (as defined above)
    DY0 -   derivative with respect to X0

  -- ALGLIB --
     Copyright 13.12.2021 by Bochkanov Sergey
*************************************************************************/
void rbfdiff1(rbfmodel &s, const double x0, double &y, double &dy0, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function calculates the value and derivatives of the 2-dimensional
RBF model with scalar output (NY=1) at the given point.

IMPORTANT: THIS FUNCTION IS THREAD-UNSAFE. It uses fields of rbfmodel as
           temporary arrays, i.e. it is impossible to perform parallel
           evaluation on the same rbfmodel object (parallel calls of
           this function for independent rbfmodel objects are safe).

           If you want to perform parallel model evaluation from
           multiple threads, use rbftscalcbuf() with a per-thread buffer
           object.

This function returns 0.0 in Y and/or DY in the following cases:
* the model is not initialized (Y=0, DY=0)
* NX<>2 or NY<>1 (Y=0, DY=0)
* the gradient is undefined at the trial point. Some basis functions
  have discontinuous derivatives at the interpolation nodes:
  * biharmonic splines f=r have no Hessian and no gradient at the nodes
  In these cases only DY is set to zero (Y is still returned)

INPUT PARAMETERS:
    S   -   RBF model
    X0  -   first coordinate, finite number
    X1  -   second coordinate, finite number

OUTPUT PARAMETERS:
    Y   -   value of the model or 0.0 (as defined above)
    DY0 -   derivative with respect to X0
    DY1 -   derivative with respect to X1

  -- ALGLIB --
     Copyright 13.12.2021 by Bochkanov Sergey
*************************************************************************/
void rbfdiff2(rbfmodel &s, const double x0, const double x1, double &y, double &dy0, double &dy1, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function calculates the value and derivatives of the 3-dimensional
RBF model with scalar output (NY=1) at the given point.

IMPORTANT: THIS FUNCTION IS THREAD-UNSAFE. It uses fields of rbfmodel as
           temporary arrays, i.e. it is impossible to perform parallel
           evaluation on the same rbfmodel object (parallel calls of
           this function for independent rbfmodel objects are safe).

           If you want to perform parallel model evaluation from
           multiple threads, use rbftscalcbuf() with a per-thread buffer
           object.

This function returns 0.0 in Y and/or DY in the following cases:
* the model is not initialized (Y=0, DY=0)
* NX<>3 or NY<>1 (Y=0, DY=0)
* the gradient is undefined at the trial point. Some basis functions
  have discontinuous derivatives at the interpolation nodes:
  * biharmonic splines f=r have no Hessian and no gradient at the nodes
  In these cases only DY is set to zero (Y is still returned)

INPUT PARAMETERS:
    S   -   RBF model
    X0  -   first coordinate, finite number
    X1  -   second coordinate, finite number
    X2  -   third coordinate, finite number

OUTPUT PARAMETERS:
    Y   -   value of the model or 0.0 (as defined above)
    DY0 -   derivative with respect to X0
    DY1 -   derivative with respect to X1
    DY2 -   derivative with respect to X2

  -- ALGLIB --
     Copyright 13.12.2021 by Bochkanov Sergey
*************************************************************************/
void rbfdiff3(rbfmodel &s, const double x0, const double x1, const double x2, double &y, double &dy0, double &dy1, double &dy2, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function calculates values of the RBF model and its derivatives at
the given point. It is a buffered version of rbfdiff() which tries to
reuse the possibly preallocated output arrays Y/DY as much as possible.

This is a general function which can be used for arbitrary NX (dimension
of the space of arguments) and NY (dimension of the function itself).
However, if you have NX=1, 2 or 3 and NY=1, you may find it more
convenient to use rbfdiff1(), rbfdiff2() or rbfdiff3().

IMPORTANT: THIS FUNCTION IS THREAD-UNSAFE. It uses fields of rbfmodel as
           temporary arrays, i.e. it is impossible to perform parallel
           evaluation on the same rbfmodel object (parallel calls of
           this function for independent rbfmodel objects are safe).

           If you want to perform parallel model evaluation from
           multiple threads, use rbftsdiffbuf() with a per-thread buffer
           object.

This function returns 0.0 in Y and/or DY in the following cases:
* the model is not initialized (Y=0, DY=0)
* the gradient is undefined at the trial point. Some basis functions
  have discontinuous derivatives at the interpolation nodes:
  * biharmonic splines f=r have no Hessian and no gradient at the nodes
  In these cases only DY is set to zero (Y is still returned)

INPUT PARAMETERS:
    S       -   RBF model
    X       -   coordinates, array[NX]. X may have more than NX
                elements, in this case only the leading NX will be used.
    Y, DY   -   possibly preallocated arrays; if the array size is large
                enough to store the results, this function does not
                reallocate the array to fit the output size exactly.

OUTPUT PARAMETERS:
    Y       -   function value, array[NY].
    DY      -   derivatives, array[NY*NX]:
                * DY[I*NX+J] with 0<=I<NY and 0<=J<NX stores the
                  derivative of function component I with respect to
                  input J
                * for NY=1 it is simply the NX-dimensional gradient of
                  the scalar NX-dimensional function

  -- ALGLIB --
     Copyright 13.12.2021 by Bochkanov Sergey
*************************************************************************/
void rbfdiffbuf(rbfmodel &s, const real_1d_array &x, real_1d_array &y, real_1d_array &dy, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function calculates values of the RBF model at the given point
using a fast approximate algorithm whenever possible. If no fast
algorithm is available for a given model type, the traditional O(N)
approach is used.

Presently, fast evaluation is implemented only for biharmonic splines.
The absolute approximation accuracy is controlled by the
rbfsetfastevaltol() function.

IMPORTANT: THIS FUNCTION IS THREAD-UNSAFE. It uses fields of rbfmodel as
           temporary arrays, i.e. it is impossible to perform parallel
           evaluation on the same rbfmodel object (parallel calls of
           this function for independent rbfmodel objects are safe).

           If you want to perform parallel model evaluation from
           multiple threads, use rbftscalcbuf() with a per-thread buffer
           object.

This function returns 0.0 when the model is not initialized.

INPUT PARAMETERS:
    S   -   RBF model
    X   -   coordinates, array[NX]. X may have more than NX elements, in
            this case only the leading NX will be used.

OUTPUT PARAMETERS:
    Y   -   function value, array[NY]. Y is an out-parameter and is
            reallocated by this function. In case you want to reuse a
            previously allocated Y, you may use rbfcalcbuf(), which
            reallocates Y only when it is too small.

  -- ALGLIB --
     Copyright 19.09.2022 by Bochkanov Sergey
*************************************************************************/
void rbffastcalc(rbfmodel &s, const real_1d_array &x, real_1d_array &y, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function returns the model version.

INPUT PARAMETERS:
    S   -   RBF model

RESULT:
    * 1 - for models created by the QNN and RBF-ML algorithms,
          compatible with ALGLIB 3.10 or earlier.
    * 2 - for models created by HierarchicalRBF, requires ALGLIB 3.11 or
          later

  -- ALGLIB --
     Copyright 06.07.2016 by Bochkanov Sergey
*************************************************************************/
ae_int_t rbfgetmodelversion(rbfmodel &s, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a legacy function for gridded calculation of the RBF model. It
is superseded by the rbfgridcalc2v() and rbfgridcalc2vsubset()
functions.

  -- ALGLIB --
     Copyright 13.12.2011 by Bochkanov Sergey
*************************************************************************/
void rbfgridcalc2(rbfmodel &s, const real_1d_array &x0, const ae_int_t n0, const real_1d_array &x1, const ae_int_t n1, real_2d_array &y, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function calculates values of the RBF model at the regular grid,
which has N0*N1 points, with Point[I,J] = (X0[I], X1[J]). Vector-valued
RBF models are supported.

This function returns 0.0 when:
* the model is not initialized
* NX<>2

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes the following important
! improvements of this function:
! * high-performance native backend with the same C# interface (C#
!   version)
! * multithreading support (C++ and C# versions)
!
! We recommend you to read the 'Working with commercial version' section
! of the ALGLIB Reference Manual in order to find out how to use the
! performance-related features provided by the commercial edition of
! ALGLIB.

NOTE: Parallel processing is implemented only for modern (hierarchical)
      RBFs. Legacy version 1 RBFs (created by QNN or RBF-ML) are still
      processed serially.

INPUT PARAMETERS:
    S       -   RBF model, used in read-only mode, can be shared between
                multiple invocations of this function from multiple
                threads.
    X0      -   array of grid nodes, first coordinates, array[N0].
                Must be sorted in ascending order. An exception is
                generated if the array is not correctly ordered.
    N0      -   grid size (number of nodes) in the first dimension
    X1      -   array of grid nodes, second coordinates, array[N1].
                Must be sorted in ascending order. An exception is
                generated if the array is not correctly ordered.
    N1      -   grid size (number of nodes) in the second dimension

OUTPUT PARAMETERS:
    Y       -   function values, array[NY*N0*N1], where NY is the number
                of "output" vector values (this function supports
                vector-valued RBF models). Y is an out-variable and is
                reallocated by this function.
                Y[K+NY*(I0+I1*N0)]=F_k(X0[I0],X1[I1]), for:
                * K=0...NY-1
                * I0=0...N0-1
                * I1=0...N1-1

NOTE: this function supports weakly ordered grid nodes, i.e. you may
      have X[i]=X[i+1] for some i. It does not provide you any
      performance benefits due to duplication of points, just
      convenience and flexibility.

NOTE: this function is re-entrant, i.e. you may use the same rbfmodel
      structure in multiple threads calling this function for different
      grids.

NOTE: if you need function values on some subset of a regular grid,
      which may be described as "several compact and dense islands", you
      may use rbfgridcalc2vsubset().

  -- ALGLIB --
     Copyright 27.01.2017 by Bochkanov Sergey
*************************************************************************/
void rbfgridcalc2v(const rbfmodel &s, const real_1d_array &x0, const ae_int_t n0, const real_1d_array &x1, const ae_int_t n1, real_1d_array &y, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function calculates values of the RBF model at some subset of a
regular grid:
* the grid has N0*N1 points, with Point[I,J] = (X0[I], X1[J])
* only values at some subset of this grid are required

Vector-valued RBF models are supported.

This function returns 0.0 when:
* the model is not initialized
* NX<>2

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes the following important
! improvements of this function:
! * high-performance native backend with the same C# interface (C#
!   version)
! * multithreading support (C++ and C# versions)
!
! We recommend you to read the 'Working with commercial version' section
! of the ALGLIB Reference Manual in order to find out how to use the
! performance-related features provided by the commercial edition of
! ALGLIB.

NOTE: Parallel processing is implemented only for modern (hierarchical)
      RBFs. Legacy version 1 RBFs (created by QNN or RBF-ML) are still
      processed serially.

INPUT PARAMETERS:
    S       -   RBF model, used in read-only mode, can be shared between
                multiple invocations of this function from multiple
                threads.
    X0      -   array of grid nodes, first coordinates, array[N0].
                Must be sorted in ascending order. An exception is
                generated if the array is not correctly ordered.
    N0      -   grid size (number of nodes) in the first dimension
    X1      -   array of grid nodes, second coordinates, array[N1].
                Must be sorted in ascending order. An exception is
                generated if the array is not correctly ordered.
    N1      -   grid size (number of nodes) in the second dimension
    FlagY   -   array[N0*N1]:
                * FlagY[I0+I1*N0] corresponds to node (X0[I0],X1[I1])
                * it is a "bitmap" array which contains False for nodes
                  which are NOT calculated, and True for nodes which are
                  required.

OUTPUT PARAMETERS:
    Y       -   function values, array[NY*N0*N1], where NY is the number
                of "output" vector values (this function supports
                vector-valued RBF models):
                * Y[K+NY*(I0+I1*N0)]=F_k(X0[I0],X1[I1]), for K=0...NY-1,
                  I0=0...N0-1, I1=0...N1-1.
                * elements of Y[] which correspond to FlagY[]=True are
                  loaded with model values (which may be exactly zero
                  for some nodes)
                * elements of Y[] which correspond to FlagY[]=False MAY
                  be initialized by zeros OR may be calculated. This
                  function processes the grid as a hierarchy of nested
                  blocks and micro-rows. If just one element of a
                  micro-row is required, the entire micro-row (up to 8
                  nodes in the current version, but no promises) is
                  calculated.

NOTE: this function supports weakly ordered grid nodes, i.e. you may
      have X[i]=X[i+1] for some i. It does not provide you any
      performance benefits due to duplication of points, just
      convenience and flexibility.

NOTE: this function is re-entrant, i.e. you may use the same rbfmodel
      structure in multiple threads calling this function for different
      grids.

  -- ALGLIB --
     Copyright 04.03.2016 by Bochkanov Sergey
*************************************************************************/
void rbfgridcalc2vsubset(const rbfmodel &s, const real_1d_array &x0, const ae_int_t n0, const real_1d_array &x1, const ae_int_t n1, const boolean_1d_array &flagy, real_1d_array &y, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function calculates values of the RBF model at the regular grid,
which has N0*N1*N2 points, with Point[I,J,K] = (X0[I], X1[J], X2[K]).
Vector-valued RBF models are supported.

This function returns 0.0 when:
* the model is not initialized
* NX<>3

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes the following important
! improvements of this function:
! * high-performance native backend with the same C# interface (C#
!   version)
! * multithreading support (C++ and C# versions)
!
! We recommend you to read the 'Working with commercial version' section
! of the ALGLIB Reference Manual in order to find out how to use the
! performance-related features provided by the commercial edition of
! ALGLIB.

NOTE: Parallel processing is implemented only for modern (hierarchical)
      RBFs. Legacy version 1 RBFs (created by QNN or RBF-ML) are still
      processed serially.

INPUT PARAMETERS:
    S       -   RBF model, used in read-only mode, can be shared between
                multiple invocations of this function from multiple
                threads.
    X0      -   array of grid nodes, first coordinates, array[N0].
                Must be sorted in ascending order. An exception is
                generated if the array is not correctly ordered.
    N0      -   grid size (number of nodes) in the first dimension
    X1      -   array of grid nodes, second coordinates, array[N1].
                Must be sorted in ascending order. An exception is
                generated if the array is not correctly ordered.
    N1      -   grid size (number of nodes) in the second dimension
    X2      -   array of grid nodes, third coordinates, array[N2].
                Must be sorted in ascending order. An exception is
                generated if the array is not correctly ordered.
    N2      -   grid size (number of nodes) in the third dimension

OUTPUT PARAMETERS:
    Y       -   function values, array[NY*N0*N1*N2], where NY is the
                number of "output" vector values (this function supports
                vector-valued RBF models). Y is an out-variable and is
                reallocated by this function.
                Y[K+NY*(I0+I1*N0+I2*N0*N1)]=F_k(X0[I0],X1[I1],X2[I2]),
                for:
                * K=0...NY-1
                * I0=0...N0-1
                * I1=0...N1-1
                * I2=0...N2-1

NOTE: this function supports weakly ordered grid nodes, i.e. you may
      have X[i]=X[i+1] for some i. It does not provide you any
      performance benefits due to duplication of points, just
      convenience and flexibility.

NOTE: this function is re-entrant, i.e. you may use the same rbfmodel
      structure in multiple threads calling this function for different
      grids.

NOTE: if you need function values on some subset of a regular grid,
      which may be described as "several compact and dense islands", you
      may use rbfgridcalc3vsubset().

  -- ALGLIB --
     Copyright 04.03.2016 by Bochkanov Sergey
*************************************************************************/
void rbfgridcalc3v(const rbfmodel &s, const real_1d_array &x0, const ae_int_t n0, const real_1d_array &x1, const ae_int_t n1, const real_1d_array &x2, const ae_int_t n2, real_1d_array &y, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function calculates values of the RBF model at some subset of a
regular grid:
* the grid has N0*N1*N2 points, with Point[I,J,K] = (X0[I], X1[J],
  X2[K])
* only values at some subset of this grid are required

Vector-valued RBF models are supported.

This function returns 0.0 when:
* the model is not initialized
* NX<>3

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes the following important
! improvements of this function:
! * high-performance native backend with the same C# interface (C#
!   version)
! * multithreading support (C++ and C# versions)
!
! We recommend you to read the 'Working with commercial version' section
! of the ALGLIB Reference Manual in order to find out how to use the
! performance-related features provided by the commercial edition of
! ALGLIB.

NOTE: Parallel processing is implemented only for modern (hierarchical)
      RBFs. Legacy version 1 RBFs (created by QNN or RBF-ML) are still
      processed serially.

INPUT PARAMETERS:
    S       -   RBF model, used in read-only mode, can be shared between
                multiple invocations of this function from multiple
                threads.
    X0      -   array of grid nodes, first coordinates, array[N0].
                Must be sorted in ascending order. An exception is
                generated if the array is not correctly ordered.
    N0      -   grid size (number of nodes) in the first dimension
    X1      -   array of grid nodes, second coordinates, array[N1].
                Must be sorted in ascending order. An exception is
                generated if the array is not correctly ordered.
    N1      -   grid size (number of nodes) in the second dimension
    X2      -   array of grid nodes, third coordinates, array[N2].
                Must be sorted in ascending order. An exception is
                generated if the array is not correctly ordered.
    N2      -   grid size (number of nodes) in the third dimension
    FlagY   -   array[N0*N1*N2]:
                * FlagY[I0+I1*N0+I2*N0*N1] corresponds to node
                  (X0[I0],X1[I1],X2[I2])
                * it is a "bitmap" array which contains False for nodes
                  which are NOT calculated, and True for nodes which are
                  required.

OUTPUT PARAMETERS:
    Y       -   function values, array[NY*N0*N1*N2], where NY is the
                number of "output" vector values (this function supports
                vector-valued RBF models):
                * Y[K+NY*(I0+I1*N0+I2*N0*N1)]=F_k(X0[I0],X1[I1],X2[I2]),
                  for K=0...NY-1, I0=0...N0-1, I1=0...N1-1, I2=0...N2-1.
                * elements of Y[] which correspond to FlagY[]=True are
                  loaded with model values (which may be exactly zero
                  for some nodes)
                * elements of Y[] which correspond to FlagY[]=False MAY
                  be initialized by zeros OR may be calculated. This
                  function processes the grid as a hierarchy of nested
                  blocks and micro-rows. If just one element of a
                  micro-row is required, the entire micro-row (up to 8
                  nodes in the current version, but no promises) is
                  calculated.

NOTE: this function supports weakly ordered grid nodes, i.e. you may
      have X[i]=X[i+1] for some i. It does not provide you any
      performance benefits due to duplication of points, just
      convenience and flexibility.

NOTE: this function is re-entrant, i.e. you may use the same rbfmodel
      structure in multiple threads calling this function for different
      grids.

  -- ALGLIB --
     Copyright 04.03.2016 by Bochkanov Sergey
*************************************************************************/
void rbfgridcalc3vsubset(const rbfmodel &s, const real_1d_array &x0, const ae_int_t n0, const real_1d_array &x1, const ae_int_t n1, const real_1d_array &x2, const ae_int_t n2, const boolean_1d_array &flagy, real_1d_array &y, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function calculates values of the RBF model and its first and
second derivatives (Hessian matrix) at the given point.

This function supports both scalar (NY=1) and vector-valued (NY>1) RBFs.

IMPORTANT: THIS FUNCTION IS THREAD-UNSAFE. It uses fields of rbfmodel as
           temporary arrays, i.e. it is impossible to perform parallel
           evaluation on the same rbfmodel object (parallel calls of
           this function for independent rbfmodel objects are safe).

           If you want to perform parallel model evaluation from
           multiple threads, use rbftshessbuf() with a per-thread buffer
           object.

This function returns 0 in Y and/or DY and/or D2Y in the following
cases:
* the model is not initialized (Y=0, DY=0, D2Y=0)
* the gradient and/or Hessian is undefined at the trial point. Some
  basis functions have discontinuous derivatives at the interpolation
  nodes:
  * thin plate splines have no Hessian at the nodes
  * biharmonic splines f=r have no Hessian and no gradient at the nodes
  In these cases only the corresponding derivative is set to zero, and
  the rest of the derivatives is still returned.

INPUT PARAMETERS:
    S   -   RBF model
    X   -   coordinates, array[NX]. X may have more than NX elements, in
            this case only the leading NX will be used.

OUTPUT PARAMETERS:
    Y   -   function value, array[NY]. Y is an out-parameter and is
            reallocated by this function. In case you want to reuse a
            previously allocated Y, you may use rbfhessbuf(), which
            reallocates Y only when it is too small.
    DY  -   first derivatives, array[NY*NX]:
            * DY[I*NX+J] with 0<=I<NY and 0<=J<NX stores the derivative
              of function component I with respect to input J
            * for NY=1 it is simply the NX-dimensional gradient of the
              scalar NX-dimensional function
            DY is an out-parameter and is reallocated by this function.
            In case you want to reuse a previously allocated DY, you may
            use rbfhessbuf(), which reallocates DY only when it is too
            small to store the result.
    D2Y -   second derivatives, array[NY*NX*NX]:
            * for NY=1 it is an NX*NX array that stores the Hessian
              matrix, with D2Y[I*NX+J]=D2Y[J*NX+I].
            * for a vector-valued RBF with NY>1 it contains NY
              subsequently stored Hessians: an element
              D2Y[K*NX*NX+I*NX+J] with 0<=K<NY, 0<=I<NX and 0<=J<NX
              stores the second derivative of function #K with respect
              to inputs #I and #J.
            D2Y is an out-parameter and is reallocated by this function.
            In case you want to reuse a previously allocated D2Y, you
            may use rbfhessbuf(), which reallocates D2Y only when it is
            too small to store the result.

  -- ALGLIB --
     Copyright 13.12.2021 by Bochkanov Sergey
*************************************************************************/
void rbfhess(rbfmodel &s, const real_1d_array &x, real_1d_array &y, real_1d_array &dy, real_1d_array &d2y, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function calculates values of the RBF model and its first and
second derivatives (Hessian matrix) at the given point. It is a buffered
version that reuses memory allocated in the output buffers Y/DY/D2Y as
much as possible.

This function supports both scalar (NY=1) and vector-valued (NY>1) RBFs.

IMPORTANT: THIS FUNCTION IS THREAD-UNSAFE. It uses fields of rbfmodel as
           temporary arrays, i.e. it is impossible to perform parallel
           evaluation on the same rbfmodel object (parallel calls of
           this function for independent rbfmodel objects are safe).

           If you want to perform parallel model evaluation from
           multiple threads, use rbftshessbuf() with a per-thread buffer
           object.

This function returns 0 in Y and/or DY and/or D2Y in the following
cases:
* the model is not initialized (Y=0, DY=0, D2Y=0)
* the gradient and/or Hessian is undefined at the trial point. Some
  basis functions have discontinuous derivatives at the interpolation
  nodes:
  * thin plate splines have no Hessian at the nodes
  * biharmonic splines f=r have no Hessian and no gradient at the nodes
  In these cases only the corresponding derivative is set to zero, and
  the rest of the derivatives is still returned.

INPUT PARAMETERS:
    S       -   RBF model
    X       -   coordinates, array[NX]. X may have more than NX
                elements, in this case only the leading NX will be used.
    Y,DY,D2Y-   possibly preallocated output arrays. If these arrays are
                smaller than required to store the result, they are
                automatically reallocated. If an array is large enough,
                it is not resized.

OUTPUT PARAMETERS:
    Y       -   function value, array[NY].
    DY      -   first derivatives, array[NY*NX]:
                * DY[I*NX+J] with 0<=I<NY and 0<=J<NX stores the
                  derivative of function component I with respect to
                  input J
                * for NY=1 it is simply the NX-dimensional gradient of
                  the scalar NX-dimensional function
    D2Y     -   second derivatives, array[NY*NX*NX]:
                * for NY=1 it is an NX*NX array that stores the Hessian
                  matrix, with D2Y[I*NX+J]=D2Y[J*NX+I].
                * for a vector-valued RBF with NY>1 it contains NY
                  subsequently stored Hessians: an element
                  D2Y[K*NX*NX+I*NX+J] with 0<=K<NY, 0<=I<NX and 0<=J<NX
                  stores the second derivative of function #K with
                  respect to inputs #I and #J.

  -- ALGLIB --
     Copyright 13.12.2021 by Bochkanov Sergey
*************************************************************************/
void rbfhessbuf(rbfmodel &s, const real_1d_array &x, real_1d_array &y, real_1d_array &dy, real_1d_array &d2y, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function is used to peek into the hierarchical RBF construction
process from some other thread and get the current progress indicator.
It returns a value in [0,1].

IMPORTANT: only HRBFs (hierarchical RBFs) support peeking into the
           progress indicator. Legacy RBF-ML and RBF-QNN do not support
           it; for them you will always get a 0 value.

INPUT PARAMETERS:
    S   -   RBF model object

RESULT:
    progress value, in [0,1]

  -- ALGLIB --
     Copyright 17.11.2018 by Bochkanov Sergey
*************************************************************************/
double rbfpeekprogress(const rbfmodel &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function is used to submit a request for termination of the hierarchical RBF construction process from some other thread. As a result, RBF construction is terminated smoothly (with proper deallocation of all necessary resources) and the resultant model is filled by zeros. A rep.terminationtype=8 will be returned upon receiving such a request. IMPORTANT: only HRBFs (hierarchical RBFs) support termination requests. Legacy RBF-ML and RBF-QNN do not support it; an attempt to terminate their construction will be ignored. IMPORTANT: the termination request flag is cleared when the model construction starts. Thus, any pre-construction termination requests will be silently ignored - only ones submitted AFTER construction has actually begun will be handled. INPUT PARAMETERS: S - RBF model object -- ALGLIB -- Copyright 17.11.2018 by Bochkanov Sergey *************************************************************************/
void rbfrequesttermination(rbfmodel &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function serializes data structure to string/stream. Important properties of s_out: * it contains alphanumeric characters, dots, underscores, minus signs * these symbols are grouped into words, which are separated by spaces and Windows-style (CR+LF) newlines * although serializer uses spaces and CR+LF as separators, you can replace any separator character by arbitrary combination of spaces, tabs, Windows or Unix newlines. It allows flexible reformatting of the string in case you want to include it into a text or XML file. But you should not insert separators into the middle of the "words" nor should you change the case of letters. * s_out can be freely moved between 32-bit and 64-bit systems, little and big endian machines, and so on. You can serialize structure on 32-bit machine and unserialize it on 64-bit one (or vice versa), or serialize it on SPARC and unserialize on x86. You can also serialize it in C++ version of ALGLIB and unserialize it in C# one, and vice versa. *************************************************************************/
void rbfserialize(const rbfmodel &obj, std::string &s_out); void rbfserialize(const rbfmodel &obj, std::ostream &s_out);
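A sketch of a serialization round trip through a portable string, using ALGLIB's matching rbfunserialize() deserializer; the header path, dataset and algorithm choice are illustrative assumptions:

```cpp
#include "interpolation.h"   // ALGLIB interpolation package; adjust path to your setup
#include <string>
using namespace alglib;

int main()
{
    // Build a trivial model to serialize (illustrative dataset)
    rbfmodel model;
    rbfreport rep;
    rbfcreate(2, 1, model);
    real_2d_array xy = "[[0,0,1],[1,1,2]]";
    rbfsetpoints(model, xy);
    rbfsetalgohierarchical(model, 1.0, 2, 0.0);
    rbfbuildmodel(model, rep);

    // Serialize to a string and restore into a fresh object. The string
    // survives 32/64-bit, endianness and even language (C++/C#) boundaries.
    std::string s;
    rbfserialize(model, s);
    rbfmodel restored;
    rbfunserialize(s, restored);
    // 'restored' now evaluates identically to 'model'
    return 0;
}
```

Note that only the model itself is serialized; the dataset attached via rbfsetpoints() is not stored in the string.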
/************************************************************************* This function chooses a biharmonic DDM-RBF solver, a fast RBF solver with f(r)=r as a basis function. This algorithm has following important features: * no tunable parameters * C0 continuous RBF model (the model has discontinuous derivatives at the interpolation nodes) * fast model construction algorithm with O(N) memory and O(N*logN) running time requirements. Hundreds of thousands of points can be handled with this algorithm. * accelerated evaluation using far field expansions (aka fast multipoles method) is supported. See rbffastcalc() for more information. * controllable smoothing via optional nonlinearity penalty INPUT PARAMETERS: S - RBF model, initialized by rbfcreate() call LambdaV - smoothing parameter, LambdaV>=0, defaults to 0.0: * LambdaV=0 means that no smoothing is applied, i.e. the spline tries to pass through all dataset points exactly * LambdaV>0 means that a multiquadric spline is built with larger LambdaV corresponding to models with less nonlinearities. Smoothing spline reproduces target values at nodes with small error; from the other side, it is much more stable. Recommended values: * 1.0E-6 for minimal stability improving smoothing * 1.0E-3 a good value to start experiments; first results are visible * 1.0 for strong smoothing IMPORTANT: this model construction algorithm was introduced in ALGLIB 3.19 and produces models which are INCOMPATIBLE with previous versions of ALGLIB. You can not unserialize models produced with this function in ALGLIB 3.18 or earlier. NOTE: polyharmonic RBFs, including thin plate splines, are somewhat slower than compactly supported RBFs built with HRBF algorithm due to the fact that non-compact basis function does not vanish far away from the nodes. From the other side, polyharmonic RBFs often produce much better results than HRBFs. 
NOTE: this algorithm supports specification of per-dimensional radii via scale vector, which is set by means of rbfsetpointsandscales() function. This feature is useful if you solve spatio-temporal interpolation problems where different radii are required for spatial and temporal dimensions. -- ALGLIB -- Copyright 12.12.2021 by Bochkanov Sergey *************************************************************************/
void rbfsetalgobiharmonic(rbfmodel &s, const double lambdav, const xparams _xparams = alglib::xdefault); void rbfsetalgobiharmonic(rbfmodel &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function chooses the HRBF solver, a 2nd version of ALGLIB RBFs. This algorithm is called Hierarchical RBF. It is similar to its previous incarnation, RBF-ML, i.e. it also builds a sequence of models with decreasing radii. However, it uses a more economical way of building the upper layers (ones with large radii), which results in faster model construction and evaluation, as well as a smaller memory footprint during construction. This algorithm has the following important features: * ability to handle millions of points * controllable smoothing via nonlinearity penalization * support for specification of per-dimensional radii via a scale vector, which is set by means of the rbfsetpointsandscales() function. This feature is useful if you solve spatio-temporal interpolation problems, where different radii are required for spatial and temporal dimensions. Running times are roughly proportional to: * N*log(N)*NLayers - for the model construction * N*NLayers - for the model evaluation You may see that running time does not depend on search radius or points density, just on the number of layers in the hierarchy. INPUT PARAMETERS: S - RBF model, initialized by rbfcreate() call RBase - RBase parameter, RBase>0 NLayers - NLayers parameter, NLayers>0, recommended value to start with - about 5. LambdaNS- >=0, nonlinearity penalty coefficient, negative values are not allowed. This parameter adds controllable smoothing to the problem, which may reduce noise. Specification of a non-zero lambda means that in addition to the fitting error the solver will also minimize LambdaNS*|S''(x)|^2 (appropriately generalized to multiple dimensions). Specification of an exactly zero value means that no penalty is added (we do not even evaluate the matrix of second derivatives which is necessary for smoothing). Calculation of the nonlinearity penalty is costly - it results in a several-fold increase of model construction time. Evaluation time remains the same. 
Optimal lambda is problem-dependent and requires trial and error. A good value to start from is 1e-5...1e-6, which corresponds to slightly noticeable smoothing of the function. A value of 1e-2 usually means that quite heavy smoothing is applied. TUNING ALGORITHM In order to use this algorithm you have to choose three parameters: * initial radius RBase * number of layers in the model NLayers * penalty coefficient LambdaNS The initial radius is easy to choose - you can pick any number several times larger than the average distance between points. The algorithm won't break down if you choose a radius which is too large (model construction time will increase, but the model will be built correctly). Choose such a number of layers that RLast=RBase/2^(NLayers-1) (the radius used by the last layer) will be smaller than the typical distance between points. In case the model error is too large, you can increase the number of layers. Having more layers will make model construction and evaluation proportionally slower, but it will allow you to have a model which precisely fits your data. From the other side, if you want to suppress noise, you can DECREASE the number of layers to make your model less flexible (or specify non-zero LambdaNS). TYPICAL ERRORS 1. Using too few layers - RBF models with large radius are not flexible enough to reproduce small variations in the target function. You need many layers with different radii, from large to small, in order to have a good model. 2. Using an initial radius which is too small. You will get a model with "holes" in the areas which are too far away from the interpolation centers. However, the algorithm will work correctly (and quickly) in this case. -- ALGLIB -- Copyright 20.06.2016 by Bochkanov Sergey *************************************************************************/
void rbfsetalgohierarchical(rbfmodel &s, const double rbase, const ae_int_t nlayers, const double lambdans, const xparams _xparams = alglib::xdefault);
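The tuning recipe above can be sketched as follows; the header path, dataset, radius and layer count are illustrative assumptions (pick RBase several times larger than the average point spacing of your own data):

```cpp
#include "interpolation.h"   // ALGLIB interpolation package; adjust path to your setup
using namespace alglib;

int main()
{
    rbfmodel model;
    rbfreport rep;
    rbfcreate(2, 1, model);
    real_2d_array xy = "[[0,0,0],[1,0,1],[0,1,1],[1,1,0],[0.5,0.5,0.5]]";
    rbfsetpoints(model, xy);

    // RBase=2.0 (several times the average spacing of ~1), NLayers=4,
    // so the last layer radius is RLast = 2.0/2^3 = 0.25, below the
    // typical distance between points. LambdaNS=0 - no smoothing penalty.
    rbfsetalgohierarchical(model, 2.0, 4, 0.0);
    rbfbuildmodel(model, rep);
    // rep.terminationtype>0 indicates successful construction
    return 0;
}
```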

Examples:   [1]  [2]  [3]  

/************************************************************************* DEPRECATED: this function is deprecated. ALGLIB includes new RBF model construction algorithms: DDM-RBF (since version 3.19) and HRBF (since version 3.11). -- ALGLIB -- Copyright 02.03.2012 by Bochkanov Sergey *************************************************************************/
void rbfsetalgomultilayer(rbfmodel &s, const double rbase, const ae_int_t nlayers, const double lambdav, const xparams _xparams = alglib::xdefault); void rbfsetalgomultilayer(rbfmodel &s, const double rbase, const ae_int_t nlayers, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function chooses a multiquadric DDM-RBF solver, a fast RBF solver with f(r)=sqrt(r^2+Alpha^2) as a basis function, with Alpha being automatically determined. This algorithm has following important features: * easy setup - no need to tune Alpha, good value is automatically assigned * C2 continuous RBF model * fast model construction algorithm with O(N) memory and O(N^2) running time requirements. Hundreds of thousands of points can be handled with this algorithm. * controllable smoothing via optional nonlinearity penalty This algorithm automatically selects Alpha as a mean distance to the nearest neighbor (ignoring neighbors that are too close). INPUT PARAMETERS: S - RBF model, initialized by rbfcreate() call LambdaV - smoothing parameter, LambdaV>=0, defaults to 0.0: * LambdaV=0 means that no smoothing is applied, i.e. the spline tries to pass through all dataset points exactly * LambdaV>0 means that a multiquadric spline is built with larger LambdaV corresponding to models with less nonlinearities. Smoothing spline reproduces target values at nodes with small error; from the other side, it is much more stable. Recommended values: * 1.0E-6 for minimal stability improving smoothing * 1.0E-3 a good value to start experiments; first results are visible * 1.0 for strong smoothing IMPORTANT: this model construction algorithm was introduced in ALGLIB 3.19 and produces models which are INCOMPATIBLE with previous versions of ALGLIB. You can not unserialize models produced with this function in ALGLIB 3.18 or earlier. NOTE: polyharmonic RBFs, including thin plate splines, are somewhat slower than compactly supported RBFs built with HRBF algorithm due to the fact that non-compact basis function does not vanish far away from the nodes. From the other side, polyharmonic RBFs often produce much better results than HRBFs. 
NOTE: this algorithm supports specification of per-dimensional radii via scale vector, which is set by means of rbfsetpointsandscales() function. This feature is useful if you solve spatio-temporal interpolation problems where different radii are required for spatial and temporal dimensions. -- ALGLIB -- Copyright 12.12.2021 by Bochkanov Sergey *************************************************************************/
void rbfsetalgomultiquadricauto(rbfmodel &s, const double lambdav, const xparams _xparams = alglib::xdefault); void rbfsetalgomultiquadricauto(rbfmodel &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function chooses a multiquadric DDM-RBF solver, a fast RBF solver with f(r)=sqrt(r^2+Alpha^2) as a basis function, with manual choice of the scale parameter Alpha. This algorithm has following important features: * C2 continuous RBF model (when Alpha>0 is used; for Alpha=0 the model is merely C0 continuous) * fast model construction algorithm with O(N) memory and O(N^2) running time requirements. Hundreds of thousands of points can be handled with this algorithm. * controllable smoothing via optional nonlinearity penalty One important point is that this algorithm includes tunable parameter Alpha, which should be carefully chosen. Selecting too large value will result in extremely badly conditioned problems (interpolation accuracy may degrade up to complete breakdown) whilst selecting too small value may produce models that are precise but nearly nonsmooth at the nodes. Good value to start from is mean distance between nodes. Generally, choosing too small Alpha is better than choosing too large - in the former case you still have model that reproduces target values at the nodes. In most cases, better option is to choose good Alpha automatically - it is done by another version of the same algorithm that is activated by calling rbfsetalgomultiquadricauto() method. INPUT PARAMETERS: S - RBF model, initialized by rbfcreate() call Alpha - basis function parameter, Alpha>=0: * Alpha>0 means that multiquadric algorithm is used which produces C2-continuous RBF model * Alpha=0 means that the multiquadric kernel effectively becomes a biharmonic one: f=r. As a result, the model becomes nonsmooth at nodes, and hence is C0 continuous LambdaV - smoothing parameter, LambdaV>=0, defaults to 0.0: * LambdaV=0 means that no smoothing is applied, i.e. 
the spline tries to pass through all dataset points exactly * LambdaV>0 means that a multiquadric spline is built with larger LambdaV corresponding to models with less nonlinearities. Smoothing spline reproduces target values at nodes with small error; from the other side, it is much more stable. Recommended values: * 1.0E-6 for minimal stability improving smoothing * 1.0E-3 a good value to start experiments; first results are visible * 1.0 for strong smoothing IMPORTANT: this model construction algorithm was introduced in ALGLIB 3.19 and produces models which are INCOMPATIBLE with previous versions of ALGLIB. You can not unserialize models produced with this function in ALGLIB 3.18 or earlier. NOTE: polyharmonic RBFs, including thin plate splines, are somewhat slower than compactly supported RBFs built with HRBF algorithm due to the fact that non-compact basis function does not vanish far away from the nodes. From the other side, polyharmonic RBFs often produce much better results than HRBFs. NOTE: this algorithm supports specification of per-dimensional radii via scale vector, which is set by means of rbfsetpointsandscales() function. This feature is useful if you solve spatio-temporal interpolation problems where different radii are required for spatial and temporal dimensions. -- ALGLIB -- Copyright 12.12.2021 by Bochkanov Sergey *************************************************************************/
void rbfsetalgomultiquadricmanual(rbfmodel &s, const double alpha, const double lambdav, const xparams _xparams = alglib::xdefault); void rbfsetalgomultiquadricmanual(rbfmodel &s, const double alpha, const xparams _xparams = alglib::xdefault);
/************************************************************************* DEPRECATED: this function is deprecated. ALGLIB includes new RBF model construction algorithms: DDM-RBF (since version 3.19) and HRBF (since version 3.11). -- ALGLIB -- Copyright 13.12.2011 by Bochkanov Sergey *************************************************************************/
void rbfsetalgoqnn(rbfmodel &s, const double q, const double z, const xparams _xparams = alglib::xdefault); void rbfsetalgoqnn(rbfmodel &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function chooses a thin plate spline DDM-RBF solver, a fast RBF solver with f(r)=r^2*ln(r) basis function. This algorithm has following important features: * easy setup - no tunable parameters * C1 continuous RBF model (gradient is defined everywhere, but Hessian is undefined at nodes), high-quality interpolation * fast model construction algorithm with O(N) memory and O(N^2) running time requirements. Hundreds of thousands of points can be handled with this algorithm. * controllable smoothing via optional nonlinearity penalty INPUT PARAMETERS: S - RBF model, initialized by rbfcreate() call LambdaV - smoothing parameter, LambdaV>=0, defaults to 0.0: * LambdaV=0 means that no smoothing is applied, i.e. the spline tries to pass through all dataset points exactly * LambdaV>0 means that a smoothing thin plate spline is built, with larger LambdaV corresponding to models with less nonlinearities. Smoothing spline reproduces target values at nodes with small error; from the other side, it is much more stable. Recommended values: * 1.0E-6 for minimal stability improving smoothing * 1.0E-3 a good value to start experiments; first results are visible * 1.0 for strong smoothing IMPORTANT: this model construction algorithm was introduced in ALGLIB 3.19 and produces models which are INCOMPATIBLE with previous versions of ALGLIB. You can not unserialize models produced with this function in ALGLIB 3.18 or earlier. NOTE: polyharmonic RBFs, including thin plate splines, are somewhat slower than compactly supported RBFs built with HRBF algorithm due to the fact that non-compact basis function does not vanish far away from the nodes. From the other side, polyharmonic RBFs often produce much better results than HRBFs. NOTE: this algorithm supports specification of per-dimensional radii via scale vector, which is set by means of rbfsetpointsandscales() function. 
This feature is useful if you solve spatio-temporal interpolation problems where different radii are required for spatial and temporal dimensions. -- ALGLIB -- Copyright 12.12.2021 by Bochkanov Sergey *************************************************************************/
void rbfsetalgothinplatespline(rbfmodel &s, const double lambdav, const xparams _xparams = alglib::xdefault); void rbfsetalgothinplatespline(rbfmodel &s, const xparams _xparams = alglib::xdefault);
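A sketch of fitting a smoothing thin plate spline; LambdaV=1.0E-3 follows the "good value to start experiments" recommendation above, and the header path and noisy dataset are illustrative assumptions:

```cpp
#include "interpolation.h"   // ALGLIB interpolation package; adjust path to your setup
using namespace alglib;

int main()
{
    rbfmodel model;
    rbfreport rep;
    rbfcreate(2, 1, model);
    // Slightly noisy samples of a smooth target
    real_2d_array xy = "[[0,0,1.02],[1,0,1.98],[0,1,2.01],[1,1,2.97]]";
    rbfsetpoints(model, xy);

    // Thin plate spline with mild smoothing; no other parameters to tune
    rbfsetalgothinplatespline(model, 1.0E-3);
    rbfbuildmodel(model, rep);

    double v = rbfcalc2(model, 0.5, 0.5);   // evaluate the smoothed model
    (void)v;
    return 0;
}
```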
/************************************************************************* This function sets constant term (model is a sum of radial basis functions plus constant). This function won't have effect until next call to RBFBuildModel(). IMPORTANT: thin plate splines require polynomial term to be linear, not constant, in order to provide interpolation guarantees. Although failures are exceptionally rare, some small toy problems may result in degenerate linear systems. Thus, it is advised to use linear term when one fits data with TPS. INPUT PARAMETERS: S - RBF model, initialized by RBFCreate() call -- ALGLIB -- Copyright 13.12.2011 by Bochkanov Sergey *************************************************************************/
void rbfsetconstterm(rbfmodel &s, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This function sets absolute accuracy of a fast evaluation algorithm used by rbffastcalc() and other fast evaluation functions. A fast evaluation algorithm is model-dependent and is available only for some RBF models. Usually it utilizes far field expansions (a generalization of the fast multipoles method). If no approximate fast evaluator is available for the current RBF model type, this function has no effect. NOTE: this function can be called before or after the model was built. The result will be the same. NOTE: this function has O(N) running time, where N is a points count. Most fast evaluators work by aggregating influence of point groups, i.e. by computing so called far field. Changing evaluator tolerance means that far field radii have to be recomputed for each point cluster, and we have O(N) such clusters. This function is still very fast, but it should not be called too often, e.g. every time you call rbffastcalc() in a loop. NOTE: the tolerance set by this function is an accuracy of an evaluator which computes the value of the model. It is NOT accuracy of the model itself. E.g., if you set evaluation accuracy to 1E-12, the model value will be computed with required precision. However, the model itself is an approximation of the target (the default requirement is to fit model with ~6 digits of precision) and THIS accuracy can not be changed after the model was built. IMPORTANT: THIS FUNCTION IS THREAD-UNSAFE. Calling it while another thread tries to use rbffastcalc() is unsafe because it means that the accuracy requirements will change in the middle of computations. The algorithm may behave unpredictably. INPUT PARAMETERS: S - RBF model TOL - TOL>0, desired evaluation tolerance: * should be somewhere between 1E-3 and 1E-6 * values outside of this range will cause no problems (the evaluator will do the job anyway). 
However, too strict precision requirements may mean that no approximation speed-up will be achieved. -- ALGLIB -- Copyright 19.09.2022 by Bochkanov Sergey *************************************************************************/
void rbfsetfastevaltol(rbfmodel &s, const double tol, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets linear term (model is a sum of radial basis functions plus linear polynomial). This function won't have effect until next call to RBFBuildModel(). Using linear term is a default option and it is the best one - it provides best convergence guarantees for all RBF model types: legacy RBF-QNN and RBF-ML, Gaussian HRBFs and all types of DDM-RBF models. Other options, like constant or zero term, work for HRBFs, almost always work for DDM-RBFs but provide no stability guarantees in the latter case (e.g. the solver may fail on some carefully prepared problems). INPUT PARAMETERS: S - RBF model, initialized by RBFCreate() call -- ALGLIB -- Copyright 13.12.2011 by Bochkanov Sergey *************************************************************************/
void rbfsetlinterm(rbfmodel &s, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This function adds dataset. This function overrides results of the previous calls, i.e. multiple calls of this function will result in only the last set being added. IMPORTANT: ALGLIB version 3.11 and later allows you to specify a set of per-dimension scales. Interpolation radii are multiplied by the scale vector. It may be useful if you have mixed spatio-temporal data (say, a set of 3D slices recorded at different times). You should call rbfsetpointsandscales() function to use this feature. INPUT PARAMETERS: S - RBF model, initialized by rbfcreate() call. XY - points, array[N,NX+NY]. One row corresponds to one point in the dataset. First NX elements are coordinates, next NY elements are function values. Array may be larger than specified, in this case only leading [N,NX+NY] elements will be used. N - number of points in the dataset After you've added dataset and (optionally) tuned algorithm settings you should call rbfbuildmodel() in order to build a model for you. NOTE: dataset added by this function is not saved during model serialization. MODEL ITSELF is serialized, but data used to build it are not. So, if you 1) add dataset to empty RBF model, 2) serialize and unserialize it, then you will get an empty RBF model with no dataset being attached. From the other side, if you call rbfbuildmodel() between (1) and (2), then after (2) you will get your fully constructed RBF model - but again with no dataset attached, so subsequent calls to rbfbuildmodel() will produce empty model. -- ALGLIB -- Copyright 13.12.2011 by Bochkanov Sergey *************************************************************************/
void rbfsetpoints(rbfmodel &s, const real_2d_array &xy, const ae_int_t n, const xparams _xparams = alglib::xdefault); void rbfsetpoints(rbfmodel &s, const real_2d_array &xy, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  

/************************************************************************* This function adds a dataset and a vector of per-dimension scales. It may be useful if you have mixed spatio-temporal data - say, a set of 3D slices recorded at different times. Such data typically require different RBF radii for spatial and temporal dimensions. ALGLIB solves this problem by specifying a single RBF radius, which is (optionally) multiplied by the scale vector. This function overrides the results of the previous calls, i.e. multiple calls of this function will result in only the last set being added. IMPORTANT: only modern RBF algorithms support variable scaling. Legacy algorithms like RBF-ML or RBF-QNN will result in a -3 completion code being returned (incorrect algorithm). INPUT PARAMETERS: R - RBF model, initialized by rbfcreate() call. XY - points, array[N,NX+NY]. One row corresponds to one point in the dataset. First NX elements are coordinates, next NY elements are function values. The array may be larger than specified, in this case only the leading [N,NX+NY] elements will be used. N - number of points in the dataset S - array[NX], scale vector, S[i]>0. After you've added the dataset and (optionally) tuned the algorithm settings you should call rbfbuildmodel() in order to build a model for you. NOTE: the dataset added by this function is not saved during model serialization. The MODEL ITSELF is serialized, but the data used to build it are not. So, if you 1) add a dataset to an empty RBF model, 2) serialize and unserialize it, then you will get an empty RBF model with no dataset attached. From the other side, if you call rbfbuildmodel() between (1) and (2), then after (2) you will get your fully constructed RBF model - but again with no dataset attached, so subsequent calls to rbfbuildmodel() will produce an empty model. -- ALGLIB -- Copyright 20.06.2016 by Bochkanov Sergey *************************************************************************/
void rbfsetpointsandscales(rbfmodel &r, const real_2d_array &xy, const ae_int_t n, const real_1d_array &s, const xparams _xparams = alglib::xdefault); void rbfsetpointsandscales(rbfmodel &r, const real_2d_array &xy, const real_1d_array &s, const xparams _xparams = alglib::xdefault);
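A sketch of a spatio-temporal setup: two spatial dimensions plus time (NX=3, NY=1), with the temporal axis given a larger effective radius via the scale vector; the header path and all numbers are illustrative assumptions:

```cpp
#include "interpolation.h"   // ALGLIB interpolation package; adjust path to your setup
using namespace alglib;

int main()
{
    rbfmodel model;
    rbfreport rep;
    rbfcreate(3, 1, model);   // (x, y, t) -> f

    // Rows: x, y, t, f(x,y,t)
    real_2d_array xy = "[[0,0,0,1],[1,0,0,2],[0,1,1,2],[1,1,1,3]]";
    real_1d_array s  = "[1.0,1.0,4.0]";   // stretch radii 4x along the time axis
    rbfsetpointsandscales(model, xy, s);

    // Variable scaling requires a modern algorithm (HRBF or a DDM-RBF);
    // legacy RBF-ML/RBF-QNN would return the -3 completion code.
    rbfsetalgohierarchical(model, 1.0, 3, 0.0);
    rbfbuildmodel(model, rep);
    return 0;
}
```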
/************************************************************************* This function sets basis function type, which can be: * 0 for classic Gaussian * 1 for fast and compact bell-like basis function, which becomes exactly zero at distance equal to 3*R (default option). INPUT PARAMETERS: S - RBF model, initialized by RBFCreate() call BF - basis function type: * 0 - classic Gaussian * 1 - fast and compact one -- ALGLIB -- Copyright 01.02.2017 by Bochkanov Sergey *************************************************************************/
void rbfsetv2bf(rbfmodel &s, const ae_int_t bf, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets stopping criteria of the underlying linear solver for hierarchical (version 2) RBF constructor. INPUT PARAMETERS: S - RBF model, initialized by RBFCreate() call MaxIts - this criterion will stop algorithm after MaxIts iterations. Typically a few hundreds iterations is required, with 400 being a good default value to start experimentation. Zero value means that default value will be selected. -- ALGLIB -- Copyright 01.02.2017 by Bochkanov Sergey *************************************************************************/
void rbfsetv2its(rbfmodel &s, const ae_int_t maxits, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets the support radius parameter of the hierarchical (version 2) RBF constructor. The hierarchical RBF model achieves a great speed-up by removing excessive (too dense) nodes from the model. Say, if you have an RBF radius equal to 1 meter, and two nodes are just 1 millimeter apart, you may remove one of them without reducing model quality. The support radius parameter is used to decide which points need removal, and which do not. If two points are less than SUPPORT_R*CUR_RADIUS units of distance apart, one of them is removed from the model. The larger the support radius is, the faster model construction AND evaluation are. However, too large values result in "bumpy" models. INPUT PARAMETERS: S - RBF model, initialized by RBFCreate() call R - support radius coefficient, >=0. Recommended values are in the [0.1,0.4] range, with 0.1 being the default value. -- ALGLIB -- Copyright 01.02.2017 by Bochkanov Sergey *************************************************************************/
void rbfsetv2supportr(rbfmodel &s, const double r, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets the desired accuracy for a version 3 RBF model. As of ALGLIB 3.20.0, version 3 models include biharmonic RBFs, thin plate splines and multiquadrics. Version 3 models are fit with a specialized domain decomposition method which splits the problem into smaller chunks. Models with size less than the DDM chunk size are computed nearly exactly in one step. Larger models are built with an iterative linear solver. This function controls the accuracy of the solver. INPUT PARAMETERS: S - RBF model, initialized by RBFCreate() call TOL - desired precision: * must be non-negative * should be somewhere between 0.001 and 0.000001 * values higher than 0.001 make little sense - you may lose a lot of precision with no performance gains. * values below 1E-6 usually require too much time to converge, so they are silently replaced by a 1E-6 cutoff value. Thus, zero can be used to denote 'maximum precision'. -- ALGLIB -- Copyright 01.10.2022 by Bochkanov Sergey *************************************************************************/
void rbfsetv3tol(rbfmodel &s, const double tol, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets the zero term (the model is a sum of radial basis functions without a polynomial term). This function won't have effect until the next call to RBFBuildModel(). IMPORTANT: only Gaussian RBFs (HRBF algorithm) provide interpolation guarantees when no polynomial term is used. Most other RBFs, including biharmonic splines, thin plate splines and multiquadrics, require at least a constant term (biharmonic and multiquadric) or a linear one (thin plate splines) in order to guarantee non-degeneracy of the linear systems being solved. Although failures are exceptionally rare, some small toy problems still may result in degenerate linear systems. Thus, it is advised to use a constant/linear term, unless one is 100% sure that the zero term is needed. INPUT PARAMETERS: S - RBF model, initialized by RBFCreate() call -- ALGLIB -- Copyright 13.12.2011 by Bochkanov Sergey *************************************************************************/
void rbfsetzeroterm(rbfmodel &s, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This function calculates values of the RBF model at the given point, using an external buffer object (internal temporaries of the RBF model are not modified). This function allows using the same RBF model object in different threads, assuming that different threads use different instances of the buffer structure. INPUT PARAMETERS: S - RBF model, may be shared between different threads Buf - buffer object created for this particular instance of the RBF model with rbfcreatecalcbuffer(). X - coordinates, array[NX]. X may have more than NX elements, in this case only the leading NX will be used. Y - possibly preallocated array OUTPUT PARAMETERS: Y - function value, array[NY]. Y is not reallocated when it is larger than NY. -- ALGLIB -- Copyright 13.12.2011 by Bochkanov Sergey *************************************************************************/
void rbftscalcbuf(const rbfmodel &s, rbfcalcbuffer &buf, const real_1d_array &x, real_1d_array &y, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function calculates values of the RBF model and its derivatives at
the given point, using an external buffer object (internal temporaries of
the RBF model are not modified).

This function allows you to use the same RBF model object in different
threads, assuming that different threads use different instances of the
buffer structure.

This function returns 0.0 in Y and/or DY in the following cases:
* the model is not initialized (Y=0, DY=0)
* the gradient is undefined at the trial point. Some basis functions have
  discontinuous derivatives at the interpolation nodes:
  * biharmonic splines f=r have no Hessian and no gradient at the nodes
  In these cases only DY is set to zero (Y is still returned)

INPUT PARAMETERS:
    S       -   RBF model, may be shared between different threads
    Buf     -   buffer object created for this particular instance of the
                RBF model with rbfcreatecalcbuffer().
    X       -   coordinates, array[NX]. X may have more than NX elements,
                in this case only the leading NX will be used.
    Y, DY   -   possibly preallocated arrays; if an array is large enough
                to store the result, this function does not reallocate it
                to fit the output size exactly.

OUTPUT PARAMETERS:
    Y       -   function value, array[NY].
    DY      -   derivatives, array[NX*NY]:
                * DY[I*NX+J] with 0<=I<NY and 0<=J<NX stores the
                  derivative of function component I with respect to
                  input J.
                * for NY=1 it is simply the NX-dimensional gradient of the
                  scalar NX-dimensional function
                Zero is returned when the first derivative is undefined.

  -- ALGLIB --
     Copyright 13.12.2021 by Bochkanov Sergey
*************************************************************************/
void rbftsdiffbuf(const rbfmodel &s, rbfcalcbuffer &buf, const real_1d_array &x, real_1d_array &y, real_1d_array &dy, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function calculates values of the RBF model and its first and second
derivatives (Hessian matrix) at the given point, using an external buffer
object (internal temporaries of the RBF model are not modified).

This function allows you to use the same RBF model object in different
threads, assuming that different threads use different instances of the
buffer structure.

This function returns 0 in Y and/or DY and/or D2Y in the following cases:
* the model is not initialized (Y=0, DY=0, D2Y=0)
* the gradient and/or Hessian is undefined at the trial point. Some basis
  functions have discontinuous derivatives at the interpolation nodes:
  * thin plate splines have no Hessian at the nodes
  * biharmonic splines f=r have no Hessian and no gradient at the nodes
  In these cases only the corresponding derivative is set to zero, and the
  rest of the derivatives are still returned.

INPUT PARAMETERS:
    S       -   RBF model, may be shared between different threads
    Buf     -   buffer object created for this particular instance of the
                RBF model with rbfcreatecalcbuffer().
    X       -   coordinates, array[NX]. X may have more than NX elements,
                in this case only the leading NX will be used.
    Y,DY,D2Y-   possibly preallocated output arrays. If these arrays are
                smaller than required to store the result, they are
                automatically reallocated. If an array is large enough, it
                is not resized.

OUTPUT PARAMETERS:
    Y       -   function value, array[NY].
    DY      -   first derivatives, array[NY*NX]:
                * DY[I*NX+J] with 0<=I<NY and 0<=J<NX stores the
                  derivative of function component I with respect to
                  input J.
                * for NY=1 it is simply the NX-dimensional gradient of the
                  scalar NX-dimensional function
                Zero is returned when the first derivative is undefined.
    D2Y     -   second derivatives, array[NY*NX*NX]:
                * for NY=1 it is an NX*NX array that stores the Hessian
                  matrix, with D2Y[I*NX+J]=D2Y[J*NX+I].
                * for a vector-valued RBF with NY>1 it contains NY
                  sequentially stored Hessians: an element
                  D2Y[K*NX*NX+I*NX+J] with 0<=K<NY, 0<=I<NX and 0<=J<NX
                  stores the second derivative of function #K with
                  respect to inputs #I and #J.
                Zero is returned when the second derivative is undefined.

  -- ALGLIB --
     Copyright 13.12.2021 by Bochkanov Sergey
*************************************************************************/
void rbftshessbuf(const rbfmodel &s, rbfcalcbuffer &buf, const real_1d_array &x, real_1d_array &y, real_1d_array &dy, real_1d_array &d2y, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function "unpacks" an RBF model by extracting its coefficients.

INPUT PARAMETERS:
    S       -   RBF model

OUTPUT PARAMETERS:
    NX      -   dimensionality of argument
    NY      -   dimensionality of the target function
    XWR     -   model information, 2D array. One row of the array
                corresponds to one basis function.

                For ModelVersion=1 we have NX+NY+1 columns:
                * first NX columns  -   coordinates of the center
                * next NY columns   -   weights, one per dimension of the
                                        function being modeled
                * last column       -   radius, same for all dimensions of
                                        the function being modeled

                For ModelVersion=2 we have NX+NY+NX columns:
                * first NX columns  -   coordinates of the center
                * next NY columns   -   weights, one per dimension of the
                                        function being modeled
                * last NX columns   -   radii, one per dimension

                For ModelVersion=3 we have NX+NY+NX+3 columns:
                * first NX columns  -   coordinates of the center
                * next NY columns   -   weights, one per dimension of the
                                        function being modeled
                * next NX columns   -   radii, one per dimension
                * next column       -   basis function type:
                                        * 1  for f=r
                                        * 2  for f=r^2*ln(r)
                                        * 10 for multiquadric
                                             f=sqrt(r^2+alpha^2)
                * next column       -   basis function parameter:
                                        * alpha, for basis function type 10
                                        * ignored (zero) for other types
                * next column       -   point index in the original
                                        dataset, or -1 for an artificial
                                        node created by the solver. The
                                        algorithm may reorder the nodes,
                                        drop some nodes or add artificial
                                        ones. Thus, code parsing this
                                        column should expect all these
                                        kinds of alterations in the
                                        dataset.
    NC      -   number of the centers
    V       -   polynomial term, array[NY,NX+1]. One row per dimension of
                the function being modeled. The first NX elements are the
                linear coefficients, V[NX] is the constant part.
    ModelVersion -
                version of the RBF model:
                * 1 - for models created by the QNN and RBF-ML algorithms,
                      compatible with ALGLIB 3.10 or earlier.
                * 2 - for models created by HierarchicalRBF, requires
                      ALGLIB 3.11 or later
                * 3 - for models created by DDM-RBF, requires ALGLIB 3.19
                      or later

  -- ALGLIB --
     Copyright 13.12.2011 by Bochkanov Sergey
*************************************************************************/
void rbfunpack(rbfmodel &s, ae_int_t &nx, ae_int_t &ny, real_2d_array &xwr, ae_int_t &nc, real_2d_array &v, ae_int_t &modelversion, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This function unserializes a data structure from a string/stream.
*************************************************************************/
void rbfunserialize(const std::string &s_in, rbfmodel &obj);
void rbfunserialize(const std::istream &s_in, rbfmodel &obj);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example illustrates basic concepts of the RBF models: creation, modification,
        // evaluation.
        // 
        // Suppose that we have a set of 2-dimensional points with associated
        // scalar function values, and we want to build an RBF model using
        // our data.
        // 
        // NOTE: we can work with 3D models too :)
        // 
        // Typical sequence of steps is given below:
        // 1. we create RBF model object
        // 2. we attach our dataset to the RBF model and tune algorithm settings
        // 3. we build the RBF model with the hierarchical algorithm
        // 4. we use RBF model (evaluate, serialize, etc.)
        //
        double v;

        //
        // Step 1: RBF model creation.
        //
        // We have to specify dimensionality of the space (2 or 3) and
        // dimensionality of the function (scalar or vector).
        //
        // A new model is empty - it can be evaluated,
        // but we just get a zero value at any point.
        //
        rbfmodel model;
        rbfcreate(2, 1, model);

        v = rbfcalc2(model, 0.0, 0.0);
        printf("%.2f\n", double(v)); // EXPECTED: 0.000

        //
        // Step 2: we add dataset.
        //
        // XY contains two points - x0=(-1,0) and x1=(+1,0) -
        // and two function values f(x0)=2, f(x1)=3.
        //
        // We added points, but the model was not rebuilt yet.
        // If we call rbfcalc2(), we will still get 0.0 as the result.
        //
        real_2d_array xy = "[[-1,0,2],[+1,0,3]]";
        rbfsetpoints(model, xy);

        v = rbfcalc2(model, 0.0, 0.0);
        printf("%.2f\n", double(v)); // EXPECTED: 0.000

        //
        // Step 3: rebuild model
        //
        // After we've configured model, we should rebuild it -
        // it will change coefficients stored internally in the
        // rbfmodel structure.
        //
        // We use the hierarchical RBF algorithm with the following parameters:
        // * RBase - set to 1.0
        // * NLayers - three layers are used (although such a simple problem
        //   does not need more than 1 layer)
        // * LambdaReg - set to zero, no smoothing is required
        //
        rbfreport rep;
        rbfsetalgohierarchical(model, 1.0, 3, 0.0);
        rbfbuildmodel(model, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1

        //
        // Step 4: model was built
        //
        // After the call to rbfbuildmodel(), rbfcalc2() will return
        // the value of the new model.
        //
        v = rbfcalc2(model, 0.0, 0.0);
        printf("%.2f\n", double(v)); // EXPECTED: 2.500
    }
    catch(const alglib::ap_error &alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example shows how to work with the polynomial term
        // 
        // Suppose that we have a set of 2-dimensional points with associated
        // scalar function values, and we want to build an RBF model using
        // our data.
        //
        // We use the hierarchical RBF algorithm with the following parameters:
        // * RBase - set to 1.0
        // * NLayers - three layers are used (although such a simple problem
        //   does not need more than 1 layer)
        // * LambdaReg - set to zero, no smoothing is required
        //
        double v;
        rbfmodel model;
        real_2d_array xy = "[[-1,0,2],[+1,0,3]]";
        rbfreport rep;

        rbfcreate(2, 1, model);
        rbfsetpoints(model, xy);
        rbfsetalgohierarchical(model, 1.0, 3, 0.0);

        //
        // By default, the RBF model uses a linear term. This means that
        // the model looks like
        //     f(x,y) = SUM(RBF[i]) + a*x + b*y + c
        // where RBF[i] is the I-th radial basis function and a*x+b*y+c is
        // a linear term. Having linear terms in a model gives us:
        // (1) improved extrapolation properties
        // (2) linearity of the model when data can be perfectly fitted
        //     by the linear function
        // (3) linear asymptotic behavior
        //
        // Our simple dataset can be modelled by the linear function
        //     f(x,y) = 0.5*x + 2.5
        // and rbfbuildmodel() with default settings should preserve this
        // linearity.
        //
        ae_int_t nx;
        ae_int_t ny;
        ae_int_t nc;
        ae_int_t modelversion;
        real_2d_array xwr;
        real_2d_array c;
        rbfbuildmodel(model, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        rbfunpack(model, nx, ny, xwr, nc, c, modelversion);
        printf("%s\n", c.tostring(2).c_str()); // EXPECTED: [[0.500,0.000,2.500]]

        // asymptotic behavior of our function is linear
        v = rbfcalc2(model, 1000.0, 0.0);
        printf("%.2f\n", double(v)); // EXPECTED: 502.50

        //
        // Instead of a linear term we can use a constant term. In this case
        // we will get a model of the form
        //     f(x,y) = SUM(RBF[i]) + c
        // where RBF[i] is the I-th radial basis function and c is a constant
        // equal to the average function value on the dataset.
        //
        // Because we've already attached the dataset to the model, the only
        // thing we have to do is to call rbfsetconstterm() and then
        // rebuild the model with rbfbuildmodel().
        //
        rbfsetconstterm(model);
        rbfbuildmodel(model, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        rbfunpack(model, nx, ny, xwr, nc, c, modelversion);
        printf("%s\n", c.tostring(2).c_str()); // EXPECTED: [[0.000,0.000,2.500]]

        // asymptotic behavior of our function is constant
        v = rbfcalc2(model, 1000.0, 0.0);
        printf("%.2f\n", double(v)); // EXPECTED: 2.500

        //
        // Finally, we can use a zero term - just a plain RBF without a
        // polynomial part:
        //     f(x,y) = SUM(RBF[i])
        // where RBF[i] is the I-th radial basis function.
        //
        rbfsetzeroterm(model);
        rbfbuildmodel(model, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1
        rbfunpack(model, nx, ny, xwr, nc, c, modelversion);
        printf("%s\n", c.tostring(2).c_str()); // EXPECTED: [[0.000,0.000,0.000]]

        // asymptotic behavior of our function is just zero constant
        v = rbfcalc2(model, 1000.0, 0.0);
        printf("%.2f\n", double(v)); // EXPECTED: 0.000
    }
    catch(const alglib::ap_error &alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example shows how to serialize and unserialize an RBF model
        // 
        // Suppose that we have a set of 2-dimensional points with associated
        // scalar function values, and we want to build an RBF model using
        // our data. Then we want to serialize it to a string and to
        // unserialize it from that string, loading it into another instance
        // of the RBF model.
        //
        // Here we assume that you already know how to create RBF models.
        //
        std::string s;
        double v;
        rbfmodel model0;
        rbfmodel model1;
        real_2d_array xy = "[[-1,0,2],[+1,0,3]]";
        rbfreport rep;

        // model initialization
        rbfcreate(2, 1, model0);
        rbfsetpoints(model0, xy);
        rbfsetalgohierarchical(model0, 1.0, 3, 0.0);
        rbfbuildmodel(model0, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1

        //
        // Serialization - it looks easy,
        // but you should carefully read the next section.
        //
        alglib::rbfserialize(model0, s);
        alglib::rbfunserialize(s, model1);

        // both models return same value
        v = rbfcalc2(model0, 0.0, 0.0);
        printf("%.2f\n", double(v)); // EXPECTED: 2.500
        v = rbfcalc2(model1, 0.0, 0.0);
        printf("%.2f\n", double(v)); // EXPECTED: 2.500

        //
        // Previous section shows that model state is saved/restored during
        // serialization. However, some properties are NOT serialized.
        //
        // Serialization saves/restores the RBF model, but it does NOT
        // save/restore the settings which were used to build the current
        // model. In particular, the dataset which was used to build the
        // model is not preserved.
        //
        // What does this mean for us?
        //
        // Do you remember this sequence: rbfcreate-rbfsetpoints-rbfbuildmodel?
        // First step creates model, second step adds dataset and tunes model
        // settings, third step builds model using current dataset and model
        // construction settings.
        //
        // If you call rbfbuildmodel() without calling rbfsetpoints() first,
        // you will get an empty (zero) RBF model. In our example, model0
        // contains the dataset which was added by the rbfsetpoints() call.
        // However, model1 does NOT contain a dataset - because the dataset
        // is NOT serialized.
        //
        // Thus, if we call rbfbuildmodel(model0,rep), we will get the same
        // model, which returns 2.5 at (x,y)=(0,0). However, after the same call model1 will
        // return zero - because it contains RBF model (coefficients), but does NOT
        // contain dataset which was used to build this model.
        //
        // Basically, it means that:
        // * serialization of the RBF model preserves anything related to the model
        //   EVALUATION
        // * but it does NOT create a perfect copy of the original object.
        //
        rbfbuildmodel(model0, rep);
        v = rbfcalc2(model0, 0.0, 0.0);
        printf("%.2f\n", double(v)); // EXPECTED: 2.500

        rbfbuildmodel(model1, rep);
        v = rbfcalc2(model1, 0.0, 0.0);
        printf("%.2f\n", double(v)); // EXPECTED: 0.000
    }
    catch(const alglib::ap_error &alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // Suppose that we have a set of 2-dimensional points with associated
        // VECTOR function values, and we want to build an RBF model using
        // our data.
        // 
        // Typical sequence of steps is given below:
        // 1. we create RBF model object
        // 2. we attach our dataset to the RBF model and tune algorithm settings
        // 3. we rebuild RBF model using new data
        // 4. we use RBF model (evaluate, serialize, etc.)
        //
        real_1d_array x;
        real_1d_array y;

        //
        // Step 1: RBF model creation.
        //
        // We have to specify dimensionality of the space (equal to 2) and
        // dimensionality of the function (2-dimensional vector function).
        //
        // A new model is empty - it can be evaluated,
        // but we just get a zero value at any point.
        //
        rbfmodel model;
        rbfcreate(2, 2, model);

        x = "[+1,+1]";
        rbfcalc(model, x, y);
        printf("%s\n", y.tostring(2).c_str()); // EXPECTED: [0.000,0.000]

        //
        // Step 2: we add dataset.
        //
        // The XY array contains four points:
        // * (x0,y0) = (+1,+1), f(x0,y0)=(0,-1)
        // * (x1,y1) = (+1,-1), f(x1,y1)=(-1,0)
        // * (x2,y2) = (-1,-1), f(x2,y2)=(0,+1)
        // * (x3,y3) = (-1,+1), f(x3,y3)=(+1,0)
        //
        real_2d_array xy = "[[+1,+1,0,-1],[+1,-1,-1,0],[-1,-1,0,+1],[-1,+1,+1,0]]";
        rbfsetpoints(model, xy);

        // We added points, but the model was not rebuilt yet.
        // If we call rbfcalc(), we will still get 0.0 as the result.
        rbfcalc(model, x, y);
        printf("%s\n", y.tostring(2).c_str()); // EXPECTED: [0.000,0.000]

        //
        // Step 3: rebuild model
        //
        // We use the hierarchical RBF algorithm with the following parameters:
        // * RBase - set to 1.0
        // * NLayers - three layers are used (although such a simple problem
        //   does not need more than 1 layer)
        // * LambdaReg - set to zero, no smoothing is required
        //
        // After we've configured model, we should rebuild it -
        // it will change coefficients stored internally in the
        // rbfmodel structure.
        //
        rbfreport rep;
        rbfsetalgohierarchical(model, 1.0, 3, 0.0);
        rbfbuildmodel(model, rep);
        printf("%d\n", int(rep.terminationtype)); // EXPECTED: 1

        //
        // Step 4: model was built
        //
        // After the call to rbfbuildmodel(), rbfcalc() will return
        // the value of the new model.
        //
        rbfcalc(model, x, y);
        printf("%s\n", y.tostring(2).c_str()); // EXPECTED: [0.000,-1.000]
    }
    catch(const alglib::ap_error &alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

cmatrixlurcond1
cmatrixlurcondinf
cmatrixrcond1
cmatrixrcondinf
cmatrixtrrcond1
cmatrixtrrcondinf
hpdmatrixcholeskyrcond
hpdmatrixrcond
rmatrixlurcond1
rmatrixlurcondinf
rmatrixrcond1
rmatrixrcond2
rmatrixrcond2rect
rmatrixrcondinf
rmatrixtrrcond1
rmatrixtrrcond2
rmatrixtrrcondinf
spdmatrixcholeskyrcond
spdmatrixrcond
spdmatrixrcond2
/*************************************************************************
Estimate of the condition number of a matrix given by its LU decomposition
(1-norm)

The algorithm calculates a lower bound of the condition number. Note that
the function returns not the bound itself but its reciprocal (to avoid an
overflow in case of a singular matrix).

Input parameters:
    LUA     -   LU decomposition of a matrix in compact form. Output of
                the CMatrixLU subroutine.
    N       -   size of matrix A.

Result: 1/LowerBound(cond(A))

NOTE: if k(A) is very large, the matrix is assumed to be degenerate,
      k(A)=INF, and 0.0 is returned in such cases.
*************************************************************************/
double cmatrixlurcond1(const complex_2d_array &lua, const ae_int_t n, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Estimate of the condition number of a matrix given by its LU decomposition
(infinity norm).

The algorithm calculates a lower bound of the condition number. Note that
the function returns not the bound itself but its reciprocal (to avoid an
overflow in case of a singular matrix).

Input parameters:
    LUA     -   LU decomposition of a matrix in compact form. Output of
                the CMatrixLU subroutine.
    N       -   size of matrix A.

Result: 1/LowerBound(cond(A))

NOTE: if k(A) is very large, the matrix is assumed to be degenerate,
      k(A)=INF, and 0.0 is returned in such cases.
*************************************************************************/
double cmatrixlurcondinf(const complex_2d_array &lua, const ae_int_t n, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Estimate of a matrix condition number (1-norm)

The algorithm calculates a lower bound of the condition number. Note that
the function returns not the bound itself but its reciprocal (to avoid an
overflow in case of a singular matrix).

Input parameters:
    A       -   matrix. Array whose indexes range within [0..N-1, 0..N-1].
    N       -   size of matrix A.

Result: 1/LowerBound(cond(A))

NOTE: if k(A) is very large, the matrix is assumed to be degenerate,
      k(A)=INF, and 0.0 is returned in such cases.
*************************************************************************/
double cmatrixrcond1(const complex_2d_array &a, const ae_int_t n, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Estimate of a matrix condition number (infinity-norm).

The algorithm calculates a lower bound of the condition number. Note that
the function returns not the bound itself but its reciprocal (to avoid an
overflow in case of a singular matrix).

Input parameters:
    A       -   matrix. Array whose indexes range within [0..N-1, 0..N-1].
    N       -   size of matrix A.

Result: 1/LowerBound(cond(A))

NOTE: if k(A) is very large, the matrix is assumed to be degenerate,
      k(A)=INF, and 0.0 is returned in such cases.
*************************************************************************/
double cmatrixrcondinf(const complex_2d_array &a, const ae_int_t n, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Triangular matrix: estimate of a condition number (1-norm)

The algorithm calculates a lower bound of the condition number. Note that
the function returns not the bound itself but its reciprocal (to avoid an
overflow in case of a singular matrix).

Input parameters:
    A       -   matrix. Array[0..N-1, 0..N-1].
    N       -   size of A.
    IsUpper -   True, if the matrix is upper triangular.
    IsUnit  -   True, if the matrix has a unit diagonal.

Result: 1/LowerBound(cond(A))

NOTE: if k(A) is very large, the matrix is assumed to be degenerate,
      k(A)=INF, and 0.0 is returned in such cases.
*************************************************************************/
double cmatrixtrrcond1(const complex_2d_array &a, const ae_int_t n, const bool isupper, const bool isunit, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Triangular matrix: estimate of a matrix condition number (infinity-norm).

The algorithm calculates a lower bound of the condition number. Note that
the function returns not the bound itself but its reciprocal (to avoid an
overflow in case of a singular matrix).

Input parameters:
    A       -   matrix. Array whose indexes range within [0..N-1, 0..N-1].
    N       -   size of matrix A.
    IsUpper -   True, if the matrix is upper triangular.
    IsUnit  -   True, if the matrix has a unit diagonal.

Result: 1/LowerBound(cond(A))

NOTE: if k(A) is very large, the matrix is assumed to be degenerate,
      k(A)=INF, and 0.0 is returned in such cases.
*************************************************************************/
double cmatrixtrrcondinf(const complex_2d_array &a, const ae_int_t n, const bool isupper, const bool isunit, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Condition number estimate of a Hermitian positive definite matrix given by
its Cholesky decomposition.

The algorithm calculates a lower bound of the condition number. Note that
the function returns not the bound itself but its reciprocal (to avoid an
overflow in case of a singular matrix).

It should be noted that the 1-norm and inf-norm condition numbers of
symmetric matrices are equal, so the algorithm does not distinguish
between these types of norms.

Input parameters:
    CD      -   Cholesky decomposition of matrix A, output of the
                SMatrixCholesky subroutine.
    N       -   size of matrix A.
    IsUpper -   storage format.

Result: 1/LowerBound(cond(A))

NOTE: if k(A) is very large, the matrix is assumed to be degenerate,
      k(A)=INF, and 0.0 is returned in such cases.
*************************************************************************/
double hpdmatrixcholeskyrcond(const complex_2d_array &a, const ae_int_t n, const bool isupper, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Condition number estimate of a Hermitian positive definite matrix.

The algorithm calculates a lower bound of the condition number. Note that
the function returns not the bound itself but its reciprocal (to avoid an
overflow in case of a singular matrix).

It should be noted that the 1-norm and inf-norm condition numbers of
symmetric matrices are equal, so the algorithm does not distinguish
between these types of norms.

Input parameters:
    A       -   Hermitian positive definite matrix which is given by its
                upper or lower triangle depending on the value of IsUpper.
                Array with elements [0..N-1, 0..N-1].
    N       -   size of matrix A.
    IsUpper -   storage format.

Result:
    1/LowerBound(cond(A)), if matrix A is positive definite,
    -1, if matrix A is not positive definite, and its condition number
    could not be found by this algorithm.

NOTE: if k(A) is very large, the matrix is assumed to be degenerate,
      k(A)=INF, and 0.0 is returned in such cases.
*************************************************************************/
double hpdmatrixrcond(const complex_2d_array &a, const ae_int_t n, const bool isupper, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Estimate of the condition number of a matrix given by its LU decomposition
(1-norm)

The algorithm calculates a lower bound of the condition number. Note that
the function returns not the bound itself but its reciprocal (to avoid an
overflow in case of a singular matrix).

Input parameters:
    LUA     -   LU decomposition of a matrix in compact form. Output of
                the RMatrixLU subroutine.
    N       -   size of matrix A.

Result: 1/LowerBound(cond(A))

NOTE: if k(A) is very large, the matrix is assumed to be degenerate,
      k(A)=INF, and 0.0 is returned in such cases.
*************************************************************************/
double rmatrixlurcond1(const real_2d_array &lua, const ae_int_t n, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Estimate of the condition number of a matrix given by its LU decomposition
(infinity norm).

The algorithm calculates a lower bound of the condition number. Note that
the function returns not the bound itself but its reciprocal (to avoid an
overflow in case of a singular matrix).

Input parameters:
    LUA     -   LU decomposition of a matrix in compact form. Output of
                the RMatrixLU subroutine.
    N       -   size of matrix A.

Result: 1/LowerBound(cond(A))

NOTE: if k(A) is very large, the matrix is assumed to be degenerate,
      k(A)=INF, and 0.0 is returned in such cases.
*************************************************************************/
double rmatrixlurcondinf(const real_2d_array &lua, const ae_int_t n, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Estimate of a matrix condition number (1-norm)

The algorithm calculates a lower bound of the condition number. Note that
the function returns not the bound itself but its reciprocal (to avoid an
overflow in case of a singular matrix).

Input parameters:
    A       -   matrix. Array whose indexes range within [0..N-1, 0..N-1].
    N       -   size of matrix A.

Result: 1/LowerBound(cond(A))

NOTE: if k(A) is very large, the matrix is assumed to be degenerate,
      k(A)=INF, and 0.0 is returned in such cases.
*************************************************************************/
double rmatrixrcond1(const real_2d_array &a, const ae_int_t n, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Estimate of a matrix condition number (2-norm)

The algorithm calculates the exact 2-norm reciprocal condition number
using SVD.

Input parameters:
    A       -   matrix. Array whose indexes range within [0..N-1, 0..N-1].
    N       -   size of matrix A.

Result: 1/cond2(A)

NOTE: if k(A) is very large, the matrix is assumed to be degenerate,
      k(A)=INF, and 0.0 is returned in such cases.
*************************************************************************/
double rmatrixrcond2(const real_2d_array &a, const ae_int_t n, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Estimate of a matrix condition number (2-norm) for a rectangular matrix.

The algorithm calculates the exact 2-norm reciprocal condition number
using SVD.

Input parameters:
    A       -   matrix, array[M,N]
    M, N    -   number of rows and columns, >=1

Result: 1/cond2(A)

NOTE: if k(A) is very large, the matrix is assumed to be degenerate,
      k(A)=INF, and 0.0 is returned in such cases.
*************************************************************************/
double rmatrixrcond2rect(const real_2d_array &a, const ae_int_t m, const ae_int_t n, const xparams _xparams = alglib::xdefault);
/************************************************************************* Estimate of a matrix condition number (infinity-norm). The algorithm calculates a lower bound of the condition number. However, instead of the lower bound itself, it returns the reciprocal of this bound (to avoid overflow in case of a singular matrix). Input parameters: A - matrix. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, the matrix is assumed to be degenerate, k(A)=INF, and 0.0 is returned in such cases. *************************************************************************/
double rmatrixrcondinf(const real_2d_array &a, const ae_int_t n, const xparams _xparams = alglib::xdefault);
/************************************************************************* Triangular matrix: estimate of a condition number (1-norm). The algorithm calculates a lower bound of the condition number. However, instead of the lower bound itself, it returns the reciprocal of this bound (to avoid overflow in case of a singular matrix). Input parameters: A - matrix. Array[0..N-1, 0..N-1]. N - size of A. IsUpper - True, if the matrix is upper triangular. IsUnit - True, if the matrix has a unit diagonal. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, the matrix is assumed to be degenerate, k(A)=INF, and 0.0 is returned in such cases. *************************************************************************/
double rmatrixtrrcond1(const real_2d_array &a, const ae_int_t n, const bool isupper, const bool isunit, const xparams _xparams = alglib::xdefault);
/************************************************************************* Triangular matrix: reciprocal 2-norm condition number. The algorithm calculates a reciprocal 2-norm condition number using SVD. Input parameters: A - matrix. Array[0..N-1, 0..N-1]. N - size of A. IsUpper - True, if the matrix is upper triangular. IsUnit - True, if the matrix has a unit diagonal. Result: 1/cond(A) NOTE: if k(A) is very large, the matrix is assumed to be degenerate, k(A)=INF, and 0.0 is returned in such cases. *************************************************************************/
double rmatrixtrrcond2(const real_2d_array &a, const ae_int_t n, const bool isupper, const bool isunit, const xparams _xparams = alglib::xdefault);
/************************************************************************* Triangular matrix: estimate of a matrix condition number (infinity-norm). The algorithm calculates a lower bound of the condition number. However, instead of the lower bound itself, it returns the reciprocal of this bound (to avoid overflow in case of a singular matrix). Input parameters: A - matrix. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - True, if the matrix is upper triangular. IsUnit - True, if the matrix has a unit diagonal. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, the matrix is assumed to be degenerate, k(A)=INF, and 0.0 is returned in such cases. *************************************************************************/
double rmatrixtrrcondinf(const real_2d_array &a, const ae_int_t n, const bool isupper, const bool isunit, const xparams _xparams = alglib::xdefault);
/************************************************************************* Condition number estimate of a symmetric positive definite matrix given by its Cholesky decomposition. The algorithm calculates a lower bound of the condition number. However, instead of the lower bound itself, it returns the reciprocal of this bound (to avoid overflow in case of a singular matrix). It should be noted that the 1-norm and inf-norm condition numbers of symmetric matrices are equal, so the algorithm does not take into account the differences between these types of norms. Input parameters: CD - Cholesky decomposition of matrix A, output of the SMatrixCholesky subroutine. N - size of matrix A. IsUpper - storage format. Result: 1/LowerBound(cond(A)) NOTE: if k(A) is very large, the matrix is assumed to be degenerate, k(A)=INF, and 0.0 is returned in such cases. *************************************************************************/
double spdmatrixcholeskyrcond(const real_2d_array &a, const ae_int_t n, const bool isupper, const xparams _xparams = alglib::xdefault);
/************************************************************************* Condition number estimate of a symmetric positive definite matrix. The algorithm calculates a lower bound of the condition number. However, instead of the lower bound itself, it returns the reciprocal of this bound (to avoid overflow in case of a singular matrix). It should be noted that the 1-norm and inf-norm condition numbers of symmetric matrices are equal, so the algorithm does not take into account the differences between these types of norms. Input parameters: A - symmetric positive definite matrix which is given by its upper or lower triangle depending on the value of IsUpper. Array with elements [0..N-1, 0..N-1]. N - size of matrix A. IsUpper - storage format. Result: 1/LowerBound(cond(A)), if matrix A is positive definite, -1, if matrix A is not positive definite, and its condition number could not be found by this algorithm. NOTE: if k(A) is very large, the matrix is assumed to be degenerate, k(A)=INF, and 0.0 is returned in such cases. *************************************************************************/
double spdmatrixrcond(const real_2d_array &a, const ae_int_t n, const bool isupper, const xparams _xparams = alglib::xdefault);
/************************************************************************* 2-norm condition number of a symmetric positive definite matrix using EVD. Input parameters: A - symmetric positive definite matrix which is given by its upper or lower triangle depending on the value of IsUpper. Array[N,N] N - size of matrix A. IsUpper - storage format. Result: 1/cond(A), if matrix A is positive definite, 0, if matrix A is not positive definite NOTE: if k(A) is very large, the matrix is assumed to be degenerate, k(A)=INF, and 0.0 is returned in such cases. *************************************************************************/
double spdmatrixrcond2(const real_2d_array &a, const ae_int_t n, const bool isupper, const xparams _xparams = alglib::xdefault);
rmatrixschur
/************************************************************************* Subroutine performing the Schur decomposition of a general matrix by using the QR algorithm with multiple shifts. COMMERCIAL EDITION OF ALGLIB: ! Commercial version of ALGLIB includes one important improvement of ! this function, which can be used from C++ and C#: ! * Intel MKL support (lightweight Intel MKL is shipped with ALGLIB) ! ! Intel MKL gives approximately constant (with respect to number of ! worker threads) acceleration factor which depends on CPU being used, ! problem size and "baseline" ALGLIB edition which is used for ! comparison. ! ! Multithreaded acceleration is NOT supported for this function. ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. The source matrix A is represented as S'*A*S = T, where S is an orthogonal matrix (Schur vectors), T - upper quasi-triangular matrix (with blocks of sizes 1x1 and 2x2 on the main diagonal). Input parameters: A - matrix to be decomposed. Array whose indexes range within [0..N-1, 0..N-1]. N - size of A, N>=0. Output parameters: A - contains matrix T. Array whose indexes range within [0..N-1, 0..N-1]. S - contains Schur vectors. Array whose indexes range within [0..N-1, 0..N-1]. Note 1: The block structure of matrix T can be easily recognized: since all the elements below the blocks are zeros, the elements a[i+1,i] which are equal to 0 show the block border. Note 2: The algorithm performance depends on the value of the internal parameter NS of the InternalSchurDecomposition subroutine which defines the number of shifts in the QR algorithm (similarly to the block width in block-matrix algorithms in linear algebra). If you require maximum performance on your machine, it is recommended to adjust this parameter manually. 
Result: True, if the algorithm has converged and parameters A and S contain the result. False, if the algorithm has not converged. Algorithm implemented on the basis of the DHSEQR subroutine (LAPACK 3.0 library). *************************************************************************/
bool rmatrixschur(real_2d_array &a, const ae_int_t n, real_2d_array &s, const xparams _xparams = alglib::xdefault);
sparsebuffers
sparsematrix
sparseadd
sparseappendcompressedrow
sparseappendelement
sparseappendemptyrow
sparseappendmatrix
sparseconvertto
sparseconverttocrs
sparseconverttohash
sparseconverttosks
sparsecopy
sparsecopybuf
sparsecopytobuf
sparsecopytocrs
sparsecopytocrsbuf
sparsecopytohash
sparsecopytohashbuf
sparsecopytosks
sparsecopytosksbuf
sparsecopytransposecrs
sparsecopytransposecrsbuf
sparsecreate
sparsecreatebuf
sparsecreatecrs
sparsecreatecrsbuf
sparsecreatecrsempty
sparsecreatecrsemptybuf
sparsecreatecrsfromdense
sparsecreatecrsfromdensebuf
sparsecreatecrsfromdensev
sparsecreatecrsfromdensevbuf
sparsecreatesks
sparsecreatesksband
sparsecreatesksbandbuf
sparsecreatesksbuf
sparseenumerate
sparseexists
sparsefree
sparsegemv
sparseget
sparsegetcompressedrow
sparsegetdiagonal
sparsegetlowercount
sparsegetmatrixtype
sparsegetncols
sparsegetnrows
sparsegetrow
sparsegetuppercount
sparseiscrs
sparseishash
sparseissks
sparsemm
sparsemm2
sparsemtm
sparsemtv
sparsemultiplycolsby
sparsemultiplyrowsby
sparsemv
sparsemv2
sparseresizematrix
sparserewriteexisting
sparsescale
sparseserialize
sparseset
sparsesmm
sparsesmv
sparseswap
sparsesymmpermtbl
sparsesymmpermtblbuf
sparsesymmpermtbltranspose
sparsesymmpermtbltransposebuf
sparsetransposecrs
sparsetransposesks
sparsetrmv
sparsetrsv
sparseunserialize
sparsevsmv
sparse_d_1 Basic operations with sparse matrices
sparse_d_crs Advanced topic: creation in the CRS format.
/************************************************************************* Temporary buffers for sparse matrix operations. You should pass an instance of this structure to factorization functions. It allows memory to be reused during repeated sparse factorizations. You do not have to call any initialization function - simply passing an instance to a factorization function is enough. *************************************************************************/
class sparsebuffers { public: sparsebuffers(); sparsebuffers(const sparsebuffers &rhs); sparsebuffers& operator=(const sparsebuffers &rhs); virtual ~sparsebuffers(); };
/*************************************************************************
Sparse matrix structure.

You should use ALGLIB functions to work with sparse matrices. Never try to
access their fields directly!

NOTES ON THE SPARSE STORAGE FORMATS

Sparse matrices can be stored using several formats:
* Hash-Table representation
* Compressed Row Storage (CRS)
* Skyline matrix storage (SKS)

Each of the formats has benefits and drawbacks:
* Hash-table is good for dynamic operations (insertion of new elements),
  but does not support linear algebra operations
* CRS is good for operations like matrix-vector or matrix-matrix products,
  but its initialization is less convenient - you have to tell row sizes
  at the initialization, and you have to fill the matrix only row by row,
  from left to right.
* SKS is a special format which is used to store triangular factors from
  Cholesky factorization. It does not support dynamic modification, and
  support for linear algebra operations is very limited.

The table below outlines the operations supported by these three formats:

  OPERATIONS WITH MATRIX        HASH   CRS   SKS
  creation                       +      +     +
  SparseGet                      +      +     +
  SparseExists                   +      +     +
  SparseRewriteExisting          +      +     +
  SparseSet                      +      +     +
  SparseAdd                      +
  SparseGetRow                          +     +
  SparseGetCompressedRow                +     +
  SparseAppendCompressedRow             +
  sparse-dense linear algebra           +     +
*************************************************************************/
class sparsematrix { public: sparsematrix(); sparsematrix(const sparsematrix &rhs); sparsematrix& operator=(const sparsematrix &rhs); virtual ~sparsematrix(); };
/************************************************************************* This function adds a value to S[i,j] - an element of the sparse matrix. The matrix must be in Hash-Table mode. If S[i,j] already exists in the table, V is added to its value. If S[i,j] is non-existent, it is inserted into the table. The table automatically grows when necessary. INPUT PARAMETERS S - sparse M*N matrix in Hash-Table representation. Exception will be thrown for CRS matrix. I - row index of the element to modify, 0<=I<M J - column index of the element to modify, 0<=J<N V - value to add, must be finite number OUTPUT PARAMETERS S - modified matrix NOTE 1: when S[i,j] is exactly zero after modification, it is deleted from the table. -- ALGLIB PROJECT -- Copyright 14.10.2011 by Bochkanov Sergey *************************************************************************/
void sparseadd(sparsematrix &s, const ae_int_t i, const ae_int_t j, const double v, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This function appends a compressed sparse row to a CRS matrix, increasing its row count by 1. INPUT PARAMETERS: S - sparse M*N matrix in CRS format, including one created with sparsecreatecrsempty(). ColIdx - array[NZ], column indexes, values in [0,N-1] range. ColIdx[] can store non-distinct values; elements of Vals[] corresponding to duplicate column indexes will be summed up. Vals - array[NZ], element values. NZ - nonzeros count, NZ>=0. Both ColIdx[] and Vals[] can be longer than NZ, in which case only leading NZ elements are used. OUTPUT PARAMETERS: S - (M+1)*N matrix in the CRS format. NOTE: this function has amortized O(NZ*logNZ) cost. -- ALGLIB PROJECT -- Copyright 2024.02.19 by Bochkanov Sergey *************************************************************************/
void sparseappendcompressedrow(sparsematrix &s, const integer_1d_array &colidx, const real_1d_array &vals, const ae_int_t nz, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function appends an element to the last row of a CRS matrix. New elements can be added ONLY from left to right (column indexes are strictly increasing). INPUT PARAMETERS: S - a fully initialized sparse M*N matrix in CRS format, M>0 K - column index, 0<=K<N, must be strictly greater than the last element in the last row. V - element value OUTPUT PARAMETERS: S - M*N matrix in the CRS format. -- ALGLIB PROJECT -- Copyright 2024.02.19 by Bochkanov Sergey *************************************************************************/
void sparseappendelement(sparsematrix &s, const ae_int_t k, const double v, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function appends an empty row to a CRS matrix, increasing its row count by 1. The newly added row can be modified with sparseappendelement(). The matrix is a valid CRS matrix at any moment of the process. INPUT PARAMETERS: S - sparse M*N matrix in CRS format, including one created with sparsecreatecrsempty(). OUTPUT PARAMETERS: S - (M+1)*N matrix in the CRS format. -- ALGLIB PROJECT -- Copyright 2024.02.19 by Bochkanov Sergey *************************************************************************/
void sparseappendemptyrow(sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function appends from below a sparse CRS-based matrix to another sparse CRS-based matrix. The matrix being appended must be a completely initialized CRS matrix. INPUT PARAMETERS: SDst - sparse X*N matrix in CRS format, including one created with sparsecreatecrsempty (in the latter case, X=0). SSrc - sparse M*N matrix in the CRS format OUTPUT PARAMETERS: SDst - (X+M)*N matrix in the CRS format, SSrc appended from below NOTE: this function has amortized O(MSrc+NZCnt) cost, where NZCnt is the total number of nonzero elements in SSrc. -- ALGLIB PROJECT -- Copyright 2024.03.23 by Bochkanov Sergey *************************************************************************/
void sparseappendmatrix(sparsematrix &sdst, const sparsematrix &ssrc, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function performs in-place conversion to the desired sparse storage format. INPUT PARAMETERS S0 - sparse matrix in any format. Fmt - desired storage format of the output, as returned by the SparseGetMatrixType() function: * 0 for hash-based storage * 1 for CRS * 2 for SKS OUTPUT PARAMETERS S0 - sparse matrix in the requested format. NOTE: in-place conversion wastes a lot of memory which is used to store temporaries. If you perform a lot of repeated conversions, we recommend using out-of-place buffered conversion functions, like SparseCopyToBuf(), which can reuse already allocated memory. -- ALGLIB PROJECT -- Copyright 16.01.2014 by Bochkanov Sergey *************************************************************************/
void sparseconvertto(sparsematrix &s0, const ae_int_t fmt, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function converts a matrix to CRS format. Some algorithms (linear algebra ones, for example) require matrices in CRS format. This function allows you to perform the conversion in place. INPUT PARAMETERS S - sparse M*N matrix in any format OUTPUT PARAMETERS S - matrix in CRS format NOTE: this function has no effect when called with a matrix which is already in CRS mode. NOTE: this function allocates temporary memory to store a copy of the matrix. If you perform a lot of repeated conversions, we recommend you to use the SparseCopyToCRSBuf() function, which can reuse previously allocated memory. -- ALGLIB PROJECT -- Copyright 14.10.2011 by Bochkanov Sergey *************************************************************************/
void sparseconverttocrs(sparsematrix &s, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This function performs in-place conversion to Hash table storage. INPUT PARAMETERS S - sparse matrix in CRS format. OUTPUT PARAMETERS S - sparse matrix in Hash table format. NOTE: this function has no effect when called with a matrix which is already in Hash table mode. NOTE: in-place conversion involves allocation of temporary arrays. If you perform a lot of repeated in-place conversions, it may lead to memory fragmentation. Consider using the out-of-place SparseCopyToHashBuf() function in this case. -- ALGLIB PROJECT -- Copyright 20.07.2012 by Bochkanov Sergey *************************************************************************/
void sparseconverttohash(sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function performs in-place conversion to SKS format. INPUT PARAMETERS S - sparse matrix in any format. OUTPUT PARAMETERS S - sparse matrix in SKS format. NOTE: this function has no effect when called with a matrix which is already in SKS mode. NOTE: in-place conversion involves allocation of temporary arrays. If you perform a lot of repeated in-place conversions, it may lead to memory fragmentation. Consider using the out-of-place SparseCopyToSKSBuf() function in this case. -- ALGLIB PROJECT -- Copyright 15.01.2014 by Bochkanov Sergey *************************************************************************/
void sparseconverttosks(sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function copies S0 to S1. This function completely deallocates memory owned by S1 before creating a copy of S0. If you want to reuse memory, use SparseCopyBuf. NOTE: this function does not verify its arguments, it just copies all fields of the structure. -- ALGLIB PROJECT -- Copyright 14.10.2011 by Bochkanov Sergey *************************************************************************/
void sparsecopy(const sparsematrix &s0, sparsematrix &s1, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function copies S0 to S1. Memory already allocated in S1 is reused as much as possible. NOTE: this function does not verify its arguments, it just copies all fields of the structure. -- ALGLIB PROJECT -- Copyright 14.10.2011 by Bochkanov Sergey *************************************************************************/
void sparsecopybuf(const sparsematrix &s0, sparsematrix &s1, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function performs out-of-place conversion to desired sparse storage format. S0 is copied to S1 and converted on-the-fly. Memory allocated in S1 is reused to maximum extent possible. INPUT PARAMETERS S0 - sparse matrix in any format. Fmt - desired storage format of the output, as returned by SparseGetMatrixType() function: * 0 for hash-based storage * 1 for CRS * 2 for SKS OUTPUT PARAMETERS S1 - sparse matrix in requested format. -- ALGLIB PROJECT -- Copyright 16.01.2014 by Bochkanov Sergey *************************************************************************/
void sparsecopytobuf(const sparsematrix &s0, const ae_int_t fmt, sparsematrix &s1, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function performs out-of-place conversion to CRS format. S0 is copied to S1 and converted on-the-fly. INPUT PARAMETERS S0 - sparse matrix in any format. OUTPUT PARAMETERS S1 - sparse matrix in CRS format. NOTE: if S0 is stored as CRS, it is just copied without conversion. NOTE: this function de-allocates memory occupied by S1 before starting CRS conversion. If you perform a lot of repeated CRS conversions, it may lead to memory fragmentation. In this case we recommend you to use SparseCopyToCRSBuf() function which re-uses memory in S1 as much as possible. -- ALGLIB PROJECT -- Copyright 20.07.2012 by Bochkanov Sergey *************************************************************************/
void sparsecopytocrs(const sparsematrix &s0, sparsematrix &s1, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function performs out-of-place conversion to CRS format. S0 is copied to S1 and converted on-the-fly. Memory allocated in S1 is reused to maximum extent possible. INPUT PARAMETERS S0 - sparse matrix in any format. S1 - matrix which may contain some pre-allocated memory, or can be just uninitialized structure. OUTPUT PARAMETERS S1 - sparse matrix in CRS format. NOTE: if S0 is stored as CRS, it is just copied without conversion. -- ALGLIB PROJECT -- Copyright 20.07.2012 by Bochkanov Sergey *************************************************************************/
void sparsecopytocrsbuf(const sparsematrix &s0, sparsematrix &s1, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function performs out-of-place conversion to Hash table storage format. S0 is copied to S1 and converted on-the-fly. INPUT PARAMETERS S0 - sparse matrix in any format. OUTPUT PARAMETERS S1 - sparse matrix in Hash table format. NOTE: if S0 is stored as Hash-table, it is just copied without conversion. NOTE: this function de-allocates memory occupied by S1 before starting conversion. If you perform a lot of repeated conversions, it may lead to memory fragmentation. In this case we recommend you to use SparseCopyToHashBuf() function which re-uses memory in S1 as much as possible. -- ALGLIB PROJECT -- Copyright 20.07.2012 by Bochkanov Sergey *************************************************************************/
void sparsecopytohash(const sparsematrix &s0, sparsematrix &s1, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function performs out-of-place conversion to Hash table storage format. S0 is copied to S1 and converted on-the-fly. Memory allocated in S1 is reused to maximum extent possible. INPUT PARAMETERS S0 - sparse matrix in any format. OUTPUT PARAMETERS S1 - sparse matrix in Hash table format. NOTE: if S0 is stored as Hash-table, it is just copied without conversion. -- ALGLIB PROJECT -- Copyright 20.07.2012 by Bochkanov Sergey *************************************************************************/
void sparsecopytohashbuf(const sparsematrix &s0, sparsematrix &s1, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function performs out-of-place conversion to SKS storage format. S0 is copied to S1 and converted on-the-fly. INPUT PARAMETERS S0 - sparse matrix in any format. OUTPUT PARAMETERS S1 - sparse matrix in SKS format. NOTE: if S0 is stored as SKS, it is just copied without conversion. NOTE: this function de-allocates memory occupied by S1 before starting conversion. If you perform a lot of repeated conversions, it may lead to memory fragmentation. In this case we recommend you to use SparseCopyToSKSBuf() function which re-uses memory in S1 as much as possible. -- ALGLIB PROJECT -- Copyright 20.07.2012 by Bochkanov Sergey *************************************************************************/
void sparsecopytosks(const sparsematrix &s0, sparsematrix &s1, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function performs out-of-place conversion to SKS format. S0 is copied to S1 and converted on-the-fly. Memory allocated in S1 is reused to maximum extent possible. INPUT PARAMETERS S0 - sparse matrix in any format. OUTPUT PARAMETERS S1 - sparse matrix in SKS format. NOTE: if S0 is stored as SKS, it is just copied without conversion. -- ALGLIB PROJECT -- Copyright 20.07.2012 by Bochkanov Sergey *************************************************************************/
void sparsecopytosksbuf(const sparsematrix &s0, sparsematrix &s1, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function performs copying with transposition of CRS matrix. INPUT PARAMETERS S0 - sparse matrix in CRS format. OUTPUT PARAMETERS S1 - sparse matrix, transposed -- ALGLIB PROJECT -- Copyright 23.07.2018 by Bochkanov Sergey *************************************************************************/
void sparsecopytransposecrs(const sparsematrix &s0, sparsematrix &s1, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function performs copying with transposition of CRS matrix (buffered version which reuses memory already allocated by the target as much as possible). INPUT PARAMETERS S0 - sparse matrix in CRS format. OUTPUT PARAMETERS S1 - sparse matrix, transposed; previously allocated memory is reused if possible. -- ALGLIB PROJECT -- Copyright 23.07.2018 by Bochkanov Sergey *************************************************************************/
void sparsecopytransposecrsbuf(const sparsematrix &s0, sparsematrix &s1, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function creates a sparse matrix in a Hash-Table format. This function creates a Hash-Table matrix, which can be converted to the CRS format after its initialization is over. A typical usage scenario for a sparse matrix is: 1. creation in a Hash-Table format 2. insertion of the matrix elements 3. conversion to the CRS representation 4. matrix is passed to some linear algebra algorithm Some information about different matrix formats can be found below, in the "NOTES" section. INPUT PARAMETERS M - number of rows in a matrix, M>=1 N - number of columns in a matrix, N>=1 K - K>=0, expected number of non-zero elements in a matrix. K may be an inexact approximation; it can be less than the actual number of elements (the table will grow when needed) or even zero. It is important to understand that although the hash-table may grow automatically, it is better to provide a good estimate of the data size. OUTPUT PARAMETERS S - sparse M*N matrix in Hash-Table representation. All elements of the matrix are zero. NOTE 1 Hash-tables use memory inefficiently, and they have to keep some amount of "spare memory" in order to have good performance. A hash table for a matrix with K non-zero elements will need C*K*(8+2*sizeof(int)) bytes, where C is a small constant, about 1.5-2 in magnitude. CRS storage, on the other hand, is more memory-efficient, and needs just K*(8+sizeof(int))+M*sizeof(int) bytes, where M is the number of rows in a matrix. When you convert from the Hash-Table to the CRS representation, all unneeded memory will be freed. NOTE 2 Comments of the SparseMatrix structure outline information about different sparse storage formats. We recommend you to read them before starting to use ALGLIB sparse matrices. NOTE 3 This function completely overwrites S with a new sparse matrix. Previously allocated storage is NOT reused. If you want to reuse already allocated memory, call the SparseCreateBuf function.
-- ALGLIB PROJECT -- Copyright 14.10.2011 by Bochkanov Sergey *************************************************************************/
void sparsecreate(const ae_int_t m, const ae_int_t n, const ae_int_t k, sparsematrix &s, const xparams _xparams = alglib::xdefault); void sparsecreate(const ae_int_t m, const ae_int_t n, sparsematrix &s, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This version of the SparseCreate function creates a sparse matrix in Hash-Table format, reusing previously allocated storage as much as possible. Read the comments for SparseCreate() for more information. INPUT PARAMETERS M - number of rows in a matrix, M>=1 N - number of columns in a matrix, N>=1 K - K>=0, expected number of non-zero elements in a matrix. K may be an inexact approximation; it can be less than the actual number of elements (the table will grow when needed) or even zero. It is important to understand that although the hash-table may grow automatically, it is better to provide a good estimate of the data size. S - SparseMatrix structure which MAY contain some already allocated storage. OUTPUT PARAMETERS S - sparse M*N matrix in Hash-Table representation. All elements of the matrix are zero. Previously allocated storage is reused, if its size is compatible with the expected number of non-zeros K. -- ALGLIB PROJECT -- Copyright 14.01.2014 by Bochkanov Sergey *************************************************************************/
void sparsecreatebuf(const ae_int_t m, const ae_int_t n, const ae_int_t k, sparsematrix &s, const xparams _xparams = alglib::xdefault); void sparsecreatebuf(const ae_int_t m, const ae_int_t n, sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function creates sparse matrix in a CRS format - the least flexible but the most efficient format implemented in ALGLIB. This function creates CRS matrix. Typical usage scenario for a CRS matrix is: 1. creation (you have to tell the number of non-zero elements at each row at this moment) 2. initialization of the matrix elements (row by row, from left to right) 3. the matrix is passed to some linear algebra algorithm This function is a memory-efficient alternative to SparseCreate(), but it is more complex because it requires you to know in advance how large your matrix is. Some information about different matrix formats can be found in comments on SparseMatrix structure. We recommend you to read them before starting to use ALGLIB sparse matrices. INPUT PARAMETERS M - number of rows in a matrix, M>=1 N - number of columns in a matrix, N>=1 NER - number of elements at each row, array[M], NER[I]>=0 OUTPUT PARAMETERS S - sparse M*N matrix in CRS representation. You have to fill ALL non-zero elements by calling SparseSet() BEFORE you try to use this matrix. NOTE: this function completely overwrites S with new sparse matrix. Previously allocated storage is NOT reused. If you want to reuse already allocated memory, call SparseCreateCRSBuf function. -- ALGLIB PROJECT -- Copyright 14.10.2011 by Bochkanov Sergey *************************************************************************/
void sparsecreatecrs(const ae_int_t m, const ae_int_t n, const integer_1d_array &ner, sparsematrix &s, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This function creates sparse matrix in a CRS format (expert function for situations when you are running out of memory). This version of CRS matrix creation function may reuse memory already allocated in S. This function creates CRS matrix. Typical usage scenario for a CRS matrix is: 1. creation (you have to tell the number of non-zero elements at each row at this moment) 2. insertion of the matrix elements (row by row, from left to right) 3. the matrix is passed to some linear algebra algorithm This function is a memory-efficient alternative to SparseCreate(), but it is more complex because it requires you to know in advance how large your matrix is. Some information about different matrix formats can be found in comments on SparseMatrix structure. We recommend reading them before starting to use ALGLIB sparse matrices. INPUT PARAMETERS M - number of rows in a matrix, M>=1 N - number of columns in a matrix, N>=1 NER - number of elements at each row, array[M], NER[I]>=0 S - sparse matrix structure with possibly preallocated memory. OUTPUT PARAMETERS S - sparse M*N matrix in CRS representation. You have to fill ALL non-zero elements by calling SparseSet() BEFORE you try to use this matrix. -- ALGLIB PROJECT -- Copyright 14.10.2011 by Bochkanov Sergey *************************************************************************/
void sparsecreatecrsbuf(const ae_int_t m, const ae_int_t n, const integer_1d_array &ner, sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function creates an EMPTY sparse matrix stored in the CRS format. The empty matrix is a degenerate 0*N-dimensional matrix which can be used ONLY for: * appending rows with sparseappendcompressedrow() * appending non-degenerate CRS matrices with sparseappendmatrix() Before the first row is appended, the matrix is in a special intermediate state. After the first append it becomes a standard CRS matrix. The main purpose of this function is to simplify step-by-step initialization of CRS matrices. INPUT PARAMETERS N - number of columns in a matrix, N>=1 OUTPUT PARAMETERS S - sparse 0*N matrix in a partially initialized state NOTE: this function completely overwrites S with new sparse matrix. Previously allocated storage is NOT reused. If you want to reuse already allocated memory, call SparseCreateCRSEmptyBuf function. -- ALGLIB PROJECT -- Copyright 20.02.2024 by Bochkanov Sergey *************************************************************************/
void sparsecreatecrsempty(const ae_int_t n, sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function creates an EMPTY sparse matrix stored in the CRS format. It is a buffered version of the function which reuses previously allocated space as much as possible. INPUT PARAMETERS N - number of columns in a matrix, N>=1 OUTPUT PARAMETERS S - sparse 0*N matrix in a partially initialized state -- ALGLIB PROJECT -- Copyright 20.02.2024 by Bochkanov Sergey *************************************************************************/
void sparsecreatecrsemptybuf(const ae_int_t n, sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function creates a CRS-based sparse matrix from the dense matrix. This function is intended for situations when you already have a dense matrix and need a convenient way of converting it to the CRS format. INPUT PARAMETERS A - array[M,N]. If larger, only leading MxN submatrix will be used. M - number of rows in a matrix, M>=1 N - number of columns in a matrix, N>=1 OUTPUT PARAMETERS S - sparse M*N matrix A in the CRS format NOTE: this function completely overwrites S with new sparse matrix. Previously allocated storage is NOT reused. If you want to reuse already allocated memory, call SparseCreateCRSFromDenseBuf function. -- ALGLIB PROJECT -- Copyright 16.06.2023 by Bochkanov Sergey *************************************************************************/
void sparsecreatecrsfromdense(const real_2d_array &a, const ae_int_t m, const ae_int_t n, sparsematrix &s, const xparams _xparams = alglib::xdefault); void sparsecreatecrsfromdense(const real_2d_array &a, sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function creates a CRS-based sparse matrix from the dense matrix. A buffered version which reuses memory already allocated in S as much as possible. This function is intended for situations when you already have a dense matrix and need a convenient way of converting it to the CRS format. INPUT PARAMETERS A - array[M,N]. If larger, only leading MxN submatrix will be used. M - number of rows in a matrix, M>=1 N - number of columns in a matrix, N>=1 S - an already allocated structure; if it already has enough memory to store the matrix, no new memory will be allocated. OUTPUT PARAMETERS S - sparse M*N matrix A in the CRS format. -- ALGLIB PROJECT -- Copyright 16.06.2023 by Bochkanov Sergey *************************************************************************/
void sparsecreatecrsfromdensebuf(const real_2d_array &a, const ae_int_t m, const ae_int_t n, sparsematrix &s, const xparams _xparams = alglib::xdefault); void sparsecreatecrsfromdensebuf(const real_2d_array &a, sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function creates a CRS-based sparse matrix from a dense vector which stores a dense 1-dimensional representation of a dense M*N matrix. This function is intended for situations when you already have a dense vector and need a convenient way of converting it to the CRS format. INPUT PARAMETERS A - array[M*N]. If larger, only leading M*N elements will be used. M - number of rows in a matrix, M>=1 N - number of columns in a matrix, N>=1 OUTPUT PARAMETERS S - sparse M*N matrix A in the CRS format NOTE: this function completely overwrites S with new sparse matrix. Previously allocated storage is NOT reused. If you want to reuse already allocated memory, call SparseCreateCRSFromDenseVBuf function. -- ALGLIB PROJECT -- Copyright 17.02.2024 by Bochkanov Sergey *************************************************************************/
void sparsecreatecrsfromdensev(const real_1d_array &a, const ae_int_t m, const ae_int_t n, sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function creates a CRS-based sparse matrix from a dense vector which stores a dense 1-dimensional representation of a dense M*N matrix. A buffered version which reuses memory already allocated in S as much as possible. This function is intended for situations when you already have a dense vector and need a convenient way of converting it to the CRS format. INPUT PARAMETERS A - array[M*N]. If larger, only leading M*N elements will be used. M - number of rows in a matrix, M>=1 N - number of columns in a matrix, N>=1 S - an already allocated structure; if it already has enough memory to store the matrix, no new memory will be allocated. OUTPUT PARAMETERS S - sparse M*N matrix A in the CRS format. -- ALGLIB PROJECT -- Copyright 16.06.2023 by Bochkanov Sergey *************************************************************************/
void sparsecreatecrsfromdensevbuf(const real_1d_array &a, const ae_int_t m, const ae_int_t n, sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function creates sparse matrix in a SKS format (skyline storage format). In most cases you do not need this function - CRS format better suits most use cases. INPUT PARAMETERS M, N - number of rows(M) and columns (N) in a matrix: * M=N (as for now, ALGLIB supports only square SKS) * N>=1 * M>=1 D - "bottom" bandwidths, array[M], D[I]>=0. I-th element stores number of non-zeros at I-th row, below the diagonal (diagonal itself is not included) U - "top" bandwidths, array[N], U[I]>=0. I-th element stores number of non-zeros at I-th row, above the diagonal (diagonal itself is not included) OUTPUT PARAMETERS S - sparse M*N matrix in SKS representation. All elements are filled by zeros. You may use sparseset() to change their values. NOTE: this function completely overwrites S with new sparse matrix. Previously allocated storage is NOT reused. If you want to reuse already allocated memory, call SparseCreateSKSBuf function. -- ALGLIB PROJECT -- Copyright 13.01.2014 by Bochkanov Sergey *************************************************************************/
void sparsecreatesks(const ae_int_t m, const ae_int_t n, const integer_1d_array &d, const integer_1d_array &u, sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function creates sparse matrix in a SKS format (skyline storage format). Unlike more general sparsecreatesks(), this function creates sparse matrix with constant bandwidth. You may want to use this function instead of sparsecreatesks() when your matrix has constant or nearly-constant bandwidth, and you want to simplify source code. INPUT PARAMETERS M, N - number of rows(M) and columns (N) in a matrix: * M=N (as for now, ALGLIB supports only square SKS) * N>=1 * M>=1 BW - matrix bandwidth, BW>=0 OUTPUT PARAMETERS S - sparse M*N matrix in SKS representation. All elements are filled by zeros. You may use sparseset() to change their values. NOTE: this function completely overwrites S with new sparse matrix. Previously allocated storage is NOT reused. If you want to reuse already allocated memory, call sparsecreatesksbandbuf function. -- ALGLIB PROJECT -- Copyright 25.12.2017 by Bochkanov Sergey *************************************************************************/
void sparsecreatesksband(const ae_int_t m, const ae_int_t n, const ae_int_t bw, sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This is "buffered" version of sparsecreatesksband() which reuses memory previously allocated in S (of course, memory is reallocated if needed). You may want to use this function instead of sparsecreatesksbuf() when your matrix has constant or nearly-constant bandwidth, and you want to simplify source code. INPUT PARAMETERS M, N - number of rows(M) and columns (N) in a matrix: * M=N (as for now, ALGLIB supports only square SKS) * N>=1 * M>=1 BW - bandwidth, BW>=0 OUTPUT PARAMETERS S - sparse M*N matrix in SKS representation. All elements are filled by zeros. You may use sparseset() to change their values. -- ALGLIB PROJECT -- Copyright 13.01.2014 by Bochkanov Sergey *************************************************************************/
void sparsecreatesksbandbuf(const ae_int_t m, const ae_int_t n, const ae_int_t bw, sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This is "buffered" version of SparseCreateSKS() which reuses memory previously allocated in S (of course, memory is reallocated if needed). This function creates sparse matrix in a SKS format (skyline storage format). In most cases you do not need this function - CRS format better suits most use cases. INPUT PARAMETERS M, N - number of rows(M) and columns (N) in a matrix: * M=N (as for now, ALGLIB supports only square SKS) * N>=1 * M>=1 D - "bottom" bandwidths, array[M], 0<=D[I]<=I. I-th element stores number of non-zeros at I-th row, below the diagonal (diagonal itself is not included) U - "top" bandwidths, array[N], 0<=U[I]<=I. I-th element stores number of non-zeros at I-th row, above the diagonal (diagonal itself is not included) OUTPUT PARAMETERS S - sparse M*N matrix in SKS representation. All elements are filled by zeros. You may use sparseset() to change their values. -- ALGLIB PROJECT -- Copyright 13.01.2014 by Bochkanov Sergey *************************************************************************/
void sparsecreatesksbuf(const ae_int_t m, const ae_int_t n, const integer_1d_array &d, const integer_1d_array &u, sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function is used to enumerate all elements of the sparse matrix. Before the first call, the user initializes the T0 and T1 counters to zero. These counters are used to remember current position in a matrix; after each call they are updated by the function. Subsequent calls to this function return non-zero elements of the sparse matrix, one by one. If you enumerate a CRS matrix, the matrix is traversed from left to right, from top to bottom. If you enumerate a matrix stored as a Hash table, elements are returned in random order. EXAMPLE > T0=0 > T1=0 > while SparseEnumerate(S,T0,T1,I,J,V) do > ....do something with I,J,V INPUT PARAMETERS S - sparse M*N matrix in Hash-Table or CRS representation. T0 - internal counter T1 - internal counter OUTPUT PARAMETERS T0 - new value of the internal counter T1 - new value of the internal counter I - row index of non-zero element, 0<=I<M. J - column index of non-zero element, 0<=J<N V - value of the element RESULT True in case of success (next non-zero element was retrieved) False in case all non-zero elements were enumerated NOTE: you may call SparseRewriteExisting() during enumeration, but it is THE ONLY matrix modification function you can call!!! Other matrix modification functions should not be called during enumeration! -- ALGLIB PROJECT -- Copyright 14.03.2012 by Bochkanov Sergey *************************************************************************/
bool sparseenumerate(const sparsematrix &s, ae_int_t &t0, ae_int_t &t1, ae_int_t &i, ae_int_t &j, double &v, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function checks whether S[i,j] is present in the sparse matrix. It returns True even for elements that are numerically zero (but still have place allocated for them). The matrix can be in any mode (Hash-Table, CRS, SKS), but this function is less efficient for CRS matrices. Hash-Table and SKS matrices can find an element in O(1) time, while CRS matrices need O(log(RS)) time, where RS is the number of non-zero elements in a row. INPUT PARAMETERS S - sparse M*N matrix I - row index of the element, 0<=I<M J - column index of the element, 0<=J<N RESULT whether S[I,J] is present in the data structure or not -- ALGLIB PROJECT -- Copyright 14.10.2020 by Bochkanov Sergey *************************************************************************/
bool sparseexists(const sparsematrix &s, const ae_int_t i, const ae_int_t j, const xparams _xparams = alglib::xdefault);
/************************************************************************* The function frees all memory occupied by sparse matrix. Sparse matrix structure becomes unusable after this call. OUTPUT PARAMETERS S - sparse matrix to delete -- ALGLIB PROJECT -- Copyright 24.07.2012 by Bochkanov Sergey *************************************************************************/
void sparsefree(sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function calculates generalized sparse matrix-vector product y := alpha*op(S)*x + beta*y Matrix S must be stored in CRS or SKS format (exception will be thrown otherwise). op(S) can be either S or S^T. NOTE: this function expects Y to be large enough to store result. No automatic preallocation happens for smaller arrays. INPUT PARAMETERS S - sparse matrix in CRS or SKS format. Alpha - source coefficient OpS - operation type: * OpS=0 => op(S) = S * OpS=1 => op(S) = S^T X - input vector, must have at least Cols(op(S))+IX elements IX - subvector offset Beta - destination coefficient Y - preallocated output array, must have at least Rows(op(S))+IY elements IY - subvector offset OUTPUT PARAMETERS Y - elements [IY...IY+Rows(op(S))-1] are replaced by result, other elements are not modified HANDLING OF SPECIAL CASES: * below M=Rows(op(S)) and N=Cols(op(S)). Although current ALGLIB version does not allow you to create zero-sized sparse matrices, internally ALGLIB can deal with such matrices. So, comments for M or N equal to zero are for internal use only. * if M=0, then subroutine does nothing. It does not even touch arrays. * if N=0 or Alpha=0.0, then: * if Beta=0, then Y is filled by zeros. S and X are not referenced at all. Initial values of Y are ignored (we do not multiply Y by zero, we just rewrite it by zeros) * if Beta<>0, then Y is replaced by Beta*Y * if M>0, N>0, Alpha<>0, but Beta=0, then Y is replaced by alpha*op(S)*x initial state of Y is ignored (rewritten without initial multiplication by zeros). NOTE: this function throws exception when called for non-CRS/SKS matrix. You must convert your matrix with SparseConvertToCRS/SKS() before using this function. -- ALGLIB PROJECT -- Copyright 10.12.2019 by Bochkanov Sergey *************************************************************************/
void sparsegemv(const sparsematrix &s, const double alpha, const ae_int_t ops, const real_1d_array &x, const ae_int_t ix, const double beta, real_1d_array &y, const ae_int_t iy, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function returns S[i,j] - element of the sparse matrix. Matrix can be in any mode (Hash-Table, CRS, SKS), but this function is less efficient for CRS matrices. Hash-Table and SKS matrices can find an element in O(1) time, while CRS matrices need O(log(RS)) time, where RS is the number of non-zero elements in a row. INPUT PARAMETERS S - sparse M*N matrix I - row index of the element, 0<=I<M J - column index of the element, 0<=J<N RESULT value of S[I,J] or zero (in case no element with such index is found) -- ALGLIB PROJECT -- Copyright 14.10.2011 by Bochkanov Sergey *************************************************************************/
double sparseget(const sparsematrix &s, const ae_int_t i, const ae_int_t j, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* This function returns I-th row of the sparse matrix IN COMPRESSED FORMAT - only non-zero elements are returned (with their indexes). Matrix must be stored in CRS or SKS format. INPUT PARAMETERS: S - sparse M*N matrix in CRS format I - row index, 0<=I<M ColIdx - output buffer for column indexes, can be preallocated. In case buffer size is too small to store I-th row, it is automatically reallocated. Vals - output buffer for values, can be preallocated. In case buffer size is too small to store I-th row, it is automatically reallocated. OUTPUT PARAMETERS: ColIdx - column indexes of non-zero elements, sorted in ascending order. Symbolically non-zero elements are counted (i.e. if you allocated place for element, but it has zero numerical value - it is counted). Vals - values. Vals[K] stores value of matrix element with indexes (I,ColIdx[K]). Symbolically non-zero elements are counted (i.e. if you allocated place for element, but it has zero numerical value - it is counted). NZCnt - number of symbolically non-zero elements per row. NOTE: when incorrect I (outside of [0,M-1]) or matrix (non CRS/SKS) is passed, this function throws exception. NOTE: this function may allocate additional, unnecessary space for ColIdx and Vals arrays. It is dictated by performance reasons - on SKS matrices it is faster to allocate space at the beginning with some "extra"-space, than to perform two passes over the matrix - first time to calculate exact space required for data, second time - to store data itself. -- ALGLIB PROJECT -- Copyright 10.12.2014 by Bochkanov Sergey *************************************************************************/
void sparsegetcompressedrow(const sparsematrix &s, const ae_int_t i, integer_1d_array &colidx, real_1d_array &vals, ae_int_t &nzcnt, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function returns I-th diagonal element of the sparse matrix. Matrix can be in any mode (Hash-Table or CRS storage), but this function is most efficient for CRS matrices - it requires less than 50 CPU cycles to extract diagonal element. For Hash-Table matrices we still have O(1) query time, but function is many times slower. INPUT PARAMETERS S - sparse M*N matrix in Hash-Table or CRS representation. I - index of the element, 0<=I<min(M,N) RESULT value of S[I,I] or zero (in case no element with such index is found) -- ALGLIB PROJECT -- Copyright 14.10.2011 by Bochkanov Sergey *************************************************************************/
double sparsegetdiagonal(const sparsematrix &s, const ae_int_t i, const xparams _xparams = alglib::xdefault);
/************************************************************************* The function returns number of strictly lower triangular non-zero elements in the matrix. It counts SYMBOLICALLY non-zero elements, i.e. entries in the sparse matrix data structure. If some element has zero numerical value, it is still counted. This function has different cost for different types of matrices: * for hash-based matrices it involves complete pass over entire hash-table with O(NNZ) cost, where NNZ is number of non-zero elements * for CRS and SKS matrix types cost of counting is O(N) (N - matrix size). RESULT: number of non-zero elements strictly below main diagonal -- ALGLIB PROJECT -- Copyright 12.02.2014 by Bochkanov Sergey *************************************************************************/
ae_int_t sparsegetlowercount(const sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function returns type of the matrix storage format. INPUT PARAMETERS: S - sparse matrix. RESULT: sparse storage format used by matrix: 0 - Hash-table 1 - CRS (compressed row storage) 2 - SKS (skyline) NOTE: future versions of ALGLIB may include additional sparse storage formats. -- ALGLIB PROJECT -- Copyright 20.07.2012 by Bochkanov Sergey *************************************************************************/
ae_int_t sparsegetmatrixtype(const sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* The function returns number of columns of a sparse matrix. RESULT: number of columns of a sparse matrix. -- ALGLIB PROJECT -- Copyright 23.08.2012 by Bochkanov Sergey *************************************************************************/
ae_int_t sparsegetncols(const sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* The function returns number of rows of a sparse matrix. RESULT: number of rows of a sparse matrix. -- ALGLIB PROJECT -- Copyright 23.08.2012 by Bochkanov Sergey *************************************************************************/
ae_int_t sparsegetnrows(const sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function returns I-th row of the sparse matrix. Matrix must be stored in CRS or SKS format. INPUT PARAMETERS: S - sparse M*N matrix in CRS format I - row index, 0<=I<M IRow - output buffer, can be preallocated. In case buffer size is too small to store I-th row, it is automatically reallocated. OUTPUT PARAMETERS: IRow - array[N], I-th row. NOTE: this function has O(N) running time, where N is a column count. It allocates and fills N-element array, even though most of its elements are zero. NOTE: If you want O(non-zeros-per-row) time and memory requirements, use SparseGetCompressedRow() function. It returns data in compressed format. NOTE: when incorrect I (outside of [0,M-1]) or matrix (non CRS/SKS) is passed, this function throws exception. -- ALGLIB PROJECT -- Copyright 10.12.2014 by Bochkanov Sergey *************************************************************************/
void sparsegetrow(const sparsematrix &s, const ae_int_t i, real_1d_array &irow, const xparams _xparams = alglib::xdefault);
/************************************************************************* The function returns number of strictly upper triangular non-zero elements in the matrix. It counts SYMBOLICALLY non-zero elements, i.e. entries in the sparse matrix data structure. If some element has zero numerical value, it is still counted. This function has different cost for different types of matrices: * for hash-based matrices it involves complete pass over entire hash-table with O(NNZ) cost, where NNZ is number of non-zero elements * for CRS and SKS matrix types cost of counting is O(N) (N - matrix size). RESULT: number of non-zero elements strictly above main diagonal -- ALGLIB PROJECT -- Copyright 12.02.2014 by Bochkanov Sergey *************************************************************************/
ae_int_t sparsegetuppercount(const sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function checks matrix storage format and returns True when matrix is stored using CRS representation. INPUT PARAMETERS: S - sparse matrix. RESULT: True if matrix type is CRS False if matrix type is not CRS -- ALGLIB PROJECT -- Copyright 20.07.2012 by Bochkanov Sergey *************************************************************************/
bool sparseiscrs(const sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function checks matrix storage format and returns True when matrix is stored using Hash table representation. INPUT PARAMETERS: S - sparse matrix. RESULT: True if matrix type is Hash table False if matrix type is not Hash table -- ALGLIB PROJECT -- Copyright 20.07.2012 by Bochkanov Sergey *************************************************************************/
bool sparseishash(const sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function checks matrix storage format and returns True when matrix is stored using SKS representation. INPUT PARAMETERS: S - sparse matrix. RESULT: True if matrix type is SKS False if matrix type is not SKS -- ALGLIB PROJECT -- Copyright 20.07.2012 by Bochkanov Sergey *************************************************************************/
bool sparseissks(const sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function calculates matrix-matrix product S*A. Matrix S must be stored in CRS or SKS format (exception will be thrown otherwise). INPUT PARAMETERS S - sparse M*N matrix in CRS or SKS format. A - array[N][K], input dense matrix. For performance reasons we make only quick checks - we check that array size is at least N, but we do not check for NAN's or INF's. K - number of columns of matrix (A). B - output buffer, possibly preallocated. In case buffer size is too small to store result, this buffer is automatically resized. OUTPUT PARAMETERS B - array[M][K], S*A NOTE: this function throws exception when called for non-CRS/SKS matrix. You must convert your matrix with SparseConvertToCRS/SKS() before using this function. -- ALGLIB PROJECT -- Copyright 14.10.2011 by Bochkanov Sergey *************************************************************************/
void sparsemm(const sparsematrix &s, const real_2d_array &a, const ae_int_t k, real_2d_array &b, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function simultaneously calculates two matrix-matrix products: S*A and S^T*A. S must be square (non-rectangular) matrix stored in CRS or SKS format (exception will be thrown otherwise). INPUT PARAMETERS S - sparse N*N matrix in CRS or SKS format. A - array[N][K], input dense matrix. For performance reasons we make only quick checks - we check that array size is at least N, but we do not check for NAN's or INF's. K - number of columns of matrix (A). B0 - output buffer, possibly preallocated. In case buffer size is too small to store result, this buffer is automatically resized. B1 - output buffer, possibly preallocated. In case buffer size is too small to store result, this buffer is automatically resized. OUTPUT PARAMETERS B0 - array[N][K], S*A B1 - array[N][K], S^T*A NOTE: this function throws exception when called for non-CRS/SKS matrix. You must convert your matrix with SparseConvertToCRS/SKS() before using this function. -- ALGLIB PROJECT -- Copyright 14.10.2011 by Bochkanov Sergey *************************************************************************/
void sparsemm2(const sparsematrix &s, const real_2d_array &a, const ae_int_t k, real_2d_array &b0, real_2d_array &b1, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function calculates matrix-matrix product S^T*A. Matrix S must be stored in CRS or SKS format (exception will be thrown otherwise). INPUT PARAMETERS S - sparse M*N matrix in CRS or SKS format. A - array[M][K], input dense matrix. For performance reasons we make only quick checks - we check that array size is at least M, but we do not check for NAN's or INF's. K - number of columns of matrix (A). B - output buffer, possibly preallocated. In case buffer size is too small to store result, this buffer is automatically resized. OUTPUT PARAMETERS B - array[N][K], S^T*A NOTE: this function throws exception when called for non-CRS/SKS matrix. You must convert your matrix with SparseConvertToCRS/SKS() before using this function. -- ALGLIB PROJECT -- Copyright 14.10.2011 by Bochkanov Sergey *************************************************************************/
void sparsemtm(const sparsematrix &s, const real_2d_array &a, const ae_int_t k, real_2d_array &b, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function calculates matrix-vector product S^T*x. Matrix S must be stored in CRS or SKS format (exception will be thrown otherwise). INPUT PARAMETERS S - sparse M*N matrix in CRS or SKS format. X - array[M], input vector. For performance reasons we make only quick checks - we check that array size is at least M, but we do not check for NAN's or INF's. Y - output buffer, possibly preallocated. In case buffer size is too small to store result, this buffer is automatically resized. OUTPUT PARAMETERS Y - array[N], S^T*x NOTE: this function throws exception when called for non-CRS/SKS matrix. You must convert your matrix with SparseConvertToCRS/SKS() before using this function. -- ALGLIB PROJECT -- Copyright 14.10.2011 by Bochkanov Sergey *************************************************************************/
void sparsemtv(const sparsematrix &s, const real_1d_array &x, real_1d_array &y, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function performs in-place multiplication of the matrix columns by a user-supplied vector X. The matrix S must be stored in CRS format. INPUT PARAMETERS S - sparse M*N matrix in CRS format. X - array[N], coefficients vector. OUTPUT PARAMETERS S - in-place multiplied by diag(X) from the right NOTE: this function throws exception when called for a non-CRS matrix. You must convert your matrix with SparseConvertToCRS() before using this function. -- ALGLIB PROJECT -- Copyright 17.02.2024 by Bochkanov Sergey *************************************************************************/
void sparsemultiplycolsby(sparsematrix &s, const real_1d_array &x, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function performs in-place multiplication of the matrix rows by a user-supplied vector X. The matrix S must be stored in CRS format. INPUT PARAMETERS S - sparse M*N matrix in CRS format. X - array[M], coefficients vector. OUTPUT PARAMETERS S - in-place multiplied by diag(X) from the left NOTE: this function throws an exception when called for a non-CRS matrix. You must convert your matrix with SparseConvertToCRS() before using this function. -- ALGLIB PROJECT -- Copyright 17.02.2024 by Bochkanov Sergey *************************************************************************/
void sparsemultiplyrowsby(sparsematrix &s, const real_1d_array &x, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function calculates matrix-vector product S*x. Matrix S must be stored in CRS or SKS format (exception will be thrown otherwise). INPUT PARAMETERS S - sparse M*N matrix in CRS or SKS format. X - array[N], input vector. For performance reasons we make only quick checks - we check that array size is at least N, but we do not check for NAN's or INF's. Y - output buffer, possibly preallocated. In case buffer size is too small to store result, this buffer is automatically resized. OUTPUT PARAMETERS Y - array[M], S*x NOTE: this function throws exception when called for non-CRS/SKS matrix. You must convert your matrix with SparseConvertToCRS/SKS() before using this function. -- ALGLIB PROJECT -- Copyright 14.10.2011 by Bochkanov Sergey *************************************************************************/
void sparsemv(const sparsematrix &s, const real_1d_array &x, real_1d_array &y, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* This function simultaneously calculates two matrix-vector products: S*x and S^T*x. S must be square (non-rectangular) matrix stored in CRS or SKS format (exception will be thrown otherwise). INPUT PARAMETERS S - sparse N*N matrix in CRS or SKS format. X - array[N], input vector. For performance reasons we make only quick checks - we check that array size is at least N, but we do not check for NAN's or INF's. Y0 - output buffer, possibly preallocated. In case buffer size is too small to store result, this buffer is automatically resized. Y1 - output buffer, possibly preallocated. In case buffer size is too small to store result, this buffer is automatically resized. OUTPUT PARAMETERS Y0 - array[N], S*x Y1 - array[N], S^T*x NOTE: this function throws exception when called for non-CRS/SKS matrix. You must convert your matrix with SparseConvertToCRS/SKS() before using this function. -- ALGLIB PROJECT -- Copyright 14.10.2011 by Bochkanov Sergey *************************************************************************/
void sparsemv2(const sparsematrix &s, const real_1d_array &x, real_1d_array &y0, real_1d_array &y1, const xparams _xparams = alglib::xdefault);
/************************************************************************* This procedure resizes a Hash-Table matrix. It can be called when you have deleted too many elements from the matrix and you want to free unneeded memory. -- ALGLIB PROJECT -- Copyright 14.10.2011 by Bochkanov Sergey *************************************************************************/
void sparseresizematrix(sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function rewrites an existing (non-zero) element. It returns True if the element exists, and False when called for a non-existent (zero) element. This function works with any kind of matrix. Its purpose is to provide a convenient thread-safe way to modify a sparse matrix. Such modification (an already existing element is rewritten) is guaranteed to be thread-safe without any synchronization, as long as different threads modify different elements. INPUT PARAMETERS S - sparse M*N matrix in any kind of representation (Hash, SKS, CRS). I - row index of non-zero element to modify, 0<=I<M J - column index of non-zero element to modify, 0<=J<N V - value to rewrite, must be a finite number OUTPUT PARAMETERS S - modified matrix RESULT True when the element exists False when the element doesn't exist or is zero -- ALGLIB PROJECT -- Copyright 14.03.2012 by Bochkanov Sergey *************************************************************************/
bool sparserewriteexisting(sparsematrix &s, const ae_int_t i, const ae_int_t j, const double v, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function performs an in-place matrix conditioning scaling such that A = R*Z*C where A is an original matrix, R and C are diagonal scaling matrices, and Z is a scaled matrix. Z replaces A, R and C are returned as 1D arrays. INPUT PARAMETERS S - sparse M*N matrix in CRS format. SclType - scaling type: * 0 for automatically chosen scaling * 1 for equilibration scaling ScaleRows - if False, rows are not scaled (R=identity) ScaleCols - if False, cols are not scaled (C=identity) ColsFirst - scale columns first. If False, rows are scaled prior to scaling columns. Ignored for ScaleCols=False. OUTPUT PARAMETERS R - array[M], row scales, R[i]>0 C - array[N], col scales, C[i]>0 NOTE: this function throws exception when called for a non-CRS matrix. You must convert your matrix with SparseConvertToCRS() before using this function. NOTE: this function works with general (nonsymmetric) matrices. See sparsesymmscale() for a symmetric version. See sparsescalebuf() for a version which reuses space already present in output arrays R/C. NOTE: if both ScaleRows=False and ScaleCols=False, this function returns an identity scaling. NOTE: R[] and C[] are guaranteed to be strictly positive. When the matrix has zero rows/cols, corresponding elements of R/C are set to 1. -- ALGLIB PROJECT -- Copyright 12.11.2023 by Bochkanov Sergey *************************************************************************/
void sparsescale(sparsematrix &s, const ae_int_t scltype, const bool scalerows, const bool scalecols, const bool colsfirst, real_1d_array &r, real_1d_array &c, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function serializes data structure to string/stream. Important properties of s_out: * it contains alphanumeric characters, dots, underscores, minus signs * these symbols are grouped into words, which are separated by spaces and Windows-style (CR+LF) newlines * although serializer uses spaces and CR+LF as separators, you can replace any separator character by arbitrary combination of spaces, tabs, Windows or Unix newlines. It allows flexible reformatting of the string in case you want to include it into a text or XML file. But you should not insert separators into the middle of the "words" nor should you change the case of letters. * s_out can be freely moved between 32-bit and 64-bit systems, little and big endian machines, and so on. You can serialize structure on 32-bit machine and unserialize it on 64-bit one (or vice versa), or serialize it on SPARC and unserialize on x86. You can also serialize it in C++ version of ALGLIB and unserialize it in C# one, and vice versa. *************************************************************************/
void sparseserialize(const sparsematrix &obj, std::string &s_out);
void sparseserialize(const sparsematrix &obj, std::ostream &s_out);
/************************************************************************* This function modifies S[i,j], an element of the sparse matrix. For Hash-based storage format: * this function can be called at any moment - during matrix initialization or later * the new value can be zero or non-zero. In case the new value of S[i,j] is zero, this element is deleted from the table. * this function has no effect when called with zero V for a non-existent element. For CRS-based storage format: * this function can be called ONLY DURING MATRIX INITIALIZATION * zero values are stored in the matrix similarly to non-zero ones * elements must be initialized in the correct order - from the top row to the bottom, within a row - from left to right. For SKS storage: * this function can be called at any moment - during matrix initialization or later * zero values are stored in the matrix similarly to non-zero ones * this function CAN NOT be called for non-existent (outside of the band specified during SKS matrix creation) elements. Say, if you created an SKS matrix with bandwidth=2 and tried to call sparseset(s,0,10,VAL), an exception will be generated. INPUT PARAMETERS S - sparse M*N matrix in Hash-Table, SKS or CRS format. I - row index of the element to modify, 0<=I<M J - column index of the element to modify, 0<=J<N V - value to set, must be a finite number, can be zero OUTPUT PARAMETERS S - modified matrix -- ALGLIB PROJECT -- Copyright 14.10.2011 by Bochkanov Sergey *************************************************************************/
void sparseset(sparsematrix &s, const ae_int_t i, const ae_int_t j, const double v, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* This function calculates the matrix-matrix product S*A, where S is a symmetric matrix. Matrix S must be stored in CRS or SKS format (exception will be thrown otherwise). INPUT PARAMETERS S - sparse M*M matrix in CRS or SKS format. IsUpper - whether upper or lower triangle of S is given: * if upper triangle is given, only S[i,j] for j>=i are used, and lower triangle is ignored (it can be empty - these elements are not referenced at all). * if lower triangle is given, only S[i,j] for j<=i are used, and upper triangle is ignored. A - array[M][K], input dense matrix. For performance reasons we make only quick checks - we check that array size is at least M, but we do not check for NAN's or INF's. K - number of columns of matrix (A). B - output buffer, possibly preallocated. In case buffer size is too small to store result, this buffer is automatically resized. OUTPUT PARAMETERS B - array[M][K], S*A NOTE: this function throws exception when called for non-CRS/SKS matrix. You must convert your matrix with SparseConvertToCRS/SKS() before using this function. -- ALGLIB PROJECT -- Copyright 14.10.2011 by Bochkanov Sergey *************************************************************************/
void sparsesmm(const sparsematrix &s, const bool isupper, const real_2d_array &a, const ae_int_t k, real_2d_array &b, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function calculates the matrix-vector product S*x, where S is a symmetric matrix. Matrix S must be stored in CRS or SKS format (exception will be thrown otherwise). INPUT PARAMETERS S - sparse M*M matrix in CRS or SKS format. IsUpper - whether upper or lower triangle of S is given: * if upper triangle is given, only S[i,j] for j>=i are used, and lower triangle is ignored (it can be empty - these elements are not referenced at all). * if lower triangle is given, only S[i,j] for j<=i are used, and upper triangle is ignored. X - array[M], input vector. For performance reasons we make only quick checks - we check that array size is at least M, but we do not check for NAN's or INF's. Y - output buffer, possibly preallocated. In case buffer size is too small to store result, this buffer is automatically resized. OUTPUT PARAMETERS Y - array[M], S*x NOTE: this function throws exception when called for non-CRS/SKS matrix. You must convert your matrix with SparseConvertToCRS/SKS() before using this function. -- ALGLIB PROJECT -- Copyright 14.10.2011 by Bochkanov Sergey *************************************************************************/
void sparsesmv(const sparsematrix &s, const bool isupper, const real_1d_array &x, real_1d_array &y, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function efficiently swaps contents of S0 and S1. -- ALGLIB PROJECT -- Copyright 16.01.2014 by Bochkanov Sergey *************************************************************************/
void sparseswap(sparsematrix &s0, sparsematrix &s1, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function applies permutation given by permutation table P (as opposed to product form of permutation) to sparse symmetric matrix A, given by either upper or lower triangle: B := P*A*P'. This function allocates a completely new instance of B. Use the buffered version SparseSymmPermTblBuf() if you want to reuse an already allocated structure. INPUT PARAMETERS A - sparse square matrix in CRS format. IsUpper - whether upper or lower triangle of A is used: * if upper triangle is given, only A[i,j] for j>=i are used, and lower triangle is ignored (it can be empty - these elements are not referenced at all). * if lower triangle is given, only A[i,j] for j<=i are used, and upper triangle is ignored. P - array[N] which stores permutation table; P[I]=J means that I-th row/column of matrix A is moved to J-th position. For performance reasons we do NOT check that P[] is a correct permutation (i.e. that there are no repetitions); we only check that all its elements are in [0,N) range. OUTPUT PARAMETERS B - permuted matrix. Permutation is applied to A from both sides, only upper or lower triangle (depending on IsUpper) is stored. NOTE: this function throws exception when called for non-CRS matrix. You must convert your matrix with SparseConvertToCRS() before using this function. -- ALGLIB PROJECT -- Copyright 05.10.2020 by Bochkanov Sergey. *************************************************************************/
void sparsesymmpermtbl(const sparsematrix &a, const bool isupper, const integer_1d_array &p, sparsematrix &b, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function is a buffered version of SparseSymmPermTbl() that reuses previously allocated storage in B as much as possible. This function applies permutation given by permutation table P (as opposed to product form of permutation) to sparse symmetric matrix A, given by either upper or lower triangle: B := P*A*P'. INPUT PARAMETERS A - sparse square matrix in CRS format. IsUpper - whether upper or lower triangle of A is used: * if upper triangle is given, only A[i,j] for j>=i are used, and lower triangle is ignored (it can be empty - these elements are not referenced at all). * if lower triangle is given, only A[i,j] for j<=i are used, and upper triangle is ignored. P - array[N] which stores permutation table; P[I]=J means that I-th row/column of matrix A is moved to J-th position. For performance reasons we do NOT check that P[] is a correct permutation (i.e. that there are no repetitions); we only check that all its elements are in [0,N) range. B - sparse matrix object that will hold the result. Previously allocated memory will be reused as much as possible. OUTPUT PARAMETERS B - permuted matrix. Permutation is applied to A from both sides, only upper or lower triangle (depending on IsUpper) is stored. NOTE: this function throws exception when called for non-CRS matrix. You must convert your matrix with SparseConvertToCRS() before using this function. -- ALGLIB PROJECT -- Copyright 05.10.2020 by Bochkanov Sergey. *************************************************************************/
void sparsesymmpermtblbuf(const sparsematrix &a, const bool isupper, const integer_1d_array &p, sparsematrix &b, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function applies permutation given by permutation table P (as opposed to product form of permutation) to sparse symmetric matrix A, given by either upper or lower triangle: B := P*A*P'. It outputs the TRANSPOSED matrix, i.e. if A is given by the lower triangle then B is given by the upper one, and vice versa. This function allocates a completely new instance of B. Use the buffered version SparseSymmPermTblTransposeBuf() if you want to reuse an already allocated structure. INPUT PARAMETERS A - sparse square matrix in CRS format. IsUpper - whether upper or lower triangle of A is used: * if upper triangle is given, only A[i,j] for j>=i are used, and lower triangle is ignored (it can be empty - these elements are not referenced at all). * if lower triangle is given, only A[i,j] for j<=i are used, and upper triangle is ignored. P - array[N] which stores permutation table; P[I]=J means that I-th row/column of matrix A is moved to J-th position. For performance reasons we do NOT check that P[] is a correct permutation (i.e. that there are no repetitions); we only check that all its elements are in [0,N) range. OUTPUT PARAMETERS B - permuted matrix. Permutation is applied to A from both sides, only triangle OPPOSITE to that of A is returned: a lower one if IsUpper=True, and an upper one otherwise. NOTE: this function throws exception when called for non-CRS matrix. You must convert your matrix with SparseConvertToCRS() before using this function. -- ALGLIB PROJECT -- Copyright 24.08.2024 by Bochkanov Sergey. *************************************************************************/
void sparsesymmpermtbltranspose(const sparsematrix &a, const bool isupper, const integer_1d_array &p, sparsematrix &b, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function applies permutation given by permutation table P (as opposed to product form of permutation) to sparse symmetric matrix A, given by either upper or lower triangle: B := P*A*P'. It outputs the TRANSPOSED matrix, i.e. if A is given by the lower triangle then B is given by the upper one, and vice versa. This function reuses memory already allocated in B as much as possible. INPUT PARAMETERS A - sparse square matrix in CRS format. IsUpper - whether upper or lower triangle of A is used: * if upper triangle is given, only A[i,j] for j>=i are used, and lower triangle is ignored (it can be empty - these elements are not referenced at all). * if lower triangle is given, only A[i,j] for j<=i are used, and upper triangle is ignored. P - array[N] which stores permutation table; P[I]=J means that I-th row/column of matrix A is moved to J-th position. For performance reasons we do NOT check that P[] is a correct permutation (i.e. that there are no repetitions); we only check that all its elements are in [0,N) range. B - sparse matrix object that will hold the result. Previously allocated memory will be reused as much as possible. OUTPUT PARAMETERS B - permuted matrix. Permutation is applied to A from both sides, only triangle OPPOSITE to that of A is returned: a lower one if IsUpper=True, and an upper one otherwise. NOTE: this function throws exception when called for non-CRS matrix. You must convert your matrix with SparseConvertToCRS() before using this function. -- ALGLIB PROJECT -- Copyright 24.08.2024 by Bochkanov Sergey. *************************************************************************/
void sparsesymmpermtbltransposebuf(const sparsematrix &a, const bool isupper, const integer_1d_array &p, sparsematrix &b, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function performs transpose of CRS matrix. INPUT PARAMETERS S - sparse matrix in CRS format. OUTPUT PARAMETERS S - sparse matrix, transposed. NOTE: internal temporary copy is allocated for the purposes of transposition. It is deallocated after transposition. -- ALGLIB PROJECT -- Copyright 30.01.2018 by Bochkanov Sergey *************************************************************************/
void sparsetransposecrs(sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function performs efficient in-place transpose of SKS matrix. No additional memory is allocated during transposition. This function supports only skyline storage format (SKS). INPUT PARAMETERS S - sparse matrix in SKS format. OUTPUT PARAMETERS S - sparse matrix, transposed. -- ALGLIB PROJECT -- Copyright 16.01.2014 by Bochkanov Sergey *************************************************************************/
void sparsetransposesks(sparsematrix &s, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function calculates the matrix-vector product op(S)*x, where x is a vector, S is a triangular matrix, and op(S) is either transposition or no operation. Matrix S must be stored in CRS or SKS format (exception will be thrown otherwise). INPUT PARAMETERS S - sparse square matrix in CRS or SKS format. IsUpper - whether upper or lower triangle of S is used: * if upper triangle is given, only S[i,j] for j>=i are used, and lower triangle is ignored (it can be empty - these elements are not referenced at all). * if lower triangle is given, only S[i,j] for j<=i are used, and upper triangle is ignored. IsUnit - unit or non-unit diagonal: * if True, diagonal elements of triangular matrix are considered equal to 1.0. Actual elements stored in S are not referenced at all. * if False, diagonal stored in S is used OpType - operation type: * if 0, S*x is calculated * if 1, (S^T)*x is calculated (transposition) X - array[N] which stores input vector. For performance reasons we make only quick checks - we check that array size is at least N, but we do not check for NAN's or INF's. Y - possibly preallocated output buffer. Automatically resized if its size is too small. OUTPUT PARAMETERS Y - array[N], op(S)*x NOTE: this function throws exception when called for non-CRS/SKS matrix. You must convert your matrix with SparseConvertToCRS/SKS() before using this function. -- ALGLIB PROJECT -- Copyright 20.01.2014 by Bochkanov Sergey *************************************************************************/
void sparsetrmv(const sparsematrix &s, const bool isupper, const bool isunit, const ae_int_t optype, real_1d_array &x, real_1d_array &y, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function solves the linear system op(S)*y=x, where x is a vector, S is a triangular matrix, and op(S) is either transposition or no operation. Matrix S must be stored in CRS or SKS format (exception will be thrown otherwise). INPUT PARAMETERS S - sparse square matrix in CRS or SKS format. IsUpper - whether upper or lower triangle of S is used: * if upper triangle is given, only S[i,j] for j>=i are used, and lower triangle is ignored (it can be empty - these elements are not referenced at all). * if lower triangle is given, only S[i,j] for j<=i are used, and upper triangle is ignored. IsUnit - unit or non-unit diagonal: * if True, diagonal elements of triangular matrix are considered equal to 1.0. Actual elements stored in S are not referenced at all. * if False, diagonal stored in S is used. It is your responsibility to make sure that diagonal is non-zero. OpType - operation type: * if 0, S*y=x is solved * if 1, (S^T)*y=x is solved (transposition) X - array[N] which stores input vector. For performance reasons we make only quick checks - we check that array size is at least N, but we do not check for NAN's or INF's. OUTPUT PARAMETERS X - array[N], inv(op(S))*x NOTE: this function throws exception when called for non-CRS/SKS matrix. You must convert your matrix with SparseConvertToCRS/SKS() before using this function. NOTE: no assertions or tests are performed during algorithm operation. It is your responsibility to provide an invertible matrix to the algorithm. -- ALGLIB PROJECT -- Copyright 20.01.2014 by Bochkanov Sergey *************************************************************************/
void sparsetrsv(const sparsematrix &s, const bool isupper, const bool isunit, const ae_int_t optype, real_1d_array &x, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function unserializes data structure from string/stream. *************************************************************************/
void sparseunserialize(const std::string &s_in, sparsematrix &obj);
void sparseunserialize(const std::istream &s_in, sparsematrix &obj);
/************************************************************************* This function calculates the vector-matrix-vector product x'*S*x, where S is a symmetric matrix. Matrix S must be stored in CRS or SKS format (exception will be thrown otherwise). INPUT PARAMETERS S - sparse M*M matrix in CRS or SKS format. IsUpper - whether upper or lower triangle of S is given: * if upper triangle is given, only S[i,j] for j>=i are used, and lower triangle is ignored (it can be empty - these elements are not referenced at all). * if lower triangle is given, only S[i,j] for j<=i are used, and upper triangle is ignored. X - array[M], input vector. For performance reasons we make only quick checks - we check that array size is at least M, but we do not check for NAN's or INF's. RESULT x'*S*x NOTE: this function throws exception when called for non-CRS/SKS matrix. You must convert your matrix with SparseConvertToCRS/SKS() before using this function. -- ALGLIB PROJECT -- Copyright 27.01.2014 by Bochkanov Sergey *************************************************************************/
double sparsevsmv(const sparsematrix &s, const bool isupper, const real_1d_array &x, const xparams _xparams = alglib::xdefault);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "linalg.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates creation/initialization of the sparse matrix
        // and matrix-vector multiplication.
        //
        // First, we have to create the matrix and initialize it. The matrix is
        // initially created in the Hash-Table format, which allows convenient
        // initialization. We can modify a Hash-Table matrix with the sparseset()
        // and sparseadd() functions.
        //
        // NOTE: Unlike the CRS format, the Hash-Table representation allows you to
        // initialize elements in arbitrary order. You may see that we initialize
        // a[0][0] first, then move to the second row, and then move back to the
        // first row.
        //
        sparsematrix s;
        sparsecreate(2, 2, s);
        sparseset(s, 0, 0, 2.0);
        sparseset(s, 1, 1, 1.0);
        sparseset(s, 0, 1, 1.0);

        sparseadd(s, 1, 1, 4.0);

        //
        // Now S is equal to
        //   [ 2 1 ]
        //   [   5 ]
        // Let's check it by reading the matrix contents with sparseget().
        // Note that sparseget() can read both non-zero
        // and zero elements.
        //
        double v;
        v = sparseget(s, 0, 0);
        printf("%.2f\n", double(v)); // EXPECTED: 2.0000
        v = sparseget(s, 0, 1);
        printf("%.2f\n", double(v)); // EXPECTED: 1.0000
        v = sparseget(s, 1, 0);
        printf("%.2f\n", double(v)); // EXPECTED: 0.0000
        v = sparseget(s, 1, 1);
        printf("%.2f\n", double(v)); // EXPECTED: 5.0000

        //
        // After successful creation we can use our matrix for linear operations.
        //
        // However, there is one more thing we MUST do before using S in linear
        // operations: we have to convert it from the Hash-Table representation (used
        // for initialization and dynamic operations) to the CRS format with a
        // sparseconverttocrs() call. If you omit this call, ALGLIB will generate an
        // exception on the first attempt to use S in linear operations.
        //
        sparseconverttocrs(s);

        //
        // Now S is in the CRS format and we are ready to do linear operations.
        // Let's calculate A*x for some x.
        //
        real_1d_array x = "[1,-1]";
        real_1d_array y = "[]";
        sparsemv(s, x, y);
        printf("%s\n", y.tostring(2).c_str()); // EXPECTED: [1.000,-5.000]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "linalg.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // This example demonstrates creation/initialization of the sparse matrix in the
        // CRS format.
        //
        // The Hash-Table format used by default is very convenient (it allows easy
        // insertion of elements, automatic memory reallocation), but has
        // significant memory and performance overhead. Insertion of one element 
        // costs hundreds of CPU cycles, and memory consumption is several times
        // higher than that of CRS.
        //
        // When you work with really large matrices and when you can tell in 
        // advance how many elements EXACTLY you need, it can be beneficial to 
        // create the matrix in the CRS format from the very beginning.
        //
        // If you want to create a matrix in the CRS format, you should:
        // * use the sparsecreatecrs() function
        // * know row sizes in advance (the number of non-zero entries in each row)
        // * initialize the matrix with sparseset() - the other function, sparseadd(), is not allowed
        // * initialize elements from left to right, from top to bottom; each
        //   element is initialized only once.
        //
        sparsematrix s;
        integer_1d_array row_sizes = "[2,2,2,1]";
        sparsecreatecrs(4, 4, row_sizes, s);
        sparseset(s, 0, 0, 2.0);
        sparseset(s, 0, 1, 1.0);
        sparseset(s, 1, 1, 4.0);
        sparseset(s, 1, 2, 2.0);
        sparseset(s, 2, 2, 3.0);
        sparseset(s, 2, 3, 1.0);
        sparseset(s, 3, 3, 9.0);

        //
        // Now S is equal to
        //   [ 2 1     ]
        //   [   4 2   ]
        //   [     3 1 ]
        //   [       9 ]
        //
        // We should point out that we have initialized S elements from left to right,
        // from top to bottom. The CRS representation does NOT allow you to do so in
        // a different order. Try changing the order of the sparseset() calls above,
        // and you will see that the program generates an exception.
        //
        // We can check it by reading the matrix contents with sparseget().
        // However, you should remember that sparseget() is inefficient on
        // CRS matrices (it may have to pass through all elements of the row
        // until it finds the element you need).
        //
        double v;
        v = sparseget(s, 0, 0);
        printf("%.2f\n", double(v)); // EXPECTED: 2.0000
        v = sparseget(s, 2, 3);
        printf("%.2f\n", double(v)); // EXPECTED: 1.0000

        // Note that you can read zero elements (which are not stored) with sparseget()
        v = sparseget(s, 3, 2);
        printf("%.2f\n", double(v)); // EXPECTED: 0.0000

        //
        // After successful creation we can use our matrix for linear operations.
        // Let's calculate A*x for some x.
        //
        real_1d_array x = "[1,-1,1,-1]";
        real_1d_array y = "[]";
        sparsemv(s, x, y);
        printf("%s\n", y.tostring(2).c_str()); // EXPECTED: [1.000,-2.000,2.000,-9.000]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

smatrixgevd
smatrixgevdreduce
/************************************************************************* Algorithm for solving the following generalized symmetric positive-definite eigenproblem: A*x = lambda*B*x (1) or A*B*x = lambda*x (2) or B*A*x = lambda*x (3). where A is a symmetric matrix, B - symmetric positive-definite matrix. The problem is solved by reducing it to an ordinary symmetric eigenvalue problem. Input parameters: A - symmetric matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrices A and B. IsUpperA - storage format of matrix A. B - symmetric positive-definite matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. IsUpperB - storage format of matrix B. ZNeeded - if ZNeeded is equal to: * 0, the eigenvectors are not returned; * 1, the eigenvectors are returned. ProblemType - if ProblemType is equal to: * 1, the following problem is solved: A*x = lambda*B*x; * 2, the following problem is solved: A*B*x = lambda*x; * 3, the following problem is solved: B*A*x = lambda*x. Output parameters: D - eigenvalues in ascending order. Array whose index ranges within [0..N-1]. Z - if ZNeeded is equal to: * 0, Z hasn't changed; * 1, Z contains eigenvectors. Array whose indexes range within [0..N-1, 0..N-1]. The eigenvectors are stored in matrix columns. It should be noted that the eigenvectors in such problems do not form an orthogonal system. Result: True, if the problem was solved successfully. False, if the error occurred during the Cholesky decomposition of matrix B (the matrix isn't positive-definite) or during the work of the iterative algorithm for solving the symmetric eigenproblem. See also the GeneralizedSymmetricDefiniteEVDReduce subroutine. -- ALGLIB -- Copyright 1.28.2006 by Bochkanov Sergey *************************************************************************/
bool smatrixgevd(const real_2d_array &a, const ae_int_t n, const bool isuppera, const real_2d_array &b, const bool isupperb, const ae_int_t zneeded, const ae_int_t problemtype, real_1d_array &d, real_2d_array &z, const xparams _xparams = alglib::xdefault);
/************************************************************************* Algorithm for reduction of the following generalized symmetric positive- definite eigenvalue problem: A*x = lambda*B*x (1) or A*B*x = lambda*x (2) or B*A*x = lambda*x (3) to the symmetric eigenvalue problem C*y = lambda*y (eigenvalues of this and the given problems are the same, and the eigenvectors of the given problem could be obtained by multiplying the obtained eigenvectors by the transformation matrix x = R*y). Here A is a symmetric matrix, B - symmetric positive-definite matrix. Input parameters: A - symmetric matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. N - size of matrices A and B. IsUpperA - storage format of matrix A. B - symmetric positive-definite matrix which is given by its upper or lower triangular part. Array whose indexes range within [0..N-1, 0..N-1]. IsUpperB - storage format of matrix B. ProblemType - if ProblemType is equal to: * 1, the following problem is solved: A*x = lambda*B*x; * 2, the following problem is solved: A*B*x = lambda*x; * 3, the following problem is solved: B*A*x = lambda*x. Output parameters: A - symmetric matrix which is given by its upper or lower triangle depending on IsUpperA. Contains matrix C. Array whose indexes range within [0..N-1, 0..N-1]. R - upper triangular or lower triangular transformation matrix which is used to obtain the eigenvectors of a given problem as the product of eigenvectors of C (from the right) and matrix R (from the left). If the matrix is upper triangular, the elements below the main diagonal are equal to 0 (and vice versa). Thus, we can perform the multiplication without taking into account the internal structure (which is an easier though less effective way). Array whose indexes range within [0..N-1, 0..N-1]. IsUpperR - type of matrix R (upper or lower triangular). Result: True, if the problem was reduced successfully. 
False, if the error occurred during the Cholesky decomposition of matrix B (the matrix is not positive-definite). -- ALGLIB -- Copyright 1.28.2006 by Bochkanov Sergey *************************************************************************/
bool smatrixgevdreduce(real_2d_array &a, const ae_int_t n, const bool isuppera, const real_2d_array &b, const bool isupperb, const ae_int_t problemtype, real_2d_array &r, bool &isupperr, const xparams _xparams = alglib::xdefault);
spline1dfitreport
spline1dinterpolant
spline1dbuildakima
spline1dbuildakimamod
spline1dbuildcatmullrom
spline1dbuildcubic
spline1dbuildhermite
spline1dbuildhermitebuf
spline1dbuildlinear
spline1dbuildlinearbuf
spline1dbuildmonotone
spline1dcalc
spline1dconvcubic
spline1dconvdiff2cubic
spline1dconvdiffcubic
spline1ddiff
spline1dfit
spline1dgriddiff2cubic
spline1dgriddiffcubic
spline1dintegrate
spline1dlintransx
spline1dlintransy
spline1dserialize
spline1dunpack
spline1dunserialize
spline1d_d_convdiff Resampling using cubic splines
spline1d_d_cubic Cubic spline interpolation
spline1d_d_griddiff Differentiation on the grid using cubic splines
spline1d_d_linear Piecewise linear spline interpolation
spline1d_d_monotone Monotone interpolation
/************************************************************************* Spline fitting report: TerminationType completion code: * >0 for success * <0 for failure RMSError RMS error AvgError average error AvgRelError average relative error (for non-zero Y[I]) MaxError maximum error Fields below are filled by obsolete functions (Spline1DFitCubic, Spline1DFitHermite). Modern fitting functions do NOT fill these fields: TaskRCond reciprocal of task's condition number *************************************************************************/
class spline1dfitreport { public: spline1dfitreport(); spline1dfitreport(const spline1dfitreport &rhs); spline1dfitreport& operator=(const spline1dfitreport &rhs); virtual ~spline1dfitreport(); ae_int_t terminationtype; double taskrcond; double rmserror; double avgerror; double avgrelerror; double maxerror; };
/************************************************************************* 1-dimensional spline interpolant *************************************************************************/
class spline1dinterpolant { public: spline1dinterpolant(); spline1dinterpolant(const spline1dinterpolant &rhs); spline1dinterpolant& operator=(const spline1dinterpolant &rhs); virtual ~spline1dinterpolant(); };
/************************************************************************* This subroutine builds Akima spline interpolant INPUT PARAMETERS: X - spline nodes, array[0..N-1] Y - function values, array[0..N-1] N - points count (optional): * N>=2 * if given, only first N points are used to build spline * if not given, automatically detected from X/Y sizes (len(X) must be equal to len(Y)) OUTPUT PARAMETERS: C - spline interpolant ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. -- ALGLIB PROJECT -- Copyright 24.06.2007 by Bochkanov Sergey *************************************************************************/
void spline1dbuildakima(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, spline1dinterpolant &c, const xparams _xparams = alglib::xdefault); void spline1dbuildakima(const real_1d_array &x, const real_1d_array &y, spline1dinterpolant &c, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine builds modified Akima spline interpolant, with weights W[i]=|Delta[I]-Delta[I-1]| replaced by W[i]=|Delta[I]-Delta[I-1]|+0.5*|Delta[I]+Delta[I-1]| INPUT PARAMETERS: X - spline nodes, array[0..N-1] Y - function values, array[0..N-1] N - points count (optional): * N>=2 * if given, only first N points are used to build spline * if not given, automatically detected from X/Y sizes (len(X) must be equal to len(Y)) OUTPUT PARAMETERS: C - spline interpolant ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. -- ALGLIB PROJECT -- Copyright 24.06.2007 by Bochkanov Sergey *************************************************************************/
void spline1dbuildakimamod(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, spline1dinterpolant &c, const xparams _xparams = alglib::xdefault); void spline1dbuildakimamod(const real_1d_array &x, const real_1d_array &y, spline1dinterpolant &c, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine builds Catmull-Rom spline interpolant. INPUT PARAMETERS: X - spline nodes, array[0..N-1]. Y - function values, array[0..N-1]. OPTIONAL PARAMETERS: N - points count: * N>=2 * if given, only first N points are used to build spline * if not given, automatically detected from X/Y sizes (len(X) must be equal to len(Y)) BoundType - boundary condition type: * -1 for periodic boundary condition * 0 for parabolically terminated spline (default) Tension - tension parameter: * tension=0 corresponds to classic Catmull-Rom spline (default) * 0<tension<1 corresponds to more general form - cardinal spline OUTPUT PARAMETERS: C - spline interpolant ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. PROBLEMS WITH PERIODIC BOUNDARY CONDITIONS: Problems with periodic boundary conditions have Y[first_point]=Y[last_point]. However, this subroutine doesn't require you to specify equal values for the first and last points - it automatically forces them to be equal by copying Y[first_point] (corresponds to the leftmost, minimal X[]) to Y[last_point]. However it is recommended to pass consistent values of Y[], i.e. to make Y[first_point]=Y[last_point]. -- ALGLIB PROJECT -- Copyright 23.06.2007 by Bochkanov Sergey *************************************************************************/
void spline1dbuildcatmullrom(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, const ae_int_t boundtype, const double tension, spline1dinterpolant &c, const xparams _xparams = alglib::xdefault); void spline1dbuildcatmullrom(const real_1d_array &x, const real_1d_array &y, spline1dinterpolant &c, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine builds cubic spline interpolant. INPUT PARAMETERS: X - spline nodes, array[0..N-1]. Y - function values, array[0..N-1]. OPTIONAL PARAMETERS: N - points count: * N>=2 * if given, only first N points are used to build spline * if not given, automatically detected from X/Y sizes (len(X) must be equal to len(Y)) BoundLType - boundary condition type for the left boundary BoundL - left boundary condition (first or second derivative, depending on the BoundLType) BoundRType - boundary condition type for the right boundary BoundR - right boundary condition (first or second derivative, depending on the BoundRType) OUTPUT PARAMETERS: C - spline interpolant ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. SETTING BOUNDARY VALUES: The BoundLType/BoundRType parameters can have the following values: * -1, which corresponds to the periodic (cyclic) boundary conditions. In this case: * both BoundLType and BoundRType must be equal to -1. * BoundL/BoundR are ignored * Y[last] is ignored (it is assumed to be equal to Y[first]). * 0, which corresponds to the parabolically terminated spline (BoundL and/or BoundR are ignored). * 1, which corresponds to the first derivative boundary condition * 2, which corresponds to the second derivative boundary condition * by default, BoundType=0 is used PROBLEMS WITH PERIODIC BOUNDARY CONDITIONS: Problems with periodic boundary conditions have Y[first_point]=Y[last_point]. However, this subroutine doesn't require you to specify equal values for the first and last points - it automatically forces them to be equal by copying Y[first_point] (corresponds to the leftmost, minimal X[]) to Y[last_point]. However it is recommended to pass consistent values of Y[], i.e. to make Y[first_point]=Y[last_point]. 
-- ALGLIB PROJECT -- Copyright 23.06.2007 by Bochkanov Sergey *************************************************************************/
void spline1dbuildcubic(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, const ae_int_t boundltype, const double boundl, const ae_int_t boundrtype, const double boundr, spline1dinterpolant &c, const xparams _xparams = alglib::xdefault); void spline1dbuildcubic(const real_1d_array &x, const real_1d_array &y, spline1dinterpolant &c, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine builds Hermite spline interpolant. INPUT PARAMETERS: X - spline nodes, array[0..N-1] Y - function values, array[0..N-1] D - derivatives, array[0..N-1] N - points count (optional): * N>=2 * if given, only first N points are used to build spline * if not given, automatically detected from X/Y sizes (len(X) must be equal to len(Y)) OUTPUT PARAMETERS: C - spline interpolant. ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. -- ALGLIB PROJECT -- Copyright 23.06.2007 by Bochkanov Sergey *************************************************************************/
void spline1dbuildhermite(const real_1d_array &x, const real_1d_array &y, const real_1d_array &d, const ae_int_t n, spline1dinterpolant &c, const xparams _xparams = alglib::xdefault); void spline1dbuildhermite(const real_1d_array &x, const real_1d_array &y, const real_1d_array &d, spline1dinterpolant &c, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine builds Hermite spline interpolant. Buffered version which reuses memory previously allocated in C as much as possible. -- ALGLIB PROJECT -- Copyright 23.06.2007 by Bochkanov Sergey *************************************************************************/
void spline1dbuildhermitebuf(const real_1d_array &x, const real_1d_array &y, const real_1d_array &d, const ae_int_t n, spline1dinterpolant &c, const xparams _xparams = alglib::xdefault); void spline1dbuildhermitebuf(const real_1d_array &x, const real_1d_array &y, const real_1d_array &d, spline1dinterpolant &c, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine builds linear spline interpolant INPUT PARAMETERS: X - spline nodes, array[0..N-1] Y - function values, array[0..N-1] N - points count (optional): * N>=2 * if given, only first N points are used to build spline * if not given, automatically detected from X/Y sizes (len(X) must be equal to len(Y)) OUTPUT PARAMETERS: C - spline interpolant ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. -- ALGLIB PROJECT -- Copyright 24.06.2007 by Bochkanov Sergey *************************************************************************/
void spline1dbuildlinear(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, spline1dinterpolant &c, const xparams _xparams = alglib::xdefault); void spline1dbuildlinear(const real_1d_array &x, const real_1d_array &y, spline1dinterpolant &c, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* This subroutine builds linear spline interpolant. Buffered version of Spline1DBuildLinear() which reuses memory previously allocated in C as much as possible. -- ALGLIB PROJECT -- Copyright 24.06.2007 by Bochkanov Sergey *************************************************************************/
void spline1dbuildlinearbuf(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, spline1dinterpolant &c, const xparams _xparams = alglib::xdefault); void spline1dbuildlinearbuf(const real_1d_array &x, const real_1d_array &y, spline1dinterpolant &c, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function builds monotone cubic Hermite interpolant. This interpolant is monotonic in [x(0),x(n-1)] and is constant outside of this interval. If y[] forms a non-monotonic sequence, the interpolant is piecewise monotonic. For example, for x=(0,1,2,3,4) and y=(0,1,2,1,0) the interpolant grows monotonically on [0..2] and decreases monotonically on [2..4]. INPUT PARAMETERS: X - spline nodes, array[0..N-1]. Subroutine automatically sorts points, so caller may pass unsorted array. Y - function values, array[0..N-1] N - the number of points (N>=2). OUTPUT PARAMETERS: C - spline interpolant. -- ALGLIB PROJECT -- Copyright 21.06.2012 by Bochkanov Sergey *************************************************************************/
void spline1dbuildmonotone(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, spline1dinterpolant &c, const xparams _xparams = alglib::xdefault); void spline1dbuildmonotone(const real_1d_array &x, const real_1d_array &y, spline1dinterpolant &c, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This subroutine calculates the value of the spline at the given point X. INPUT PARAMETERS: C - spline interpolant X - point Result: S(x) -- ALGLIB PROJECT -- Copyright 23.06.2007 by Bochkanov Sergey *************************************************************************/
double spline1dcalc(const spline1dinterpolant &c, const double x, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  

/************************************************************************* This function solves the following problem: given table y[] of function values at old nodes x[] and new nodes x2[], it calculates and returns table of function values y2[] (calculated at x2[]). This function yields same result as Spline1DBuildCubic() call followed by sequence of Spline1DCalc() calls, but it can be several times faster, whilst still having the same O(N*logN) running time. When called for ordered X[], this function has O(N) running time instead of O(N*logN). INPUT PARAMETERS: X - old spline nodes Y - function values X2 - new spline nodes OPTIONAL PARAMETERS: N - points count: * N>=2 * if given, only first N points from X/Y are used * if not given, automatically detected from X/Y sizes (len(X) must be equal to len(Y)) BoundLType - boundary condition type for the left boundary BoundL - left boundary condition (first or second derivative, depending on the BoundLType) BoundRType - boundary condition type for the right boundary BoundR - right boundary condition (first or second derivative, depending on the BoundRType) N2 - new points count: * N2>=2 * if given, only first N2 points from X2 are used * if not given, automatically detected from X2 size OUTPUT PARAMETERS: F2 - function values at X2[] ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. Function values are correctly reordered on return, so F2[I] is always equal to S(X2[I]) independently of points order. SETTING BOUNDARY VALUES: The BoundLType/BoundRType parameters can have the following values: * -1, which corresponds to the periodic (cyclic) boundary conditions. In this case: * both BoundLType and BoundRType must be equal to -1. * BoundL/BoundR are ignored * Y[last] is ignored (it is assumed to be equal to Y[first]). * 0, which corresponds to the parabolically terminated spline (BoundL and/or BoundR are ignored). 
* 1, which corresponds to the first derivative boundary condition * 2, which corresponds to the second derivative boundary condition * by default, BoundType=0 is used PROBLEMS WITH PERIODIC BOUNDARY CONDITIONS: Problems with periodic boundary conditions have Y[first_point]=Y[last_point]. However, this subroutine doesn't require you to specify equal values for the first and last points - it automatically forces them to be equal by copying Y[first_point] (corresponds to the leftmost, minimal X[]) to Y[last_point]. However it is recommended to pass consistent values of Y[], i.e. to make Y[first_point]=Y[last_point]. -- ALGLIB PROJECT -- Copyright 03.09.2010 by Bochkanov Sergey *************************************************************************/
void spline1dconvcubic(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, const ae_int_t boundltype, const double boundl, const ae_int_t boundrtype, const double boundr, const real_1d_array &x2, const ae_int_t n2, real_1d_array &y2, const xparams _xparams = alglib::xdefault); void spline1dconvcubic(const real_1d_array &x, const real_1d_array &y, const real_1d_array &x2, real_1d_array &y2, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This function solves the following problem: given table y[] of function values at old nodes x[] and new nodes x2[], it calculates and returns table of function values y2[], first and second derivatives d2[] and dd2[] (calculated at x2[]). This function yields same result as Spline1DBuildCubic() call followed by sequence of Spline1DDiff2() calls, but it can be several times faster, whilst still having the same O(N*logN) running time. When called for ordered X[], this function has O(N) running time instead of O(N*logN). INPUT PARAMETERS: X - old spline nodes Y - function values X2 - new spline nodes OPTIONAL PARAMETERS: N - points count: * N>=2 * if given, only first N points from X/Y are used * if not given, automatically detected from X/Y sizes (len(X) must be equal to len(Y)) BoundLType - boundary condition type for the left boundary BoundL - left boundary condition (first or second derivative, depending on the BoundLType) BoundRType - boundary condition type for the right boundary BoundR - right boundary condition (first or second derivative, depending on the BoundRType) N2 - new points count: * N2>=2 * if given, only first N2 points from X2 are used * if not given, automatically detected from X2 size OUTPUT PARAMETERS: F2 - function values at X2[] D2 - first derivatives at X2[] DD2 - second derivatives at X2[] ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. Function values are correctly reordered on return, so F2[I] is always equal to S(X2[I]) independently of points order. SETTING BOUNDARY VALUES: The BoundLType/BoundRType parameters can have the following values: * -1, which corresponds to the periodic (cyclic) boundary conditions. In this case: * both BoundLType and BoundRType must be equal to -1. * BoundL/BoundR are ignored * Y[last] is ignored (it is assumed to be equal to Y[first]). 
* 0, which corresponds to the parabolically terminated spline (BoundL and/or BoundR are ignored). * 1, which corresponds to the first derivative boundary condition * 2, which corresponds to the second derivative boundary condition * by default, BoundType=0 is used PROBLEMS WITH PERIODIC BOUNDARY CONDITIONS: Problems with periodic boundary conditions have Y[first_point]=Y[last_point]. However, this subroutine doesn't require you to specify equal values for the first and last points - it automatically forces them to be equal by copying Y[first_point] (corresponds to the leftmost, minimal X[]) to Y[last_point]. However it is recommended to pass consistent values of Y[], i.e. to make Y[first_point]=Y[last_point]. -- ALGLIB PROJECT -- Copyright 03.09.2010 by Bochkanov Sergey *************************************************************************/
void spline1dconvdiff2cubic(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, const ae_int_t boundltype, const double boundl, const ae_int_t boundrtype, const double boundr, const real_1d_array &x2, const ae_int_t n2, real_1d_array &y2, real_1d_array &d2, real_1d_array &dd2, const xparams _xparams = alglib::xdefault); void spline1dconvdiff2cubic(const real_1d_array &x, const real_1d_array &y, const real_1d_array &x2, real_1d_array &y2, real_1d_array &d2, real_1d_array &dd2, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This function solves the following problem: given table y[] of function values at old nodes x[] and new nodes x2[], it calculates and returns table of function values y2[] and derivatives d2[] (calculated at x2[]). This function yields same result as Spline1DBuildCubic() call followed by sequence of Spline1DDiff() calls, but it can be several times faster, whilst still having the same O(N*logN) running time. When called for ordered X[], this function has O(N) running time instead of O(N*logN). INPUT PARAMETERS: X - old spline nodes Y - function values X2 - new spline nodes OPTIONAL PARAMETERS: N - points count: * N>=2 * if given, only first N points from X/Y are used * if not given, automatically detected from X/Y sizes (len(X) must be equal to len(Y)) BoundLType - boundary condition type for the left boundary BoundL - left boundary condition (first or second derivative, depending on the BoundLType) BoundRType - boundary condition type for the right boundary BoundR - right boundary condition (first or second derivative, depending on the BoundRType) N2 - new points count: * N2>=2 * if given, only first N2 points from X2 are used * if not given, automatically detected from X2 size OUTPUT PARAMETERS: F2 - function values at X2[] D2 - first derivatives at X2[] ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. Function values are correctly reordered on return, so F2[I] is always equal to S(X2[I]) independently of points order. SETTING BOUNDARY VALUES: The BoundLType/BoundRType parameters can have the following values: * -1, which corresponds to the periodic (cyclic) boundary conditions. In this case: * both BoundLType and BoundRType must be equal to -1. * BoundL/BoundR are ignored * Y[last] is ignored (it is assumed to be equal to Y[first]). * 0, which corresponds to the parabolically terminated spline (BoundL and/or BoundR are ignored). 
* 1, which corresponds to the first derivative boundary condition * 2, which corresponds to the second derivative boundary condition * by default, BoundType=0 is used PROBLEMS WITH PERIODIC BOUNDARY CONDITIONS: Problems with periodic boundary conditions have Y[first_point]=Y[last_point]. However, this subroutine doesn't require you to specify equal values for the first and last points - it automatically forces them to be equal by copying Y[first_point] (corresponds to the leftmost, minimal X[]) to Y[last_point]. However it is recommended to pass consistent values of Y[], i.e. to make Y[first_point]=Y[last_point]. -- ALGLIB PROJECT -- Copyright 03.09.2010 by Bochkanov Sergey *************************************************************************/
void spline1dconvdiffcubic(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, const ae_int_t boundltype, const double boundl, const ae_int_t boundrtype, const double boundr, const real_1d_array &x2, const ae_int_t n2, real_1d_array &y2, real_1d_array &d2, const xparams _xparams = alglib::xdefault); void spline1dconvdiffcubic(const real_1d_array &x, const real_1d_array &y, const real_1d_array &x2, real_1d_array &y2, real_1d_array &d2, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This subroutine differentiates the spline. INPUT PARAMETERS: C - spline interpolant. X - point Result: S - S(x) DS - S'(x) D2S - S''(x) -- ALGLIB PROJECT -- Copyright 24.06.2007 by Bochkanov Sergey *************************************************************************/
void spline1ddiff(const spline1dinterpolant &c, const double x, double &s, double &ds, double &d2s, const xparams _xparams = alglib::xdefault);
/************************************************************************* Fitting by the smoothing (penalized) cubic spline. This function approximates N scattered points (some of X[] may be equal to each other) by the cubic spline with M equidistant nodes spanning interval [min(x),max(x)]. The problem is regularized by adding nonlinearity penalty to the usual least squares penalty function: MERIT_FUNC = F_LS + F_NL where F_LS is a least squares error term, and F_NL is a nonlinearity penalty which is roughly proportional to LambdaNS*integral{ S''(x)^2*dx }. Algorithm applies automatic renormalization of F_NL which makes penalty term roughly invariant to scaling of X[] and changes in M. This function is a new edition of penalized regression spline fitting, a fast and compact one which needs much fewer resources than its previous version: just O(maxMN) memory and O(maxMN) time. NOTE: it is OK to run this function with both M<<N and M>>N; say, it is possible to process 100 points with 1000-node spline. INPUT PARAMETERS: X - points, array[0..N-1]. Y - function values, array[0..N-1]. N - number of points (optional): * N>0 * if given, only first N elements of X/Y are processed * if not given, automatically determined from lengths M - number of basis functions ( = number_of_nodes), M>=4. LambdaNS - LambdaNS>=0, regularization constant passed by user. It penalizes nonlinearity in the regression spline. Possible values to start from are 0.00001, 0.1, 1 OUTPUT PARAMETERS: S - spline interpolant. Rep - Following fields are set: * TerminationType set to 1 * RMSError rms error on the (X,Y). * AvgError average error on the (X,Y). * AvgRelError average relative error on the non-zero Y * MaxError maximum error -- ALGLIB PROJECT -- Copyright 10.04.2023 by Bochkanov Sergey *************************************************************************/
void spline1dfit(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, const ae_int_t m, const double lambdans, spline1dinterpolant &s, spline1dfitreport &rep, const xparams _xparams = alglib::xdefault); void spline1dfit(const real_1d_array &x, const real_1d_array &y, const ae_int_t m, const double lambdans, spline1dinterpolant &s, spline1dfitreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This function solves the following problem: given table y[] of function values at nodes x[], it calculates and returns tables of first and second function derivatives d1[] and d2[] (calculated at the same nodes x[]). This function yields same result as Spline1DBuildCubic() call followed by sequence of Spline1DDiff2() calls, but it can be several times faster, whilst still having the same O(N*logN) running time. When called for ordered X[], this function has O(N) running time instead of O(N*logN). INPUT PARAMETERS: X - spline nodes Y - function values OPTIONAL PARAMETERS: N - points count: * N>=2 * if given, only first N points are used * if not given, automatically detected from X/Y sizes (len(X) must be equal to len(Y)) BoundLType - boundary condition type for the left boundary BoundL - left boundary condition (first or second derivative, depending on the BoundLType) BoundRType - boundary condition type for the right boundary BoundR - right boundary condition (first or second derivative, depending on the BoundRType) OUTPUT PARAMETERS: D1 - S' values at X[] D2 - S'' values at X[] ORDER OF POINTS Subroutine automatically sorts points, so caller may pass unsorted array. Derivative values are correctly reordered on return, so D[I] is always equal to S'(X[I]) independently of points order. SETTING BOUNDARY VALUES: The BoundLType/BoundRType parameters can have the following values: * -1, which corresponds to the periodic (cyclic) boundary conditions. In this case: * both BoundLType and BoundRType must be equal to -1. * BoundL/BoundR are ignored * Y[last] is ignored (it is assumed to be equal to Y[first]). * 0, which corresponds to the parabolically terminated spline (BoundL and/or BoundR are ignored). 
* 1, which corresponds to the first derivative boundary condition * 2, which corresponds to the second derivative boundary condition * by default, BoundType=0 is used PROBLEMS WITH PERIODIC BOUNDARY CONDITIONS: Problems with periodic boundary conditions have Y[first_point]=Y[last_point]. However, this subroutine doesn't require you to specify equal values for the first and last points - it automatically forces them to be equal by copying Y[first_point] (corresponds to the leftmost, minimal X[]) to Y[last_point]. However it is recommended to pass consistent values of Y[], i.e. to make Y[first_point]=Y[last_point]. -- ALGLIB PROJECT -- Copyright 03.09.2010 by Bochkanov Sergey *************************************************************************/
void spline1dgriddiff2cubic(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, const ae_int_t boundltype, const double boundl, const ae_int_t boundrtype, const double boundr, real_1d_array &d1, real_1d_array &d2, const xparams _xparams = alglib::xdefault); void spline1dgriddiff2cubic(const real_1d_array &x, const real_1d_array &y, real_1d_array &d1, real_1d_array &d2, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This function solves the following problem: given a table y[] of function values at nodes x[], it calculates and returns a table of function derivatives d[] (calculated at the same nodes x[]). This function yields the same result as a Spline1DBuildCubic() call followed by a sequence of Spline1DDiff() calls, but it can be several times faster, whilst still having the same O(N*logN) running time. When called for ordered X[], this function has O(N) running time instead of O(N*logN). INPUT PARAMETERS: X - spline nodes Y - function values OPTIONAL PARAMETERS: N - points count: * N>=2 * if given, only first N points are used * if not given, automatically detected from X/Y sizes (len(X) must be equal to len(Y)) BoundLType - boundary condition type for the left boundary BoundL - left boundary condition (first or second derivative, depending on the BoundLType) BoundRType - boundary condition type for the right boundary BoundR - right boundary condition (first or second derivative, depending on the BoundRType) OUTPUT PARAMETERS: D - derivative values at X[] ORDER OF POINTS The subroutine automatically sorts the points, so the caller may pass an unsorted array. Derivative values are correctly reordered on return, so D[I] is always equal to S'(X[I]) independently of the points order. SETTING BOUNDARY VALUES: The BoundLType/BoundRType parameters can have the following values: * -1, which corresponds to the periodic (cyclic) boundary conditions. In this case: * both BoundLType and BoundRType must be equal to -1. * BoundL/BoundR are ignored * Y[last] is ignored (it is assumed to be equal to Y[first]). * 0, which corresponds to the parabolically terminated spline (BoundL and/or BoundR are ignored).
* 1, which corresponds to the first derivative boundary condition * 2, which corresponds to the second derivative boundary condition * by default, BoundType=0 is used PROBLEMS WITH PERIODIC BOUNDARY CONDITIONS: Problems with periodic boundary conditions have Y[first_point]=Y[last_point]. However, this subroutine doesn't require you to specify equal values for the first and last points - it automatically forces them to be equal by copying Y[first_point] (corresponds to the leftmost, minimal X[]) to Y[last_point]. However it is recommended to pass consistent values of Y[], i.e. to make Y[first_point]=Y[last_point]. -- ALGLIB PROJECT -- Copyright 03.09.2010 by Bochkanov Sergey *************************************************************************/
void spline1dgriddiffcubic(const real_1d_array &x, const real_1d_array &y, const ae_int_t n, const ae_int_t boundltype, const double boundl, const ae_int_t boundrtype, const double boundr, real_1d_array &d, const xparams _xparams = alglib::xdefault); void spline1dgriddiffcubic(const real_1d_array &x, const real_1d_array &y, real_1d_array &d, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This subroutine integrates the spline. INPUT PARAMETERS: C - spline interpolant. X - right bound of the integration interval [a, x], here 'a' denotes min(x[]) Result: integral(S(t)dt,a,x) -- ALGLIB PROJECT -- Copyright 23.06.2007 by Bochkanov Sergey *************************************************************************/
double spline1dintegrate(const spline1dinterpolant &c, const double x, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine performs linear transformation of the spline argument. INPUT PARAMETERS: C - spline interpolant. A, B- transformation coefficients: x = A*t + B Result: C - transformed spline -- ALGLIB PROJECT -- Copyright 30.06.2007 by Bochkanov Sergey *************************************************************************/
void spline1dlintransx(spline1dinterpolant &c, const double a, const double b, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine performs linear transformation of the spline. INPUT PARAMETERS: C - spline interpolant. A, B- transformation coefficients: S2(x) = A*S(x) + B Result: C - transformed spline -- ALGLIB PROJECT -- Copyright 30.06.2007 by Bochkanov Sergey *************************************************************************/
void spline1dlintransy(spline1dinterpolant &c, const double a, const double b, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function serializes data structure to string/stream. Important properties of s_out: * it contains alphanumeric characters, dots, underscores, minus signs * these symbols are grouped into words, which are separated by spaces and Windows-style (CR+LF) newlines * although serializer uses spaces and CR+LF as separators, you can replace any separator character by arbitrary combination of spaces, tabs, Windows or Unix newlines. It allows flexible reformatting of the string in case you want to include it into a text or XML file. But you should not insert separators into the middle of the "words" nor should you change the case of letters. * s_out can be freely moved between 32-bit and 64-bit systems, little and big endian machines, and so on. You can serialize structure on 32-bit machine and unserialize it on 64-bit one (or vice versa), or serialize it on SPARC and unserialize on x86. You can also serialize it in C++ version of ALGLIB and unserialize it in C# one, and vice versa. *************************************************************************/
void spline1dserialize(const spline1dinterpolant &obj, std::string &s_out); void spline1dserialize(const spline1dinterpolant &obj, std::ostream &s_out);
/************************************************************************* This subroutine unpacks the spline into the coefficients table. INPUT PARAMETERS: C - spline interpolant. OUTPUT PARAMETERS: N - points count Tbl - coefficients table, unpacked format, array[0..N-2, 0..5]. For I = 0...N-2: Tbl[I,0] = X[i] Tbl[I,1] = X[i+1] Tbl[I,2] = C0 Tbl[I,3] = C1 Tbl[I,4] = C2 Tbl[I,5] = C3 On [x[i], x[i+1]] the spline is equal to: S(x) = C0 + C1*t + C2*t^2 + C3*t^3, t = x-x[i] NOTE: You can rebuild the spline with the Spline1DBuildHermite() function, which accepts as inputs function values and derivatives at nodes, which are easy to calculate when you have the coefficients. -- ALGLIB PROJECT -- Copyright 29.06.2007 by Bochkanov Sergey *************************************************************************/
void spline1dunpack(const spline1dinterpolant &c, ae_int_t &n, real_2d_array &tbl, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function unserializes data structure from string/stream. *************************************************************************/
void spline1dunserialize(const std::string &s_in, spline1dinterpolant &obj); void spline1dunserialize(const std::istream &s_in, spline1dinterpolant &obj);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // We use cubic spline to do resampling, i.e. having
        // values of f(x)=x^2 sampled at 5 equidistant nodes on [-1,+1]
        // we calculate values/derivatives of cubic spline on 
        // another grid (equidistant with 9 nodes on [-1,+1])
        // WITHOUT CONSTRUCTION OF SPLINE OBJECT.
        //
        // There are efficient functions spline1dconvcubic(),
        // spline1dconvdiffcubic() and spline1dconvdiff2cubic() 
        // for such calculations.
        //
        // We use default boundary conditions ("parabolically terminated
        // spline") because cubic spline built with such boundary conditions 
        // will exactly reproduce any quadratic f(x).
        //
        // Actually, we could use natural conditions, but we feel that 
        // spline which exactly reproduces f() will show us more 
        // understandable results.
        //
        real_1d_array x_old = "[-1.0,-0.5,0.0,+0.5,+1.0]";
        real_1d_array y_old = "[+1.0,0.25,0.0,0.25,+1.0]";
        real_1d_array x_new = "[-1.00,-0.75,-0.50,-0.25,0.00,+0.25,+0.50,+0.75,+1.00]";
        real_1d_array y_new;
        real_1d_array d1_new;
        real_1d_array d2_new;

        //
        // First, conversion without differentiation.
        //
        spline1dconvcubic(x_old, y_old, x_new, y_new);
        printf("%s\n", y_new.tostring(3).c_str()); // EXPECTED: [1.0000, 0.5625, 0.2500, 0.0625, 0.0000, 0.0625, 0.2500, 0.5625, 1.0000]

        //
        // Then, conversion with differentiation (first derivatives only)
        //
        spline1dconvdiffcubic(x_old, y_old, x_new, y_new, d1_new);
        printf("%s\n", y_new.tostring(3).c_str()); // EXPECTED: [1.0000, 0.5625, 0.2500, 0.0625, 0.0000, 0.0625, 0.2500, 0.5625, 1.0000]
        printf("%s\n", d1_new.tostring(3).c_str()); // EXPECTED: [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]

        //
        // Finally, conversion with first and second derivatives
        //
        spline1dconvdiff2cubic(x_old, y_old, x_new, y_new, d1_new, d2_new);
        printf("%s\n", y_new.tostring(3).c_str()); // EXPECTED: [1.0000, 0.5625, 0.2500, 0.0625, 0.0000, 0.0625, 0.2500, 0.5625, 1.0000]
        printf("%s\n", d1_new.tostring(3).c_str()); // EXPECTED: [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]
        printf("%s\n", d2_new.tostring(3).c_str()); // EXPECTED: [2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // We use cubic spline to interpolate f(x)=x^2 sampled 
        // at 5 equidistant nodes on [-1,+1].
        //
        // First, we use default boundary conditions ("parabolically terminated
        // spline") because cubic spline built with such boundary conditions 
        // will exactly reproduce any quadratic f(x).
        //
        // Then we try to use natural boundary conditions
        //     d2S(-1)/dx^2 = 0.0
        //     d2S(+1)/dx^2 = 0.0
        // and see that such a spline interpolates f(x) with a small error.
        //
        real_1d_array x = "[-1.0,-0.5,0.0,+0.5,+1.0]";
        real_1d_array y = "[+1.0,0.25,0.0,0.25,+1.0]";
        double t = 0.25;
        double v;
        spline1dinterpolant s;
        ae_int_t natural_bound_type = 2;
        //
        // Test exact boundary conditions: build S(x), calculate S(0.25)
        // (almost same as original function)
        //
        spline1dbuildcubic(x, y, s);
        v = spline1dcalc(s, t);
        printf("%.4f\n", double(v)); // EXPECTED: 0.0625

        //
        // Test natural boundary conditions: build S(x), calculate S(0.25)
        // (small interpolation error)
        //
        spline1dbuildcubic(x, y, 5, natural_bound_type, 0.0, natural_bound_type, 0.0, s);
        v = spline1dcalc(s, t);
        printf("%.4f\n", double(v)); // EXPECTED: 0.0580
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // We use cubic spline to do grid differentiation, i.e. having
        // values of f(x)=x^2 sampled at 5 equidistant nodes on [-1,+1]
        // we calculate derivatives of cubic spline at nodes WITHOUT
        // CONSTRUCTION OF SPLINE OBJECT.
        //
        // There are efficient functions spline1dgriddiffcubic() and
        // spline1dgriddiff2cubic() for such calculations.
        //
        // We use default boundary conditions ("parabolically terminated
        // spline") because cubic spline built with such boundary conditions 
        // will exactly reproduce any quadratic f(x).
        //
        // Actually, we could use natural conditions, but we feel that 
        // spline which exactly reproduces f() will show us more 
        // understandable results.
        //
        real_1d_array x = "[-1.0,-0.5,0.0,+0.5,+1.0]";
        real_1d_array y = "[+1.0,0.25,0.0,0.25,+1.0]";
        real_1d_array d1;
        real_1d_array d2;

        //
        // We calculate first derivatives: they must be equal to 2*x
        //
        spline1dgriddiffcubic(x, y, d1);
        printf("%s\n", d1.tostring(3).c_str()); // EXPECTED: [-2.0, -1.0, 0.0, +1.0, +2.0]

        //
        // Now test griddiff2, which returns first AND second derivatives.
        // First derivative is 2*x, second is equal to 2.0
        //
        spline1dgriddiff2cubic(x, y, d1, d2);
        printf("%s\n", d1.tostring(3).c_str()); // EXPECTED: [-2.0, -1.0, 0.0, +1.0, +2.0]
        printf("%s\n", d2.tostring(3).c_str()); // EXPECTED: [ 2.0,  2.0, 2.0,  2.0,  2.0]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // We use piecewise linear spline to interpolate f(x)=x^2 sampled 
        // at 5 equidistant nodes on [-1,+1].
        //
        real_1d_array x = "[-1.0,-0.5,0.0,+0.5,+1.0]";
        real_1d_array y = "[+1.0,0.25,0.0,0.25,+1.0]";
        double t = 0.25;
        double v;
        spline1dinterpolant s;

        // build spline
        spline1dbuildlinear(x, y, s);

        // calculate S(0.25) - it is quite different from 0.25^2=0.0625
        v = spline1dcalc(s, t);
        printf("%.4f\n", double(v)); // EXPECTED: 0.125
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // Spline built with spline1dbuildcubic() can be non-monotone even when
        // Y-values form monotone sequence. Say, for x=[0,1,2] and y=[0,1,1]
        // cubic spline will monotonically grow until x=1.5 and then start
        // decreasing.
        //
        // That's why ALGLIB provides special spline construction function
        // which builds spline which preserves monotonicity of the original
        // dataset.
        //
        // NOTE: in case original dataset is non-monotonic, ALGLIB splits it
        // into monotone subsequences and builds piecewise monotonic spline.
        //
        real_1d_array x = "[0,1,2]";
        real_1d_array y = "[0,1,1]";
        spline1dinterpolant s;

        // build spline
        spline1dbuildmonotone(x, y, s);

        // calculate S at x = [-0.5, 0.0, 0.5, 1.0, 1.5, 2.0]
        // you may see that spline is really monotonic
        double v;
        v = spline1dcalc(s, -0.5);
        printf("%.4f\n", double(v)); // EXPECTED: 0.0000
        v = spline1dcalc(s, 0.0);
        printf("%.4f\n", double(v)); // EXPECTED: 0.0000
        v = spline1dcalc(s, +0.5);
        printf("%.4f\n", double(v)); // EXPECTED: 0.5000
        v = spline1dcalc(s, 1.0);
        printf("%.4f\n", double(v)); // EXPECTED: 1.0000
        v = spline1dcalc(s, 1.5);
        printf("%.4f\n", double(v)); // EXPECTED: 1.0000
        v = spline1dcalc(s, 2.0);
        printf("%.4f\n", double(v)); // EXPECTED: 1.0000
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

spline2dbuilder
spline2dfitreport
spline2dinterpolant
spline2dbuildbicubic
spline2dbuildbicubicmissing
spline2dbuildbicubicmissingbuf
spline2dbuildbicubicv
spline2dbuildbicubicvbuf
spline2dbuildbilinear
spline2dbuildbilinearmissing
spline2dbuildbilinearmissingbuf
spline2dbuildbilinearv
spline2dbuildbilinearvbuf
spline2dbuildclampedv
spline2dbuildercreate
spline2dbuildersetalgoblocklls
spline2dbuildersetalgofastddm
spline2dbuildersetalgonaivells
spline2dbuildersetarea
spline2dbuildersetareaauto
spline2dbuildersetconstterm
spline2dbuildersetgrid
spline2dbuildersetlinterm
spline2dbuildersetpoints
spline2dbuildersetuserterm
spline2dbuildersetzeroterm
spline2dbuildhermitev
spline2dcalc
spline2dcalcv
spline2dcalcvbuf
spline2dcalcvi
spline2dcopy
spline2ddiff
spline2ddiff2
spline2ddiff2vi
spline2ddiffvi
spline2dfit
spline2dlintransf
spline2dlintransxy
spline2dresamplebicubic
spline2dresamplebilinear
spline2dserialize
spline2dunpack
spline2dunpackv
spline2dunserialize
spline2d_bicubic Bicubic spline interpolation
spline2d_bilinear Bilinear spline interpolation
spline2d_copytrans Copy and transform
spline2d_fit_blocklls Fitting bicubic spline to irregular data
spline2d_unpack Unpacking bilinear spline
spline2d_vector Copy and transform
/************************************************************************* Nonlinear least squares solver used to fit 2D splines to data *************************************************************************/
class spline2dbuilder { public: spline2dbuilder(); spline2dbuilder(const spline2dbuilder &rhs); spline2dbuilder& operator=(const spline2dbuilder &rhs); virtual ~spline2dbuilder(); };
/************************************************************************* Spline 2D fitting report: rmserror RMS error avgerror average error maxerror maximum error r2 coefficient of determination, R-squared, 1-RSS/TSS *************************************************************************/
class spline2dfitreport { public: spline2dfitreport(); spline2dfitreport(const spline2dfitreport &rhs); spline2dfitreport& operator=(const spline2dfitreport &rhs); virtual ~spline2dfitreport(); double rmserror; double avgerror; double maxerror; double r2; };
/************************************************************************* 2-dimensional spline interpolant *************************************************************************/
class spline2dinterpolant { public: spline2dinterpolant(); spline2dinterpolant(const spline2dinterpolant &rhs); spline2dinterpolant& operator=(const spline2dinterpolant &rhs); virtual ~spline2dinterpolant(); };
/************************************************************************* This subroutine was deprecated in ALGLIB 3.6.0 We recommend you to switch to Spline2DBuildBicubicV(), which is more flexible and accepts its arguments in more convenient order. -- ALGLIB PROJECT -- Copyright 05.07.2007 by Bochkanov Sergey *************************************************************************/
void spline2dbuildbicubic(const real_1d_array &x, const real_1d_array &y, const real_2d_array &f, const ae_int_t m, const ae_int_t n, spline2dinterpolant &c, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine builds a bicubic vector-valued spline, with some spline cells being missing due to missing nodes. This function produces a C2-continuous spline, i.e. the spline has smooth first and second derivatives both inside spline cells and at the boundaries. When the node (i,j) is missing, it means that: a) we don't have a function value at this point (elements of F[] are ignored), and b) we don't need spline values at cells adjacent to the node (i,j), i.e. up to 4 spline cells will be dropped. An attempt to compute spline value at a missing cell will return NAN. It is important to understand that this subroutine does NOT support interpolation on scattered grids. It allows us to drop some nodes, but at the cost of making a "hole in the spline" around this point. If you want a function that can "fill the gap", use RBF or another scattered interpolation method. The intended usage for this subroutine is regularly sampled, but non-rectangular, datasets. Input parameters: X - spline abscissas, array[0..N-1] Y - spline ordinates, array[0..M-1] F - function values, array[0..M*N*D-1]: * first D elements store D values at (X[0],Y[0]) * next D elements store D values at (X[1],Y[0]) * general form - D function values at (X[i],Y[j]) are stored at F[D*(J*N+I)...D*(J*N+I)+D-1]. * missing values are ignored Missing - array[M*N], Missing[J*N+I]=True means that corresponding entries of F[] are missing nodes. M,N - grid size, M>=2, N>=2 D - vector dimension, D>=1 Output parameters: C - spline interpolant -- ALGLIB PROJECT -- Copyright 27.06.2022 by Bochkanov Sergey *************************************************************************/
void spline2dbuildbicubicmissing(const real_1d_array &x, const ae_int_t n, const real_1d_array &y, const ae_int_t m, const real_1d_array &f, const boolean_1d_array &missing, const ae_int_t d, spline2dinterpolant &c, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine builds bicubic vector-valued spline, with some spline cells being missing due to missing nodes. Buffered version of Spline2DBuildBicubicMissing() which reuses memory previously allocated in C as much as possible. -- ALGLIB PROJECT -- Copyright 27.06.2022 by Bochkanov Sergey *************************************************************************/
void spline2dbuildbicubicmissingbuf(const real_1d_array &x, const ae_int_t n, const real_1d_array &y, const ae_int_t m, const real_1d_array &f, const boolean_1d_array &missing, const ae_int_t d, spline2dinterpolant &c, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine builds a bicubic vector-valued spline using parabolically terminated end conditions. This function produces a C2-continuous spline, i.e. the spline has smooth first and second derivatives both inside spline cells and at their boundaries. INPUT PARAMETERS: X - spline abscissas, array[N] N - N>=2: * if not given, automatically determined as len(X) * if given, only leading N elements of X are used Y - spline ordinates, array[M] M - M>=2: * if not given, automatically determined as len(Y) * if given, only leading M elements of Y are used F - function values, array[M*N*D]: * first D elements store D values at (X[0],Y[0]) * next D elements store D values at (X[1],Y[0]) * general form - D function values at (X[i],Y[j]) are stored at F[D*(J*N+I)...D*(J*N+I)+D-1]. D - vector dimension, D>=1: * D=1 means scalar-valued bicubic spline * D>1 means vector-valued bicubic spline OUTPUT PARAMETERS: C - spline interpolant -- ALGLIB PROJECT -- Copyright 2012-2023 by Bochkanov Sergey *************************************************************************/
void spline2dbuildbicubicv(const real_1d_array &x, const ae_int_t n, const real_1d_array &y, const ae_int_t m, const real_1d_array &f, const ae_int_t d, spline2dinterpolant &c, const xparams _xparams = alglib::xdefault); void spline2dbuildbicubicv(const real_1d_array &x, const real_1d_array &y, const real_1d_array &f, const ae_int_t d, spline2dinterpolant &c, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This subroutine builds bicubic vector-valued spline. Buffered version of Spline2DBuildBicubicV() which reuses memory previously allocated in C as much as possible. -- ALGLIB PROJECT -- Copyright 16.04.2012 by Bochkanov Sergey *************************************************************************/
void spline2dbuildbicubicvbuf(const real_1d_array &x, const ae_int_t n, const real_1d_array &y, const ae_int_t m, const real_1d_array &f, const ae_int_t d, spline2dinterpolant &c, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine was deprecated in ALGLIB 3.6.0 We recommend you to switch to Spline2DBuildBilinearV(), which is more flexible and accepts its arguments in more convenient order. -- ALGLIB PROJECT -- Copyright 05.07.2007 by Bochkanov Sergey *************************************************************************/
void spline2dbuildbilinear(const real_1d_array &x, const real_1d_array &y, const real_2d_array &f, const ae_int_t m, const ae_int_t n, spline2dinterpolant &c, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine builds a bilinear vector-valued spline, with some spline cells being missing due to missing nodes. This function produces a C0-continuous spline, i.e. the spline itself is continuous, however its first and second derivatives have discontinuities at the spline cell boundaries. When the node (i,j) is missing, it means that: a) we don't have a function value at this point (elements of F[] are ignored), and b) we don't need spline values at cells adjacent to the node (i,j), i.e. up to 4 spline cells will be dropped. An attempt to compute spline value at a missing cell will return NAN. It is important to understand that this subroutine does NOT support interpolation on scattered grids. It allows us to drop some nodes, but at the cost of making a "hole in the spline" around this point. If you want a function that can "fill the gap", use RBF or another scattered interpolation method. The intended usage for this subroutine is regularly sampled, but non-rectangular, datasets. Input parameters: X - spline abscissas, array[0..N-1] Y - spline ordinates, array[0..M-1] F - function values, array[0..M*N*D-1]: * first D elements store D values at (X[0],Y[0]) * next D elements store D values at (X[1],Y[0]) * general form - D function values at (X[i],Y[j]) are stored at F[D*(J*N+I)...D*(J*N+I)+D-1]. * missing values are ignored Missing - array[M*N], Missing[J*N+I]=True means that corresponding entries of F[] are missing nodes. M,N - grid size, M>=2, N>=2 D - vector dimension, D>=1 Output parameters: C - spline interpolant -- ALGLIB PROJECT -- Copyright 27.06.2022 by Bochkanov Sergey *************************************************************************/
void spline2dbuildbilinearmissing(const real_1d_array &x, const ae_int_t n, const real_1d_array &y, const ae_int_t m, const real_1d_array &f, const boolean_1d_array &missing, const ae_int_t d, spline2dinterpolant &c, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine builds bilinear vector-valued spline, with some spline cells being missing due to missing nodes. Buffered version of Spline2DBuildBilinearMissing() which reuses memory previously allocated in C as much as possible. -- ALGLIB PROJECT -- Copyright 27.06.2022 by Bochkanov Sergey *************************************************************************/
void spline2dbuildbilinearmissingbuf(const real_1d_array &x, const ae_int_t n, const real_1d_array &y, const ae_int_t m, const real_1d_array &f, const boolean_1d_array &missing, const ae_int_t d, spline2dinterpolant &c, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine builds bilinear vector-valued spline. This function produces C0-continuous spline, i.e. the spline itself is continuous, however its first and second derivatives have discontinuities at the spline cell boundaries. Input parameters: X - spline abscissas, array[0..N-1] Y - spline ordinates, array[0..M-1] F - function values, array[0..M*N*D-1]: * first D elements store D values at (X[0],Y[0]) * next D elements store D values at (X[1],Y[0]) * general form - D function values at (X[i],Y[j]) are stored at F[D*(J*N+I)...D*(J*N+I)+D-1]. M,N - grid size, M>=2, N>=2 D - vector dimension, D>=1 Output parameters: C - spline interpolant -- ALGLIB PROJECT -- Copyright 16.04.2012 by Bochkanov Sergey *************************************************************************/
void spline2dbuildbilinearv(const real_1d_array &x, const ae_int_t n, const real_1d_array &y, const ae_int_t m, const real_1d_array &f, const ae_int_t d, spline2dinterpolant &c, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/************************************************************************* This subroutine builds bilinear vector-valued spline. Buffered version of Spline2DBuildBilinearV() which reuses memory previously allocated in C as much as possible. -- ALGLIB PROJECT -- Copyright 16.04.2012 by Bochkanov Sergey *************************************************************************/
void spline2dbuildbilinearvbuf(const real_1d_array &x, const ae_int_t n, const real_1d_array &y, const ae_int_t m, const real_1d_array &f, const ae_int_t d, spline2dinterpolant &c, const xparams _xparams = alglib::xdefault);
/************************************************************************* This subroutine builds a bicubic vector-valued spline using clamped boundary conditions: * spline values at the grid nodes are specified * boundary conditions for first, second derivatives or for parabolic termination at four boundaries (bottom y=min(Y[]), top y=max(Y[]), left x=min(X[]), right x=max(X[])) are specified * mixed derivatives at corners are specified * it is possible to have different boundary conditions for different boundaries (first derivatives along one boundary, second derivatives along other one, parabolic termination along the rest and so on) * it is possible to have either a scalar (D=1) or a vector-valued spline This function produces a C2-continuous spline, i.e. the spline has smooth first and second derivatives both inside spline cells and at their boundaries. INPUT PARAMETERS: X - spline abscissas, array[N]. Can be unsorted, the function will sort it together with boundary conditions and F[] array (the same set of permutations will be applied to X[] and F[]). N - N>=2: * if not given, automatically determined as len(X) * if given, only leading N elements of X are used Y - spline ordinates, array[M]. Can be unsorted, the function will sort it together with boundary conditions and F[] array (the same set of permutations will be applied to Y[] and F[]). M - M>=2: * if not given, automatically determined as len(Y) * if given, only leading M elements of Y are used BndBtm - array[D*N], boundary conditions at the bottom boundary of the interpolation area (corresponds to y=min(Y[])): * if BndTypeBtm=0, the spline has a 'parabolic termination' boundary condition across that specific boundary. In this case BndBtm is not even referenced by the function and can be unallocated.
* otherwise contains derivatives with respect to X * if BndTypeBtm=1, first derivatives are given * if BndTypeBtm=2, second derivatives are given * first D entries store derivatives at x=X[0], y=minY, subsequent D entries store derivatives at x=X[1], y=minY and so on BndTop - array[D*N], boundary conditions at the top boundary of the interpolation area (corresponds to y=max(Y[]): * if BndTypeTop=0, the spline has a 'parabolic termination' boundary condition across that specific boundary. In this case BndTop is not even referenced by the function and can be unallocated. * otherwise contains derivatives with respect to X * if BndTypeTop=1, first derivatives are given * if BndTypeTop=2, second derivatives are given * first D entries store derivatives at x=X[0], y=maxY, subsequent D entries store derivatives at x=X[1], y=maxY and so on BndLft - array[D*M], boundary conditions at the left boundary of the interpolation area (corresponds to x=min(X[]): * if BndTypeLft=0, the spline has a 'parabolic termination' boundary condition across that specific boundary. In this case BndLft is not even referenced by the function and can be unallocated. * otherwise contains derivatives with respect to Y * if BndTypeLft=1, first derivatives are given * if BndTypeLft=2, second derivatives are given * first D entries store derivatives at x=minX, y=Y[0], subsequent D entries store derivatives at x=minX, y=Y[1] and so on BndRgt - array[D*M], boundary conditions at the right boundary of the interpolation area (corresponds to x=max(X[]): * if BndTypeRgt=0, the spline has a 'parabolic termination' boundary condition across that specific boundary. In this case BndRgt is not even referenced by the function and can be unallocated. 
* otherwise contains derivatives with respect to Y * if BndTypeRgt=1, first derivatives are given * if BndTypeRgt=2, second derivatives are given * first D entries store derivatives at x=maxX, y=Y[0], subsequent D entries store derivatives at x=maxX, y=Y[1] and so on MixedD - array[D*4], mixed derivatives at 4 corners of the interpolation area: * derivative order depends on the order of boundary conditions (bottom/top and left/right) intersecting at that corner: ** for BndType(Btm|Top)=BndType(Lft|Rgt)=1 user has to provide d2S/dXdY ** for BndType(Btm|Top)=BndType(Lft|Rgt)=2 user has to provide d4S/(dX^2*dY^2) ** for BndType(Btm|Top)=1, BndType(Lft|Rgt)=2 user has to provide d3S/(dX^2*dY) ** for BndType(Btm|Top)=2, BndType(Lft|Rgt)=1 user has to provide d3S/(dX*dY^2) ** if one of the intersecting bounds has 'parabolic termination' condition, this specific mixed derivative is not used * first D entries store derivatives at the bottom left corner x=min(X[]), y=min(Y[]) * subsequent D entries store derivatives at the bottom right corner x=max(X[]), y=min(Y[]) * subsequent D entries store derivatives at the top left corner x=min(X[]), y=max(Y[]) * subsequent D entries store derivatives at the top right corner x=max(X[]), y=max(Y[]) * if all bounds have 'parabolic termination' condition, MixedD[] is not referenced at all and can be unallocated. F - function values, array[M*N*D]: * first D elements store D values at (X[0],Y[0]) * next D elements store D values at (X[1],Y[0]) * general form - D function values at (X[i],Y[j]) are stored at F[D*(J*N+I)...D*(J*N+I)+D-1]. D - vector dimension, D>=1: * D=1 means scalar-valued bicubic spline * D>1 means vector-valued bicubic spline OUTPUT PARAMETERS: C - spline interpolant -- ALGLIB PROJECT -- Copyright 2012-2023 by Bochkanov Sergey *************************************************************************/
void spline2dbuildclampedv(const real_1d_array &x, const ae_int_t n, const real_1d_array &y, const ae_int_t m, const real_1d_array &bndbtm, const ae_int_t bndtypebtm, const real_1d_array &bndtop, const ae_int_t bndtypetop, const real_1d_array &bndlft, const ae_int_t bndtypelft, const real_1d_array &bndrgt, const ae_int_t bndtypergt, const real_1d_array &mixedd, const real_1d_array &f, const ae_int_t d, spline2dinterpolant &c, const xparams _xparams = alglib::xdefault);
void spline2dbuildclampedv(const real_1d_array &x, const real_1d_array &y, const real_1d_array &bndbtm, const ae_int_t bndtypebtm, const real_1d_array &bndtop, const ae_int_t bndtypetop, const real_1d_array &bndlft, const ae_int_t bndtypelft, const real_1d_array &bndrgt, const ae_int_t bndtypergt, const real_1d_array &mixedd, const real_1d_array &f, const ae_int_t d, spline2dinterpolant &c, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This subroutine creates a least squares solver used to fit 2D splines to irregularly sampled (scattered) data.

The solver object is used to perform spline fits as follows:
* the solver object is created with the spline2dbuildercreate() function
* the dataset is added with the spline2dbuildersetpoints() function
* the fit area is chosen:
  * spline2dbuildersetarea()     - for a user-defined area
  * spline2dbuildersetareaauto() - for an automatically chosen area
* the number of grid nodes is chosen with spline2dbuildersetgrid()
* the prior term is chosen with one of the following functions:
  * spline2dbuildersetlinterm()   to set a linear prior
  * spline2dbuildersetconstterm() to set a constant prior
  * spline2dbuildersetzeroterm()  to set a zero prior
  * spline2dbuildersetuserterm()  to set a user-defined constant prior
* the solver algorithm is chosen with either:
  * spline2dbuildersetalgoblocklls() - BlockLLS algorithm, medium-scale problems
  * spline2dbuildersetalgofastddm()  - FastDDM algorithm, large-scale problems
* finally, the fitting itself is performed with the spline2dfit() function.

Most of the steps above can be omitted; the solver is configured with good defaults. The minimum is to call:
* spline2dbuildercreate()    to create the solver object
* spline2dbuildersetpoints() to specify the dataset
* spline2dbuildersetgrid()   to tell how many nodes you need
* spline2dfit()              to perform the fit

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes following important improvements
! of this function:
! * high-performance native backend with same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend you to read 'Working with commercial version' section of
! ALGLIB Reference Manual in order to find out how to use performance-
! related features provided by commercial edition of ALGLIB.

INPUT PARAMETERS:
    D       -   positive number, number of Y-components: D=1 for a simple scalar fit, D>1 for vector-valued spline fitting.

OUTPUT PARAMETERS:
    S       -   solver object

  -- ALGLIB PROJECT --
     Copyright 29.01.2018 by Bochkanov Sergey
*************************************************************************/
void spline2dbuildercreate(const ae_int_t d, spline2dbuilder &state, const xparams _xparams = alglib::xdefault);

Examples:   [1]  
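The minimal fitting workflow described above can be sketched as follows. This is a sketch, not a complete program: it assumes the ALGLIB headers are on the include path and that the dataset has already been loaded into `xy`; the function name `fit_scalar_spline` is hypothetical.

```cpp
#include "interpolation.h"   // ALGLIB 2D spline fitting API (assumed include path)

using namespace alglib;

// Fit a scalar (D=1) bicubic spline to scattered data using defaults for
// area, prior term and solver algorithm, as described above.
void fit_scalar_spline(const real_2d_array &xy, ae_int_t npoints,
                       spline2dinterpolant &s, spline2dfitreport &rep)
{
    spline2dbuilder builder;

    // 1. create the solver object for a scalar fit
    spline2dbuildercreate(1, builder);

    // 2. attach the dataset: xy is array[npoints, 2+D], first two columns
    //    are coordinates, the remaining D columns are function values
    spline2dbuildersetpoints(builder, xy, npoints);

    // 3. choose grid density (32x32 nodes here, a hypothetical choice);
    //    area, prior and solver keep their good defaults
    spline2dbuildersetgrid(builder, 32, 32);

    // 4. perform the fit
    spline2dfit(builder, s, rep);
}
```

All four calls use the signatures documented in this section; everything omitted (area, prior, solver choice) falls back to the defaults mentioned above.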

/*************************************************************************
This function allows you to choose the least squares solver used to perform fitting. This function sets the solver algorithm to "BlockLLS", which performs least squares fitting with a fast sparse direct solver, with an optional nonsmoothness penalty being applied.

This solver produces a C2-continuous spline.

The nonlinearity penalty has the following form:

    P() ~ Lambda * integral[ (d2S/dx2)^2 + 2*(d2S/dxdy)^2 + (d2S/dy2)^2 ] dxdy

here the integral is calculated over the entire grid, and "~" means "proportional" because the integral is normalized after calculation. Extremely large values of Lambda result in a linear fit being performed.

NOTE: this algorithm is the most robust and controllable one, but it is limited to 512x512 grids and (say) up to 1,000,000 points. However, ALGLIB has one more spline solver: the FastDDM algorithm, which is intended for really large-scale problems (in the 10M-100M range). The FastDDM algorithm also has better parallelism properties.

More information on the BlockLLS solver:
* memory requirements: ~[32*K^3+256*NPoints] bytes for a KxK grid with an NPoints-sized dataset
* serial running time: O(K^4+NPoints)
* parallelism potential: limited. You may get some sublinear gain when working with large grids (K's in the 256..512 range)

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes following important improvements
! of this function:
! * high-performance native backend with same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend you to read 'Working with commercial version' section of
! ALGLIB Reference Manual in order to find out how to use performance-
! related features provided by commercial edition of ALGLIB.

INPUT PARAMETERS:
    S       -   spline 2D builder object
    LambdaNS-   non-negative value:
                * a positive value means that some smoothing is applied
                * a zero value means that no smoothing is applied, and the corresponding entries of the design matrix are numerically zero and dropped from consideration.

  -- ALGLIB --
     Copyright 05.02.2018 by Bochkanov Sergey
*************************************************************************/
void spline2dbuildersetalgoblocklls(spline2dbuilder &state, const double lambdans, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function allows you to choose the least squares solver used to perform fitting. This function sets the solver algorithm to "FastDDM", which performs fast parallel fitting by splitting the problem into smaller chunks and merging the results together.

Unlike BlockLLS, this solver produces a merely C1-continuous model, because the domain decomposition part disrupts C2 continuity.

This solver is optimized for large-scale problems, starting from 256x256 grids and up to 10000x10000 grids. Of course, it will work for smaller grids too.

A more detailed description of the algorithm is given below:
* the algorithm generates a hierarchy of nested grids, ranging from ~16x16 (the topmost "layer" of the model) to the ~KX*KY one (the final layer). Upper layers model global behavior of the function, lower layers are used to model fine details. Moving from layer to layer doubles the grid density.
* fitting is started from the topmost layer; subsequent layers are fitted using residuals from the previous ones.
* the user may choose to skip generation of the upper layers and generate only a few bottom ones, which results in much better performance and parallelization efficiency, at the cost of the algorithm's inability to "patch" large holes in the dataset.
* every layer is regularized using a progressively increasing regularization coefficient; thus, increasing LambdaV penalizes fine details first, leaving lower frequencies almost intact for a while.
* after fitting is done, all layers are merged together into one bicubic spline

IMPORTANT: the regularization coefficient used by this solver is different from the one used by BlockLLS. The latter utilizes a nonlinearity penalty, which is global in nature (large regularization results in a global linear trend being extracted); this solver uses another, localized form of penalty, which is suitable for parallel processing.

Notes on memory and performance:
* memory requirements: most memory is consumed during modeling of the higher layers; ~[512*NPoints] bytes are required for a model with the full hierarchy of grids being generated. However, if you skip a few topmost layers, you will get nearly constant (wrt. points count and grid size) memory consumption.
* serial running time: O(K*K)+O(NPoints) for a KxK grid
* parallelism potential: good. You may get a nearly linear speed-up when performing fitting with just a few layers. Adding more layers results in the model becoming more global, which somewhat reduces the efficiency of the parallel code.

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes following important improvements
! of this function:
! * high-performance native backend with same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend you to read 'Working with commercial version' section of
! ALGLIB Reference Manual in order to find out how to use performance-
! related features provided by commercial edition of ALGLIB.

INPUT PARAMETERS:
    S       -   spline 2D builder object
    NLayers -   number of layers in the model:
                * NLayers>=1 means that up to the chosen number of bottom layers are fitted
                * NLayers=0 means that the maximum number of layers is chosen (according to the current grid size)
                * NLayers<=-1 means that up to |NLayers| topmost layers are skipped
                Recommendations:
                * a good "default" value is 2 layers
                * you may need more layers if your dataset is very irregular and you want to "patch" large holes. For a grid step H (equal to AreaWidth/GridSize) you may expect that the last layer reproduces variations at distance H (and can patch holes that wide); higher layers operate at distances 2*H, 4*H, 8*H and so on.
                * a good value for the "bulletproof" mode is NLayers=0, which results in the complete hierarchy of layers being generated.
    LambdaV -   regularization coefficient, chosen in such a way that it penalizes bottom layers (fine details) first. LambdaV>=0; a zero value means that no penalty is applied.

  -- ALGLIB --
     Copyright 05.02.2018 by Bochkanov Sergey
*************************************************************************/
void spline2dbuildersetalgofastddm(spline2dbuilder &state, const ae_int_t nlayers, const double lambdav, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function allows you to choose the least squares solver used to perform fitting. This function sets the solver algorithm to "NaiveLLS".

IMPORTANT: NaiveLLS is NOT intended to be used in real life code! This algorithm solves the problem by generating a dense (K^2)x(K^2+NPoints) matrix and solving the linear least squares problem with a dense solver. It is here just to test BlockLLS against a reference solver (and maybe for someone trying to compare a well optimized solver against a straightforward approach to the LLS problem).

More information on the naive LLS solver:
* memory requirements: ~[8*K^4+256*NPoints] bytes for a KxK grid.
* serial running time: O(K^6+NPoints) for a KxK grid
* when compared with BlockLLS, NaiveLLS has a ~K times larger memory demand and a ~K^2 times larger running time.

INPUT PARAMETERS:
    S       -   spline 2D builder object
    LambdaNS-   nonsmoothness penalty

  -- ALGLIB --
     Copyright 05.02.2018 by Bochkanov Sergey
*************************************************************************/
void spline2dbuildersetalgonaivells(spline2dbuilder &state, const double lambdans, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets area where 2D spline interpolant is built to user-defined one: [XA,XB]*[YA,YB]

INPUT PARAMETERS:
    S       -   spline 2D builder object
    XA,XB   -   spatial extent in the first (X) dimension, XA<XB
    YA,YB   -   spatial extent in the second (Y) dimension, YA<YB

  -- ALGLIB --
     Copyright 05.02.2018 by Bochkanov Sergey
*************************************************************************/
void spline2dbuildersetarea(spline2dbuilder &state, const double xa, const double xb, const double ya, const double yb, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets area where 2D spline interpolant is built. "Auto" means that area extent is determined automatically from dataset extent.

INPUT PARAMETERS:
    S       -   spline 2D builder object

  -- ALGLIB --
     Copyright 05.02.2018 by Bochkanov Sergey
*************************************************************************/
void spline2dbuildersetareaauto(spline2dbuilder &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets constant prior term (model is a sum of bicubic spline and global prior, which can be linear, constant, user-defined constant or zero).

Constant prior term is determined by least squares fitting.

INPUT PARAMETERS:
    S       -   spline builder

  -- ALGLIB --
     Copyright 01.02.2018 by Bochkanov Sergey
*************************************************************************/
void spline2dbuildersetconstterm(spline2dbuilder &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets the nodes count for the 2D spline interpolant. Fitting is performed on the area defined with one of the "setarea" functions; this one sets the number of nodes placed upon the fitting area.

INPUT PARAMETERS:
    S       -   spline 2D builder object
    KX      -   nodes count for the first (X) dimension; the fitting interval [XA,XB] is separated into KX-1 subintervals, with KX nodes created at the boundaries.
    KY      -   nodes count for the second (Y) dimension; the fitting interval [YA,YB] is separated into KY-1 subintervals, with KY nodes created at the boundaries.

NOTE: at least 4 nodes are created in each dimension, so KX and KY are silently increased if needed.

  -- ALGLIB --
     Copyright 05.02.2018 by Bochkanov Sergey
*************************************************************************/
void spline2dbuildersetgrid(spline2dbuilder &state, const ae_int_t kx, const ae_int_t ky, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets linear prior term (model is a sum of bicubic spline and global prior, which can be linear, constant, user-defined constant or zero).

Linear prior term is determined by least squares fitting.

INPUT PARAMETERS:
    S       -   spline builder

  -- ALGLIB --
     Copyright 01.02.2018 by Bochkanov Sergey
*************************************************************************/
void spline2dbuildersetlinterm(spline2dbuilder &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function adds a dataset to the builder object.

This function overrides the results of previous calls, i.e. multiple calls of this function will result in only the last set being added.

INPUT PARAMETERS:
    S       -   spline 2D builder object
    XY      -   points, array[N,2+D]. One row corresponds to one point in the dataset. The first 2 elements are coordinates, the next D elements are function values. The array may be larger than specified, in which case only the leading [N,2+D] elements will be used.
    N       -   number of points in the dataset

  -- ALGLIB --
     Copyright 05.02.2018 by Bochkanov Sergey
*************************************************************************/
void spline2dbuildersetpoints(spline2dbuilder &state, const real_2d_array &xy, const ae_int_t n, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets a user-defined constant prior term (model is a sum of bicubic spline and global prior, which can be linear, constant, user-defined constant or zero).

INPUT PARAMETERS:
    S       -   spline builder
    V       -   value for the user-defined prior

  -- ALGLIB --
     Copyright 01.02.2018 by Bochkanov Sergey
*************************************************************************/
void spline2dbuildersetuserterm(spline2dbuilder &state, const double v, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function sets zero prior term (model is a sum of bicubic spline and global prior, which can be linear, constant, user-defined constant or zero).

INPUT PARAMETERS:
    S       -   spline builder

  -- ALGLIB --
     Copyright 01.02.2018 by Bochkanov Sergey
*************************************************************************/
void spline2dbuildersetzeroterm(spline2dbuilder &state, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This subroutine builds a Hermite bicubic vector-valued spline.

This function produces a merely C1-continuous spline, i.e. the spline has smooth first derivatives.

INPUT PARAMETERS:
    X       -   spline abscissas, array[N]
    N       -   N>=2:
                * if not given, automatically determined as len(X)
                * if given, only leading N elements of X are used
    Y       -   spline ordinates, array[M]
    M       -   M>=2:
                * if not given, automatically determined as len(Y)
                * if given, only leading M elements of Y are used
    F       -   function values, array[M*N*D]:
                * the first D elements store D values at (X[0],Y[0])
                * the next D elements store D values at (X[1],Y[0])
                * general form: the D function values at (X[i],Y[j]) are stored at F[D*(J*N+I)...D*(J*N+I)+D-1]
    dFdX    -   spline derivatives with respect to X, array[M*N*D], using the same layout as F
    dFdY    -   spline derivatives with respect to Y, array[M*N*D], using the same layout as F
    d2FdXdY -   mixed derivatives with respect to X and Y, array[M*N*D], using the same layout as F
    D       -   vector dimension, D>=1:
                * D=1 means a scalar-valued bicubic spline
                * D>1 means a vector-valued bicubic spline

OUTPUT PARAMETERS:
    C       -   spline interpolant

  -- ALGLIB PROJECT --
     Copyright 2012-2023 by Bochkanov Sergey
*************************************************************************/
void spline2dbuildhermitev(const real_1d_array &x, const ae_int_t n, const real_1d_array &y, const ae_int_t m, const real_1d_array &f, const real_1d_array &dfdx, const real_1d_array &dfdy, const real_1d_array &d2fdxdy, const ae_int_t d, spline2dinterpolant &c, const xparams _xparams = alglib::xdefault);
void spline2dbuildhermitev(const real_1d_array &x, const real_1d_array &y, const real_1d_array &f, const real_1d_array &dfdx, const real_1d_array &dfdy, const real_1d_array &d2fdxdy, const ae_int_t d, spline2dinterpolant &c, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This subroutine calculates the value of the bilinear or bicubic spline at the given point (X,Y).

Input parameters:
    C   -   2D spline object. Built by spline2dbuildbilinearv or spline2dbuildbicubicv.
    X, Y-   point

Result:
    S(x,y)

  -- ALGLIB PROJECT --
     Copyright 05.07.2007 by Bochkanov Sergey
*************************************************************************/
double spline2dcalc(const spline2dinterpolant &c, const double x, const double y, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/*************************************************************************
This subroutine calculates bilinear or bicubic vector-valued spline at the given point (X,Y).

INPUT PARAMETERS:
    C   -   spline interpolant.
    X, Y-   point

OUTPUT PARAMETERS:
    F   -   array[D] which stores function values. F is out-parameter and it is reallocated after call to this function. In case you want to reuse previously allocated F, you may use Spline2DCalcVBuf(), which reallocates F only when it is too small.

  -- ALGLIB PROJECT --
     Copyright 16.04.2012 by Bochkanov Sergey
*************************************************************************/
void spline2dcalcv(const spline2dinterpolant &c, const double x, const double y, real_1d_array &f, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This subroutine calculates bilinear or bicubic vector-valued spline at the given point (X,Y). If you need just some specific component of the vector-valued spline, you can use the spline2dcalcvi() function.

INPUT PARAMETERS:
    C   -   spline interpolant.
    X, Y-   point
    F   -   output buffer, possibly preallocated array. In case array size is large enough to store the result, it is not reallocated. An array which is too short will be reallocated.

OUTPUT PARAMETERS:
    F   -   array[D] (or larger) which stores function values

  -- ALGLIB PROJECT --
     Copyright 01.02.2018 by Bochkanov Sergey
*************************************************************************/
void spline2dcalcvbuf(const spline2dinterpolant &c, const double x, const double y, real_1d_array &f, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This subroutine calculates a specific component of a vector-valued bilinear or bicubic spline at the given point (X,Y).

INPUT PARAMETERS:
    C   -   spline interpolant.
    X, Y-   point
    I   -   component index, in [0,D). An exception is generated for out of range values.

RESULT:
    value of the I-th component

  -- ALGLIB PROJECT --
     Copyright 01.02.2018 by Bochkanov Sergey
*************************************************************************/
double spline2dcalcvi(const spline2dinterpolant &c, const double x, const double y, const ae_int_t i, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This subroutine makes the copy of the spline model.

Input parameters:
    C   -   spline interpolant

Output parameters:
    CC  -   spline copy

  -- ALGLIB PROJECT --
     Copyright 29.06.2007 by Bochkanov Sergey
*************************************************************************/
void spline2dcopy(const spline2dinterpolant &c, spline2dinterpolant &cc, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This subroutine calculates the value of a bilinear or bicubic spline and its derivatives.

Use Spline2DDiff2() if you need second derivatives Sxx and Syy.

Input parameters:
    C   -   spline interpolant.
    X, Y-   point

Output parameters:
    F   -   S(x,y)
    FX  -   dS(x,y)/dX
    FY  -   dS(x,y)/dY

  -- ALGLIB PROJECT --
     Copyright 05.07.2007 by Bochkanov Sergey
*************************************************************************/
void spline2ddiff(const spline2dinterpolant &c, const double x, const double y, double &f, double &fx, double &fy, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This subroutine calculates the value of a bilinear or bicubic spline and its second derivatives.

Input parameters:
    C   -   spline interpolant.
    X, Y-   point

Output parameters:
    F   -   S(x,y)
    FX  -   dS(x,y)/dX
    FY  -   dS(x,y)/dY
    FXX -   d2S(x,y)/dXdX
    FXY -   d2S(x,y)/dXdY
    FYY -   d2S(x,y)/dYdY

  -- ALGLIB PROJECT --
     Copyright 17.04.2023 by Bochkanov Sergey.
     The second derivatives code was contributed by Horst Greiner under public domain terms.
*************************************************************************/
void spline2ddiff2(const spline2dinterpolant &c, const double x, const double y, double &f, double &fx, double &fy, double &fxx, double &fxy, double &fyy, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This subroutine calculates the value and the first and second derivatives of the I-th component of a vector-valued bilinear or bicubic spline.

Input parameters:
    C   -   spline interpolant.
    X, Y-   point
    I   -   component index, in [0,D)

Output parameters:
    F   -   S(x,y)
    FX  -   dS(x,y)/dX
    FY  -   dS(x,y)/dY
    FXX -   d2S(x,y)/dXdX
    FXY -   d2S(x,y)/dXdY
    FYY -   d2S(x,y)/dYdY

  -- ALGLIB PROJECT --
     Copyright 17.04.2023 by Bochkanov Sergey.
     The second derivatives code was contributed by Horst Greiner under public domain terms.
*************************************************************************/
void spline2ddiff2vi(const spline2dinterpolant &c, const double x, const double y, const ae_int_t i, double &f, double &fx, double &fy, double &fxx, double &fxy, double &fyy, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This subroutine calculates the value and the derivatives of the I-th component of a vector-valued bilinear or bicubic spline.

Input parameters:
    C   -   spline interpolant.
    X, Y-   point
    I   -   component index, in [0,D)

Output parameters:
    F   -   S(x,y)
    FX  -   dS(x,y)/dX
    FY  -   dS(x,y)/dY

  -- ALGLIB PROJECT --
     Copyright 05.07.2007 by Bochkanov Sergey
*************************************************************************/
void spline2ddiffvi(const spline2dinterpolant &c, const double x, const double y, const ae_int_t i, double &f, double &fx, double &fy, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function fits a bicubic spline to the current dataset, using the current area/grid and the current LLS solver.

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes following important improvements
! of this function:
! * high-performance native backend with same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend you to read 'Working with commercial version' section of
! ALGLIB Reference Manual in order to find out how to use performance-
! related features provided by commercial edition of ALGLIB.

INPUT PARAMETERS:
    State   -   spline 2D builder object

OUTPUT PARAMETERS:
    S       -   2D spline, fit result
    Rep     -   fitting report, which provides some additional info about errors, R2 coefficient and so on.

  -- ALGLIB --
     Copyright 05.02.2018 by Bochkanov Sergey
*************************************************************************/
void spline2dfit(spline2dbuilder &state, spline2dinterpolant &s, spline2dfitreport &rep, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This subroutine performs linear transformation of the spline.

Input parameters:
    C   -   spline interpolant.
    A, B-   transformation coefficients: S2(x,y) = A*S(x,y) + B

Output parameters:
    C   -   transformed spline

  -- ALGLIB PROJECT --
     Copyright 30.06.2007 by Bochkanov Sergey
*************************************************************************/
void spline2dlintransf(spline2dinterpolant &c, const double a, const double b, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This subroutine performs linear transformation of the spline argument.

Input parameters:
    C       -   spline interpolant
    AX, BX  -   transformation coefficients: x = AX*t + BX
    AY, BY  -   transformation coefficients: y = AY*u + BY

Output parameters:
    C       -   transformed spline

  -- ALGLIB PROJECT --
     Copyright 30.06.2007 by Bochkanov Sergey
*************************************************************************/
void spline2dlintransxy(spline2dinterpolant &c, const double ax, const double bx, const double ay, const double by, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
Bicubic spline resampling

Input parameters:
    A           -   function values at the old grid, array[0..OldHeight-1, 0..OldWidth-1]
    OldHeight   -   old grid height, OldHeight>1
    OldWidth    -   old grid width, OldWidth>1
    NewHeight   -   new grid height, NewHeight>1
    NewWidth    -   new grid width, NewWidth>1

Output parameters:
    B           -   function values at the new grid, array[0..NewHeight-1, 0..NewWidth-1]

  -- ALGLIB routine --
     15 May, 2007
     Copyright by Bochkanov Sergey
*************************************************************************/
void spline2dresamplebicubic(const real_2d_array &a, const ae_int_t oldheight, const ae_int_t oldwidth, real_2d_array &b, const ae_int_t newheight, const ae_int_t newwidth, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Bilinear spline resampling

Input parameters:
    A           -   function values at the old grid, array[0..OldHeight-1, 0..OldWidth-1]
    OldHeight   -   old grid height, OldHeight>1
    OldWidth    -   old grid width, OldWidth>1
    NewHeight   -   new grid height, NewHeight>1
    NewWidth    -   new grid width, NewWidth>1

Output parameters:
    B           -   function values at the new grid, array[0..NewHeight-1, 0..NewWidth-1]

  -- ALGLIB routine --
     09.07.2007
     Copyright by Bochkanov Sergey
*************************************************************************/
void spline2dresamplebilinear(const real_2d_array &a, const ae_int_t oldheight, const ae_int_t oldwidth, real_2d_array &b, const ae_int_t newheight, const ae_int_t newwidth, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function serializes data structure to string/stream.

Important properties of s_out:
* it contains alphanumeric characters, dots, underscores, minus signs
* these symbols are grouped into words, which are separated by spaces and
  Windows-style (CR+LF) newlines
* although the serializer uses spaces and CR+LF as separators, you can
  replace any separator character by an arbitrary combination of spaces,
  tabs, Windows or Unix newlines. It allows flexible reformatting of the
  string in case you want to include it into a text or XML file. But you
  should not insert separators into the middle of the "words", nor should
  you change the case of letters.
* s_out can be freely moved between 32-bit and 64-bit systems, little and
  big endian machines, and so on. You can serialize the structure on a
  32-bit machine and unserialize it on a 64-bit one (or vice versa), or
  serialize it on SPARC and unserialize it on x86. You can also serialize
  it in the C++ version of ALGLIB and unserialize it in the C# one, and
  vice versa.
*************************************************************************/
void spline2dserialize(const spline2dinterpolant &obj, std::string &s_out);
void spline2dserialize(const spline2dinterpolant &obj, std::ostream &s_out);
/*************************************************************************
This subroutine was deprecated in ALGLIB 3.6.0

We recommend switching to Spline2DUnpackV(), which is more flexible and
accepts its arguments in a more convenient order.

  -- ALGLIB PROJECT --
     Copyright 29.06.2007 by Bochkanov Sergey
*************************************************************************/
void spline2dunpack(const spline2dinterpolant &c, ae_int_t &m, ae_int_t &n, real_2d_array &tbl, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This subroutine unpacks a two-dimensional spline into the coefficients
table.

Input parameters:
    C   -   spline interpolant.

Result:
    M, N-   grid size (x-axis and y-axis)
    D   -   number of components
    Tbl -   coefficients table, unpacked format,
            D components: [0..(N-1)*(M-1)*D-1, 0..20].
            For T=0..D-1 (component index), I = 0..N-2 (x index),
            J=0..M-2 (y index):
                K := T + I*D + J*D*(N-1)
                K-th row stores decomposition for T-th component of the
                vector-valued function:
                    Tbl[K,0] = X[i]
                    Tbl[K,1] = X[i+1]
                    Tbl[K,2] = Y[j]
                    Tbl[K,3] = Y[j+1]
                    Tbl[K,4] = C00
                    Tbl[K,5] = C01
                    Tbl[K,6] = C02
                    Tbl[K,7] = C03
                    Tbl[K,8] = C10
                    Tbl[K,9] = C11
                    ...
                    Tbl[K,19] = C33
                    Tbl[K,20] = 1 if the cell is present, 0 if the cell
                                is missing. In the latter case Tbl[K,4..19]
                                are exactly zero.
            On each grid square the spline equals
                S(x,y) = SUM(c[i,j]*(t^i)*(u^j), i=0..3, j=0..3)
                t = x-x[i]
                u = y-y[j]

  -- ALGLIB PROJECT --
     Copyright 16.04.2012 by Bochkanov Sergey
*************************************************************************/
void spline2dunpackv(const spline2dinterpolant &c, ae_int_t &m, ae_int_t &n, ae_int_t &d, real_2d_array &tbl, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/************************************************************************* This function unserializes data structure from string/stream. *************************************************************************/
void spline2dunserialize(const std::string &s_in, spline2dinterpolant &obj);
void spline2dunserialize(const std::istream &s_in, spline2dinterpolant &obj);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // We use bicubic spline to interpolate f(x,y)=x^2+2*y^2 sampled 
        // at (x,y) from [0.0, 0.5, 1.0] X [0.0, 1.0].
        //
        real_1d_array x = "[0.0, 0.5, 1.0]";
        real_1d_array y = "[0.0, 1.0]";
        real_1d_array f = "[0.00,0.25,1.00,2.00,2.25,3.00]";
        double vx = 0.25;
        double vy = 0.50;
        double v;
        double dx;
        double dy;
        spline2dinterpolant s;

        // build spline
        spline2dbuildbicubicv(x, 3, y, 2, f, 1, s);

        // calculate S(0.25,0.50)
        v = spline2dcalc(s, vx, vy);
        printf("%.4f\n", double(v)); // EXPECTED: 1.0625

        // calculate derivatives
        spline2ddiff(s, vx, vy, v, dx, dy);
        printf("%.4f\n", double(v)); // EXPECTED: 1.0625
        printf("%.4f\n", double(dx)); // EXPECTED: 0.5000
        printf("%.4f\n", double(dy)); // EXPECTED: 2.0000
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // We use bilinear spline to interpolate f(x,y)=x^2+2*y^2 sampled 
        // at (x,y) from [0.0, 0.5, 1.0] X [0.0, 1.0].
        //
        real_1d_array x = "[0.0, 0.5, 1.0]";
        real_1d_array y = "[0.0, 1.0]";
        real_1d_array f = "[0.00,0.25,1.00,2.00,2.25,3.00]";
        double vx = 0.25;
        double vy = 0.50;
        double v;
        spline2dinterpolant s;

        // build spline
        spline2dbuildbilinearv(x, 3, y, 2, f, 1, s);

        // calculate S(0.25,0.50)
        v = spline2dcalc(s, vx, vy);
        printf("%.4f\n", double(v)); // EXPECTED: 1.1250
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // We build bilinear spline for f(x,y)=x+2*y for (x,y) in [0,1].
        // Then we apply several transformations to this spline.
        //
        real_1d_array x = "[0.0, 1.0]";
        real_1d_array y = "[0.0, 1.0]";
        real_1d_array f = "[0.00,1.00,2.00,3.00]";
        spline2dinterpolant s;
        spline2dinterpolant snew;
        double v;
        spline2dbuildbilinearv(x, 2, y, 2, f, 1, s);

        // copy spline, apply transformation x:=2*xnew, y:=4*ynew
        // evaluate at (xnew,ynew) = (0.25,0.25) - should be same as (x,y)=(0.5,1.0)
        spline2dcopy(s, snew);
        spline2dlintransxy(snew, 2.0, 0.0, 4.0, 0.0);
        v = spline2dcalc(snew, 0.25, 0.25);
        printf("%.4f\n", double(v)); // EXPECTED: 2.500

        // copy spline, apply transformation SNew:=2*S+3
        spline2dcopy(s, snew);
        spline2dlintransf(snew, 2.0, 3.0);
        v = spline2dcalc(snew, 0.5, 1.0);
        printf("%.4f\n", double(v)); // EXPECTED: 8.000

        //
        // Same example, but for vector spline (f0,f1) = {x+2*y, 2*x+y}
        //
        real_1d_array f2 = "[0.00,0.00, 1.00,2.00, 2.00,1.00, 3.00,3.00]";
        real_1d_array vr;
        spline2dbuildbilinearv(x, 2, y, 2, f2, 2, s);

        // copy spline, apply transformation x:=2*xnew, y:=4*ynew
        spline2dcopy(s, snew);
        spline2dlintransxy(snew, 2.0, 0.0, 4.0, 0.0);
        spline2dcalcv(snew, 0.25, 0.25, vr);
        printf("%s\n", vr.tostring(4).c_str()); // EXPECTED: [2.500,2.000]

        // copy spline, apply transformation SNew:=2*S+3
        spline2dcopy(s, snew);
        spline2dlintransf(snew, 2.0, 3.0);
        spline2dcalcv(snew, 0.5, 1.0, vr);
        printf("%s\n", vr.tostring(4).c_str()); // EXPECTED: [8.000,7.000]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // We use bicubic spline to reproduce f(x,y)=1/(1+x^2+2*y^2) sampled
        // at irregular points (x,y) from [-1,+1]*[-1,+1]
        //
        // We have 5 such points, located approximately at corners of the area
        // and its center -  but not exactly at the grid. Thus, we have to FIT
        // the spline, i.e. to solve least squares problem
        //
        real_2d_array xy = "[[-0.987,-0.902,0.359],[0.948,-0.992,0.347],[-1.000,1.000,0.333],[1.000,0.973,0.339],[0.017,0.180,0.968]]";

        //
        // First step is to create spline2dbuilder object and set its properties:
        // * d=1 means that we create vector-valued spline with 1 component
        // * we specify dataset xy
        // * we rely on automatic selection of interpolation area
        // * we tell builder that we want to use 5x5 grid for an underlying spline
        // * we choose least squares solver named BlockLLS and configure it by
        //   telling that we want to apply zero nonlinearity penalty.
        //
        // NOTE: you can specify non-zero lambdav if you want to make your spline
        //       more "rigid", i.e. to penalize nonlinearity.
        //
        // NOTE: ALGLIB has two solvers which fit bicubic splines to irregular
        //       data: BlockLLS and FastDDM. The former is intended for
        //       moderately sized grids (up to 512x512 nodes, although it may
        //       take up to a few minutes); it is the easiest spline fitting
        //       function in the library to use and control. The latter,
        //       FastDDM, is intended for efficient solution of large-scale
        //       problems (up to 100,000,000 nodes). Both solvers can be
        //       parallelized, but FastDDM is much more efficient. See
        //       comments for more information.
        //
        spline2dbuilder builder;
        ae_int_t d = 1;
        double lambdav = 0.000;
        spline2dbuildercreate(d, builder);
        spline2dbuildersetpoints(builder, xy, 5);
        spline2dbuildersetgrid(builder, 5, 5);
        spline2dbuildersetalgoblocklls(builder, lambdav);

        //
        // Now we are ready to fit and evaluate our results
        //
        spline2dinterpolant s;
        spline2dfitreport rep;
        spline2dfit(builder, s, rep);

        // evaluate results - the function value at the grid node is reproduced
        // almost exactly
        double v;
        v = spline2dcalc(s, -1, 1);
        printf("%.2f\n", double(v)); // EXPECTED: 0.33

        // check maximum error - it must be nearly zero
        printf("%.2f\n", double(rep.maxerror)); // EXPECTED: 0.00
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // We build bilinear spline for f(x,y)=x+2*y+3*xy for (x,y) in [0,1].
        // Then we demonstrate how to unpack it.
        //
        real_1d_array x = "[0.0, 1.0]";
        real_1d_array y = "[0.0, 1.0]";
        real_1d_array f = "[0.00,1.00,2.00,6.00]";
        real_2d_array c;
        ae_int_t m;
        ae_int_t n;
        ae_int_t d;
        spline2dinterpolant s;

        // build spline
        spline2dbuildbilinearv(x, 2, y, 2, f, 1, s);

        // unpack and test
        spline2dunpackv(s, m, n, d, c);
        printf("%s\n", c.tostring(4).c_str()); // EXPECTED: [[0, 1, 0, 1, 0,2,0,0, 1,3,0,0, 0,0,0,0, 0,0,0,0, 1]]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // We build bilinear vector-valued spline (f0,f1) = {x+2*y, 2*x+y}
        // Spline is built using function values at 2x2 grid: (x,y)=[0,1]*[0,1]
        // Then we perform evaluation at (x,y)=(0.1,0.3)
        //
        real_1d_array x = "[0.0, 1.0]";
        real_1d_array y = "[0.0, 1.0]";
        real_1d_array f = "[0.00,0.00, 1.00,2.00, 2.00,1.00, 3.00,3.00]";
        spline2dinterpolant s;
        real_1d_array vr;
        spline2dbuildbilinearv(x, 2, y, 2, f, 2, s);
        spline2dcalcv(s, 0.1, 0.3, vr);
        printf("%s\n", vr.tostring(4).c_str()); // EXPECTED: [0.700,0.500]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

spline3dinterpolant
spline3dbuildtrilinearv
spline3dbuildtrilinearvbuf
spline3dcalc
spline3dcalcv
spline3dcalcvbuf
spline3dlintransf
spline3dlintransxyz
spline3dresampletrilinear
spline3dunpackv
spline3d_trilinear Trilinear spline interpolation
spline3d_vector Vector-valued trilinear spline interpolation
/************************************************************************* 3-dimensional spline interpolant *************************************************************************/
class spline3dinterpolant { public: spline3dinterpolant(); spline3dinterpolant(const spline3dinterpolant &rhs); spline3dinterpolant& operator=(const spline3dinterpolant &rhs); virtual ~spline3dinterpolant(); };
/*************************************************************************
This subroutine builds trilinear vector-valued spline.

INPUT PARAMETERS:
    X   -   spline abscissas, array[0..N-1]
    Y   -   spline ordinates, array[0..M-1]
    Z   -   spline applicates, array[0..L-1]
    F   -   function values, array[0..M*N*L*D-1]:
            * first D elements store D values at (X[0],Y[0],Z[0])
            * next D elements store D values at (X[1],Y[0],Z[0])
            * next D elements store D values at (X[2],Y[0],Z[0])
            * ...
            * next D elements store D values at (X[0],Y[1],Z[0])
            * next D elements store D values at (X[1],Y[1],Z[0])
            * next D elements store D values at (X[2],Y[1],Z[0])
            * ...
            * next D elements store D values at (X[0],Y[0],Z[1])
            * next D elements store D values at (X[1],Y[0],Z[1])
            * next D elements store D values at (X[2],Y[0],Z[1])
            * ...
            * general form: the D function values at (X[I],Y[J],Z[K]) are
              stored at F[D*(N*(M*K+J)+I)..D*(N*(M*K+J)+I)+D-1].
    M,N,
    L   -   grid size, M>=2, N>=2, L>=2
    D   -   vector dimension, D>=1

OUTPUT PARAMETERS:
    C   -   spline interpolant

  -- ALGLIB PROJECT --
     Copyright 26.04.2012 by Bochkanov Sergey
*************************************************************************/
void spline3dbuildtrilinearv(const real_1d_array &x, const ae_int_t n, const real_1d_array &y, const ae_int_t m, const real_1d_array &z, const ae_int_t l, const real_1d_array &f, const ae_int_t d, spline3dinterpolant &c, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/*************************************************************************
This subroutine builds trilinear vector-valued spline.

Buffered version of Spline3DBuildTrilinearV() which reuses memory
previously allocated in C as much as possible.

  -- ALGLIB PROJECT --
     Copyright 26.04.2012 by Bochkanov Sergey
*************************************************************************/
void spline3dbuildtrilinearvbuf(const real_1d_array &x, const ae_int_t n, const real_1d_array &y, const ae_int_t m, const real_1d_array &z, const ae_int_t l, const real_1d_array &f, const ae_int_t d, spline3dinterpolant &c, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This subroutine calculates the value of the trilinear or tricubic spline
at the given point (X,Y,Z).

INPUT PARAMETERS:
    C       -   coefficients table, built by Spline3DBuildTrilinearV().
    X, Y,
    Z       -   point

Result:
    S(x,y,z)

  -- ALGLIB PROJECT --
     Copyright 26.04.2012 by Bochkanov Sergey
*************************************************************************/
double spline3dcalc(const spline3dinterpolant &c, const double x, const double y, const double z, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This subroutine calculates trilinear or tricubic vector-valued spline at
the given point (X,Y,Z).

INPUT PARAMETERS:
    C       -   spline interpolant.
    X, Y,
    Z       -   point

OUTPUT PARAMETERS:
    F       -   array[D] which stores function values. F is out-parameter
                and it is reallocated after the call to this function. In
                case you want to reuse previously allocated F, you may use
                Spline3DCalcVBuf(), which reallocates F only when it is
                too small.

  -- ALGLIB PROJECT --
     Copyright 26.04.2012 by Bochkanov Sergey
*************************************************************************/
void spline3dcalcv(const spline3dinterpolant &c, const double x, const double y, const double z, real_1d_array &f, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This subroutine calculates trilinear or tricubic vector-valued spline at
the given point (X,Y,Z).

INPUT PARAMETERS:
    C       -   spline interpolant.
    X, Y,
    Z       -   point
    F       -   output buffer, possibly preallocated array. In case the
                array size is large enough to store the result, it is not
                reallocated. An array which is too short will be
                reallocated.

OUTPUT PARAMETERS:
    F       -   array[D] (or larger) which stores function values

  -- ALGLIB PROJECT --
     Copyright 26.04.2012 by Bochkanov Sergey
*************************************************************************/
void spline3dcalcvbuf(const spline3dinterpolant &c, const double x, const double y, const double z, real_1d_array &f, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This subroutine performs linear transformation of the spline.

INPUT PARAMETERS:
    C   -   spline interpolant.
    A, B-   transformation coefficients: S2(x,y,z) = A*S(x,y,z) + B

OUTPUT PARAMETERS:
    C   -   transformed spline

  -- ALGLIB PROJECT --
     Copyright 26.04.2012 by Bochkanov Sergey
*************************************************************************/
void spline3dlintransf(spline3dinterpolant &c, const double a, const double b, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This subroutine performs linear transformation of the spline argument.

INPUT PARAMETERS:
    C       -   spline interpolant
    AX, BX  -   transformation coefficients: x = A*u + B
    AY, BY  -   transformation coefficients: y = A*v + B
    AZ, BZ  -   transformation coefficients: z = A*w + B

OUTPUT PARAMETERS:
    C   -   transformed spline

  -- ALGLIB PROJECT --
     Copyright 26.04.2012 by Bochkanov Sergey
*************************************************************************/
void spline3dlintransxyz(spline3dinterpolant &c, const double ax, const double bx, const double ay, const double by, const double az, const double bz, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Trilinear spline resampling

INPUT PARAMETERS:
    A           -   array[0..OldXCount*OldYCount*OldZCount-1], function
                    values at the old grid:
                        A[0]        x=0, y=0, z=0
                        A[1]        x=1, y=0, z=0
                        A[..]       ...
                        A[..]       x=oldxcount-1, y=0, z=0
                        A[..]       x=0, y=1, z=0
                        A[..]       ...
                        ...
    OldZCount   -   old Z-count, OldZCount>1
    OldYCount   -   old Y-count, OldYCount>1
    OldXCount   -   old X-count, OldXCount>1
    NewZCount   -   new Z-count, NewZCount>1
    NewYCount   -   new Y-count, NewYCount>1
    NewXCount   -   new X-count, NewXCount>1

OUTPUT PARAMETERS:
    B           -   array[0..NewXCount*NewYCount*NewZCount-1], function
                    values at the new grid:
                        B[0]        x=0, y=0, z=0
                        B[1]        x=1, y=0, z=0
                        B[..]       ...
                        B[..]       x=newxcount-1, y=0, z=0
                        B[..]       x=0, y=1, z=0
                        B[..]       ...
                        ...

  -- ALGLIB routine --
     26.04.2012
     Copyright by Bochkanov Sergey
*************************************************************************/
void spline3dresampletrilinear(const real_1d_array &a, const ae_int_t oldzcount, const ae_int_t oldycount, const ae_int_t oldxcount, const ae_int_t newzcount, const ae_int_t newycount, const ae_int_t newxcount, real_1d_array &b, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This subroutine unpacks a three-dimensional spline into the coefficients
table.

INPUT PARAMETERS:
    C   -   spline interpolant.

Result:
    N   -   grid size (X)
    M   -   grid size (Y)
    L   -   grid size (Z)
    D   -   number of components
    SType-  spline type. Currently, only one spline type is supported:
            trilinear spline, as indicated by SType=1.
    Tbl -   spline coefficients: [0..(N-1)*(M-1)*(L-1)*D-1, 0..13].
            For T=0..D-1 (component index), I = 0..N-2 (x index),
            J=0..M-2 (y index), K=0..L-2 (z index):
                Q := T + I*D + J*D*(N-1) + K*D*(N-1)*(M-1),
                Q-th row stores decomposition for T-th component of the
                vector-valued function:
                    Tbl[Q,0] = X[i]
                    Tbl[Q,1] = X[i+1]
                    Tbl[Q,2] = Y[j]
                    Tbl[Q,3] = Y[j+1]
                    Tbl[Q,4] = Z[k]
                    Tbl[Q,5] = Z[k+1]
                    Tbl[Q,6] = C000
                    Tbl[Q,7] = C100
                    Tbl[Q,8] = C010
                    Tbl[Q,9] = C110
                    Tbl[Q,10]= C001
                    Tbl[Q,11]= C101
                    Tbl[Q,12]= C011
                    Tbl[Q,13]= C111
            On each grid cell the spline equals
                S(x,y,z) = SUM(c[i,j,k]*(t^i)*(u^j)*(v^k), i=0..1, j=0..1, k=0..1)
                t = x-x[i]
                u = y-y[j]
                v = z-z[k]

NOTE: the format of Tbl is given for SType=1. Future versions of ALGLIB
      can use different formats for different values of SType.

  -- ALGLIB PROJECT --
     Copyright 26.04.2012 by Bochkanov Sergey
*************************************************************************/
void spline3dunpackv(const spline3dinterpolant &c, ae_int_t &n, ae_int_t &m, ae_int_t &l, ae_int_t &d, ae_int_t &stype, real_2d_array &tbl, const xparams _xparams = alglib::xdefault);
#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // We use trilinear spline to interpolate f(x,y,z)=x+xy+z sampled 
        // at (x,y,z) from [0.0, 1.0] X [0.0, 1.0] X [0.0, 1.0].
        //
        // We store x, y and z-values at local arrays with same names.
        // Function values are stored in the array F as follows:
        //     f[0]     (x,y,z) = (0,0,0)
        //     f[1]     (x,y,z) = (1,0,0)
        //     f[2]     (x,y,z) = (0,1,0)
        //     f[3]     (x,y,z) = (1,1,0)
        //     f[4]     (x,y,z) = (0,0,1)
        //     f[5]     (x,y,z) = (1,0,1)
        //     f[6]     (x,y,z) = (0,1,1)
        //     f[7]     (x,y,z) = (1,1,1)
        //
        real_1d_array x = "[0.0, 1.0]";
        real_1d_array y = "[0.0, 1.0]";
        real_1d_array z = "[0.0, 1.0]";
        real_1d_array f = "[0,1,0,2,1,2,1,3]";
        double vx = 0.50;
        double vy = 0.50;
        double vz = 0.50;
        double v;
        spline3dinterpolant s;

        // build spline
        spline3dbuildtrilinearv(x, 2, y, 2, z, 2, f, 1, s);

        // calculate S(0.5,0.5,0.5)
        v = spline3dcalc(s, vx, vy, vz);
        printf("%.4f\n", double(v)); // EXPECTED: 1.2500
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "interpolation.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // We use trilinear vector-valued spline to interpolate {f0,f1}={x+xy+z,x+xy+yz+z}
        // sampled at (x,y,z) from [0.0, 1.0] X [0.0, 1.0] X [0.0, 1.0].
        //
        // We store x, y and z-values at local arrays with same names.
        // Function values are stored in the array F as follows:
        //     f[0]     f0, (x,y,z) = (0,0,0)
        //     f[1]     f1, (x,y,z) = (0,0,0)
        //     f[2]     f0, (x,y,z) = (1,0,0)
        //     f[3]     f1, (x,y,z) = (1,0,0)
        //     f[4]     f0, (x,y,z) = (0,1,0)
        //     f[5]     f1, (x,y,z) = (0,1,0)
        //     f[6]     f0, (x,y,z) = (1,1,0)
        //     f[7]     f1, (x,y,z) = (1,1,0)
        //     f[8]     f0, (x,y,z) = (0,0,1)
        //     f[9]     f1, (x,y,z) = (0,0,1)
        //     f[10]    f0, (x,y,z) = (1,0,1)
        //     f[11]    f1, (x,y,z) = (1,0,1)
        //     f[12]    f0, (x,y,z) = (0,1,1)
        //     f[13]    f1, (x,y,z) = (0,1,1)
        //     f[14]    f0, (x,y,z) = (1,1,1)
        //     f[15]    f1, (x,y,z) = (1,1,1)
        //
        real_1d_array x = "[0.0, 1.0]";
        real_1d_array y = "[0.0, 1.0]";
        real_1d_array z = "[0.0, 1.0]";
        real_1d_array f = "[0,0, 1,1, 0,0, 2,2, 1,1, 2,2, 1,2, 3,4]";
        double vx = 0.50;
        double vy = 0.50;
        double vz = 0.50;
        spline3dinterpolant s;

        // build spline
        spline3dbuildtrilinearv(x, 2, y, 2, z, 2, f, 2, s);

        // calculate S(0.5,0.5,0.5) - we have vector of values instead of single value
        real_1d_array v;
        spline3dcalcv(s, vx, vy, vz, v);
        printf("%s\n", v.tostring(4).c_str()); // EXPECTED: [1.2500,1.5000]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

ssamodel
ssaaddsequence
ssaanalyzelast
ssaanalyzelastwindow
ssaanalyzesequence
ssaappendpointandupdate
ssaappendsequenceandupdate
ssacleardata
ssacreate
ssaforecastavglast
ssaforecastavgsequence
ssaforecastlast
ssaforecastsequence
ssagetbasis
ssagetlrr
ssasetalgoprecomputed
ssasetalgotopkdirect
ssasetalgotopkrealtime
ssasetmemorylimit
ssasetpoweruplength
ssasetseed
ssasetwindow
ssa_d_basic Simple SSA analysis demo
ssa_d_forecast Simple SSA forecasting demo
ssa_d_realtime Real-time SSA algorithm with fast incremental updates
/************************************************************************* This object stores state of the SSA model. You should use ALGLIB functions to work with this object. *************************************************************************/
class ssamodel { public: ssamodel(); ssamodel(const ssamodel &rhs); ssamodel& operator=(const ssamodel &rhs); virtual ~ssamodel(); };
/*************************************************************************
This function adds a data sequence to the SSA model. Only one-dimensional
sequences are supported.

What is a sequence? The following definitions/requirements apply:
* a sequence is an array of values measured at subsequent, equally
  separated time moments (ticks).
* you may have many sequences in your dataset; say, one sequence may
  correspond to one trading session.
* sequence length should be larger than the current window length
  (shorter sequences will be ignored during analysis).
* analysis is performed within a sequence; different sequences are NOT
  stacked together to produce one large contiguous stream of data.
* analysis is performed for all sequences at once, i.e. the same set of
  basis vectors is computed for all sequences.

INCREMENTAL ANALYSIS

This function is not intended for incremental updates of a previously
found SSA basis. Calling it invalidates all previous analysis results
(the basis is reset and will be recalculated from zero during the next
analysis). If you want to perform incremental/real-time SSA, consider
using the following functions:
* ssaappendpointandupdate() for appending one point
* ssaappendsequenceandupdate() for appending a new sequence

INPUT PARAMETERS:
    S   -   SSA model created with ssacreate()
    X   -   array[N], data, can be larger (additional values are ignored)
    N   -   data length, can be automatically determined from the array
            length. N>=0.

OUTPUT PARAMETERS:
    S   -   SSA model, updated

NOTE: you can clear the dataset with ssacleardata()

  -- ALGLIB --
     Copyright 30.10.2017 by Bochkanov Sergey
*************************************************************************/
void ssaaddsequence(ssamodel &s, const real_1d_array &x, const ae_int_t n, const xparams _xparams = alglib::xdefault);
void ssaaddsequence(ssamodel &s, const real_1d_array &x, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  

/*************************************************************************
This function:
* builds SSA basis using the internally stored (entire) dataset
* returns reconstruction for the last NTicks of the last sequence

If you want to analyze some other sequence, use ssaanalyzesequence().

The reconstruction phase involves generation of NTicks-WindowWidth
sliding windows, their decomposition using the empirical orthogonal
functions found by SSA, followed by averaging of each data point across
several overlapping windows. Thus, every point in the output trend is
reconstructed using up to WindowWidth overlapping windows (exactly
WindowWidth windows at the inner points, just one window at the extremal
points).

IMPORTANT: due to averaging this function returns different results for
           different values of NTicks. It is expected and not a bug.
           For example:
           * Trend[NTicks-1] is always the same because it is not
             averaged in any case (the same applies to Trend[0]).
           * Trend[NTicks-2] has different values for NTicks=WindowWidth
             and NTicks=WindowWidth+1, because the former case means that
             no averaging is performed, and the latter case means that
             averaging using two sliding windows is performed. Larger
             values of NTicks produce the same results as
             NTicks=WindowWidth+1.
           * ...and so on...

PERFORMANCE: this function has O((NTicks-WindowWidth)*WindowWidth*NBasis)
             running time. If you work in a time-constrained setting and
             have to analyze just a few last ticks, choosing NTicks equal
             to WindowWidth+SmoothingLen, with
             SmoothingLen=1...WindowWidth, will result in a good
             compromise between noise cancellation and analysis speed.

INPUT PARAMETERS:
    S       -   SSA model
    NTicks  -   number of ticks to analyze, NTicks>=1.
                * the special case NTicks<=WindowWidth is handled by
                  analyzing the last window and returning the NTicks last
                  ticks.
                * the special case NTicks>LastSequenceLen is handled by
                  prepending the result with NTicks-LastSequenceLen zeros.

OUTPUT PARAMETERS:
    Trend   -   array[NTicks], reconstructed trend line
    Noise   -   array[NTicks], the rest of the signal; it holds that
                ActualData = Trend+Noise.

CACHING/REUSE OF THE BASIS

Caching/reuse of previous results is performed:
* the first call performs a full run of SSA; the basis is stored in the
  cache
* subsequent calls reuse the previously cached basis
* if you call any function which changes model properties (window length,
  algorithm, dataset), the internal basis will be invalidated.
* the only calls which do NOT invalidate the basis are listed below:
  a) ssasetwindow() with the same window length
  b) ssaappendpointandupdate()
  c) ssaappendsequenceandupdate()
  d) ssasetalgotopk...() with exactly the same K
  Calling these functions will result in reuse of the previously found
  basis.

In any case, only the basis is reused. Reconstruction is performed from
scratch every time you call this function.

HANDLING OF DEGENERATE CASES

The following degenerate cases may happen:
* the dataset is empty (no analysis can be done)
* all sequences are shorter than the window length, no analysis can be
  done
* no algorithm is specified (no analysis can be done)
* the last sequence is shorter than the window length (analysis can be
  done, but we can not perform reconstruction on the last sequence)

Calling this function in degenerate cases returns the following result:
* in any case, NTicks ticks are returned
* the trend is assumed to be zero
* the noise is initialized by the last sequence; if the last sequence is
  shorter than the window size, it is moved to the end of the array, and
  the beginning of the noise array is filled by zeros

No analysis is performed in degenerate cases (we immediately return dummy
values, no basis is constructed).

  -- ALGLIB --
     Copyright 30.10.2017 by Bochkanov Sergey
*************************************************************************/
void ssaanalyzelast(ssamodel &s, const ae_int_t nticks, real_1d_array &trend, real_1d_array &noise, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function executes SSA on the internally stored dataset and returns
analysis for the last window of the last sequence. Such analysis is a
lightweight alternative to full-scale reconstruction (see below).

The typical use case for this function is a real-time setting, when you
are interested in quick-and-dirty (very quick and very dirty) processing
of just a few last ticks of the trend.

IMPORTANT: full-scale SSA involves analysis of the ENTIRE dataset, with
           reconstruction being done for all positions of the sliding
           window, with subsequent hankelization (diagonal averaging) of
           the resulting matrix. Such analysis requires
           O((DataLen-Window)*Window*NBasis) FLOPs and can be quite
           costly. However, it has nice noise-canceling effects due to
           averaging.

           This function performs REDUCED analysis of the last window. It
           is much faster - just O(Window*NBasis) - but its results are
           DIFFERENT from those of ssaanalyzelast(). In particular, the
           first few points of the trend are much more prone to noise.

INPUT PARAMETERS:
    S           -   SSA model

OUTPUT PARAMETERS:
    Trend       -   array[WindowSize], reconstructed trend line
    Noise       -   array[WindowSize], the rest of the signal; it holds
                    that ActualData = Trend+Noise.
    NTicks      -   current WindowSize

CACHING/REUSE OF THE BASIS

Caching/reuse of previous results is performed:
* first call performs full run of SSA; basis is stored in the cache
* subsequent calls reuse previously cached basis
* if you call any function which changes model properties (window length,
  algorithm, dataset), internal basis will be invalidated.
* the only calls which do NOT invalidate basis are listed below:
  a) ssasetwindow() with same window length
  b) ssaappendpointandupdate()
  c) ssaappendsequenceandupdate()
  d) ssasetalgotopk...() with exactly same K
  Calling these functions will result in reuse of previously found basis.

In any case, only the basis is reused. Reconstruction is performed from
scratch every time you call this function.

HANDLING OF DEGENERATE CASES

The following degenerate cases may happen:
* dataset is empty (no analysis can be done)
* all sequences are shorter than the window length (no analysis can be done)
* no algorithm is specified (no analysis can be done)
* last sequence is shorter than the window length (analysis can be done,
  but we can not perform reconstruction on the last sequence)

Calling this function in degenerate cases returns the following result:
* in any case, WindowWidth ticks are returned
* trend is assumed to be zero
* noise is initialized with the last sequence; if the last sequence is
  shorter than the window size, it is moved to the end of the array, and
  the beginning of the noise array is filled with zeros

No analysis is performed in degenerate cases (we immediately return dummy
values, no basis is constructed).

  -- ALGLIB --
     Copyright 30.10.2017 by Bochkanov Sergey
*************************************************************************/
void ssaanalyzelastwindow(ssamodel &s, real_1d_array &trend, real_1d_array &noise, ae_int_t &nticks, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function:
* builds SSA basis using internally stored (entire) dataset
* returns reconstruction for the sequence being passed to this function

If you want to analyze the last sequence stored in the model, use
ssaanalyzelast().

Reconstruction phase involves generation of NTicks-WindowWidth sliding
windows, their decomposition using empirical orthogonal functions found by
SSA, followed by averaging of each data point across several overlapping
windows. Thus, every point in the output trend is reconstructed using up
to WindowWidth overlapping windows (exactly WindowWidth windows at the
inner points, just one window at the extremal points).

PERFORMANCE: this function has O((NTicks-WindowWidth)*WindowWidth*NBasis)
             running time. If you work in a time-constrained setting and
             have to analyze just a few last ticks, choosing NTicks equal
             to WindowWidth+SmoothingLen, with SmoothingLen=1...WindowWidth,
             will result in a good compromise between noise cancellation
             and analysis speed.

INPUT PARAMETERS:
    S           -   SSA model
    Data        -   array[NTicks], can be larger (only NTicks leading
                    elements will be used)
    NTicks      -   number of ticks to analyze, NTicks>=1.
                    * the special case NTicks<WindowWidth is handled by
                      returning zeros as trend, and the signal as noise

OUTPUT PARAMETERS:
    Trend       -   array[NTicks], reconstructed trend line
    Noise       -   array[NTicks], the rest of the signal; it holds that
                    ActualData = Trend+Noise.

CACHING/REUSE OF THE BASIS

Caching/reuse of previous results is performed:
* first call performs full run of SSA; basis is stored in the cache
* subsequent calls reuse previously cached basis
* if you call any function which changes model properties (window length,
  algorithm, dataset), internal basis will be invalidated.
* the only calls which do NOT invalidate basis are listed below:
  a) ssasetwindow() with same window length
  b) ssaappendpointandupdate()
  c) ssaappendsequenceandupdate()
  d) ssasetalgotopk...() with exactly same K
  Calling these functions will result in reuse of previously found basis.

In any case, only the basis is reused. Reconstruction is performed from
scratch every time you call this function.

HANDLING OF DEGENERATE CASES

The following degenerate cases may happen:
* dataset is empty (no analysis can be done)
* all sequences are shorter than the window length (no analysis can be done)
* no algorithm is specified (no analysis can be done)
* the sequence being passed is shorter than the window length

Calling this function in degenerate cases returns the following result:
* in any case, NTicks ticks are returned
* trend is assumed to be zero
* noise is initialized with the sequence.

No analysis is performed in degenerate cases (we immediately return dummy
values, no basis is constructed).

  -- ALGLIB --
     Copyright 30.10.2017 by Bochkanov Sergey
*************************************************************************/
void ssaanalyzesequence(ssamodel &s, const real_1d_array &data, const ae_int_t nticks, real_1d_array &trend, real_1d_array &noise, const xparams _xparams = alglib::xdefault);
void ssaanalyzesequence(ssamodel &s, const real_1d_array &data, real_1d_array &trend, real_1d_array &noise, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  

/*************************************************************************
This function appends a single point to the last data sequence stored in
the SSA model and tries to update the model in an incremental manner (if
possible with the current algorithm).

If you want to add more than one point at once:
* if you want to add M points to the same sequence, perform M-1 calls with
  the UpdateIts parameter set to 0.0, and the last call with non-zero
  UpdateIts.
* if you want to add a new sequence, use ssaappendsequenceandupdate()

Running time of this function does NOT depend on the dataset size, only on
the window width and the number of singular vectors. Depending on the
algorithm being used, the incremental update has complexity:
* for top-K real time   - O(UpdateIts*K*Width^2), with fractional UpdateIts
* for top-K direct      - O(Width^3) for any non-zero UpdateIts
* for precomputed basis - O(1), no update is performed

INPUT PARAMETERS:
    S           -   SSA model created with ssacreate()
    X           -   new point
    UpdateIts   -   >=0, floating point (!) value, desired update
                    frequency:
                    * zero value means that the point is stored, but no
                      update is performed
                    * integer part of the value means that the specified
                      number of iterations is always performed
                    * fractional part of the value means that one
                      iteration is performed with this probability.

                    Recommended value: 0<UpdateIts<=1. Values larger than
                    1 are VERY seldom needed. If your dataset changes
                    slowly, you can set it to 0.1 and skip 90% of updates.

                    In any case, no information is lost even with a zero
                    value of UpdateIts! It will be incorporated into the
                    model, sooner or later.

OUTPUT PARAMETERS:
    S           -   SSA model, updated

NOTE: this function uses an internal RNG to handle fractional values of
      UpdateIts. By default it is initialized with a fixed seed during the
      initial calculation of the basis. Thus subsequent calls to this
      function will result in the same sequence of pseudorandom decisions.

      However, if you have several SSA models which are calculated
      simultaneously, and if you want to reduce computational bottlenecks
      by performing random updates at random moments, then a fixed seed is
      not an option - all updates will fire at the same moments.

      You may change it with the ssasetseed() function.

NOTE: this function throws an exception if called for an empty dataset
      (there is no "last" sequence to modify).

  -- ALGLIB --
     Copyright 30.10.2017 by Bochkanov Sergey
*************************************************************************/
void ssaappendpointandupdate(ssamodel &s, const double x, const double updateits, const xparams _xparams = alglib::xdefault);
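The documented UpdateIts contract (the integer part gives a guaranteed number of update iterations, the fractional part gives the probability of one extra iteration) can be sketched as follows. This is a plain C++ illustration of the contract only; ALGLIB's actual internal RNG and seeding logic are not reproduced here, and `planned_iterations` is a hypothetical name:

```cpp
#include <cmath>
#include <random>

// Number of update iterations implied by a fractional UpdateIts value:
// floor(updateits) iterations are always performed, plus one extra
// iteration with probability equal to the fractional part.
int planned_iterations(double updateits, std::mt19937 &rng)
{
    int    base = (int)std::floor(updateits);
    double frac = updateits - base;
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return base + (u(rng) < frac ? 1 : 0);
}
```

For example, UpdateIts=0.1 performs one iteration in roughly 10% of calls and none otherwise, which is why it "skips 90% of updates" while still eventually incorporating every appended point.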
/*************************************************************************
This function appends a new sequence to the dataset stored in the SSA
model and tries to update the model in an incremental manner (if possible
with the current algorithm).

Notes:
* if you want to add M sequences at once, perform M-1 calls with the
  UpdateIts parameter set to 0.0, and the last call with non-zero
  UpdateIts.
* if you want to add just one point, use ssaappendpointandupdate()

Running time of this function does NOT depend on the dataset size, only on
the sequence length, the window width and the number of singular vectors.
Depending on the algorithm being used, the incremental update has
complexity:
* for top-K real time   - O(UpdateIts*K*Width^2+(NTicks-Width)*Width^2)
* for top-K direct      - O(Width^3+(NTicks-Width)*Width^2)
* for precomputed basis - O(1), no update is performed

INPUT PARAMETERS:
    S           -   SSA model created with ssacreate()
    X           -   new sequence, array[NTicks] or larger
    NTicks      -   >=1, number of ticks in the sequence
    UpdateIts   -   >=0, floating point (!) value, desired update
                    frequency:
                    * zero value means that the point is stored, but no
                      update is performed
                    * integer part of the value means that the specified
                      number of iterations is always performed
                    * fractional part of the value means that one
                      iteration is performed with this probability.

                    Recommended value: 0<UpdateIts<=1. Values larger than
                    1 are VERY seldom needed. If your dataset changes
                    slowly, you can set it to 0.1 and skip 90% of updates.

                    In any case, no information is lost even with a zero
                    value of UpdateIts! It will be incorporated into the
                    model, sooner or later.

OUTPUT PARAMETERS:
    S           -   SSA model, updated

NOTE: this function uses an internal RNG to handle fractional values of
      UpdateIts. By default it is initialized with a fixed seed during the
      initial calculation of the basis. Thus subsequent calls to this
      function will result in the same sequence of pseudorandom decisions.

      However, if you have several SSA models which are calculated
      simultaneously, and if you want to reduce computational bottlenecks
      by performing random updates at random moments, then a fixed seed is
      not an option - all updates will fire at the same moments.

      You may change it with the ssasetseed() function.

  -- ALGLIB --
     Copyright 30.10.2017 by Bochkanov Sergey
*************************************************************************/
void ssaappendsequenceandupdate(ssamodel &s, const real_1d_array &x, const ae_int_t nticks, const double updateits, const xparams _xparams = alglib::xdefault);
void ssaappendsequenceandupdate(ssamodel &s, const real_1d_array &x, const double updateits, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function clears all data stored in the model and invalidates all
basis components found so far.

INPUT PARAMETERS:
    S           -   SSA model created with ssacreate()

OUTPUT PARAMETERS:
    S           -   SSA model, updated

  -- ALGLIB --
     Copyright 30.10.2017 by Bochkanov Sergey
*************************************************************************/
void ssacleardata(ssamodel &s, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function creates an SSA model object. Right after creation the model
is in "dummy" mode - you can add data, but analysis/prediction will return
just zeros (it assumes that the basis is empty).

HOW TO USE SSA MODEL:

1. create model with ssacreate()
2. add data with one/many ssaaddsequence() calls
3. choose SSA algorithm with one of ssasetalgo...() functions:
   * ssasetalgotopkdirect() for direct one-run analysis
   * ssasetalgotopkrealtime() for algorithm optimized for many subsequent
     runs with warm-start capabilities
   * ssasetalgoprecomputed() for user-supplied basis
4. set window width with ssasetwindow()
5. perform one of the analysis-related activities:
   a) call ssagetbasis() to get the basis
   b) call ssaanalyzelast(), ssaanalyzesequence() or
      ssaanalyzelastwindow() to perform analysis (trend/noise separation)
   c) call one of the forecasting functions (ssaforecastlast() or
      ssaforecastsequence()) to perform prediction; alternatively, you can
      extract linear recurrence coefficients with ssagetlrr().

   SSA analysis will be performed during the first call to an
   analysis-related function. The SSA model is smart enough to track all
   changes in the dataset and model settings, to cache the previously
   computed basis and to re-evaluate the basis only when necessary.

Additionally, if your setting involves a constant stream of incoming data,
you can quickly update an already calculated model with one of the
incremental append-and-update functions: ssaappendpointandupdate() or
ssaappendsequenceandupdate().

NOTE: steps (2), (3), (4) can be performed in arbitrary order.

INPUT PARAMETERS:
    none

OUTPUT PARAMETERS:
    S           -   structure which stores model state

  -- ALGLIB --
     Copyright 30.10.2017 by Bochkanov Sergey
*************************************************************************/
void ssacreate(ssamodel &s, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  

/*************************************************************************
This function builds the SSA basis and performs forecasting for a
specified number of ticks, returning the value of the trend.

Forecast is performed as follows:
* SSA trend extraction is applied to the last M sliding windows of the
  internally stored dataset
* for each of the M sliding windows, a separate prediction is built
  (M predictions in total)
* the average value of the M predictions is returned

This function has the following running time:
* O(NBasis*WindowWidth*M) for trend extraction phase (always performed)
* O(WindowWidth*NTicks*M) for forecast phase

NOTE: noise reduction is ALWAYS applied by this algorithm; if you want to
      apply the recurrence relation to raw unprocessed data, use another
      function - ssaforecastsequence() - which allows you to turn the
      noise reduction phase on and off.

NOTE: the combination of several predictions results in lesser sensitivity
      to noise, but it may produce undesirable discontinuities between the
      last point of the trend and the first point of the prediction. The
      reason is that the last point of the trend is usually corrupted by
      noise, but the average value of several predictions is less
      sensitive to noise, thus a discontinuity appears. It is not a bug.

INPUT PARAMETERS:
    S           -   SSA model
    M           -   number of sliding windows to combine, M>=1. If your
                    dataset has less than M sliding windows, this
                    parameter will be silently reduced.
    NTicks      -   number of ticks to forecast, NTicks>=1

OUTPUT PARAMETERS:
    Trend       -   array[NTicks], predicted trend line

CACHING/REUSE OF THE BASIS

Caching/reuse of previous results is performed:
* first call performs full run of SSA; basis is stored in the cache
* subsequent calls reuse previously cached basis
* if you call any function which changes model properties (window length,
  algorithm, dataset), internal basis will be invalidated.
* the only calls which do NOT invalidate basis are listed below:
  a) ssasetwindow() with same window length
  b) ssaappendpointandupdate()
  c) ssaappendsequenceandupdate()
  d) ssasetalgotopk...() with exactly same K
  Calling these functions will result in reuse of previously found basis.

HANDLING OF DEGENERATE CASES

The following degenerate cases may happen:
* dataset is empty (no analysis can be done)
* all sequences are shorter than the window length (no analysis can be done)
* no algorithm is specified (no analysis can be done)
* last sequence is shorter than the WindowWidth (analysis can be done,
  but we can not perform forecasting on the last sequence)
* window length is 1 (impossible to use for forecasting)
* SSA analysis algorithm is configured to extract a basis whose size is
  equal to the window length (impossible to use for forecasting; only a
  basis whose size is less than the window length can be used).

Calling this function in degenerate cases returns the following result:
* NTicks copies of the last value are returned for a non-empty task with
  a large enough dataset, but with an overcomplete basis (window width=1
  or basis size equal to window width)
* a zero trend with length=NTicks is returned for an empty task

No analysis is performed in degenerate cases (we immediately return dummy
values, no basis is ever constructed).

  -- ALGLIB --
     Copyright 30.10.2017 by Bochkanov Sergey
*************************************************************************/
void ssaforecastavglast(ssamodel &s, const ae_int_t m, const ae_int_t nticks, real_1d_array &trend, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function builds the SSA basis and performs forecasting for a
user-specified sequence, returning the value of the trend.

Forecasting is done in two stages:
* first, we extract the trend from the M last sliding windows of the
  sequence. This stage is optional; you can turn it off if you pass data
  which are already processed with SSA. Of course, you can turn it off
  even for raw data, but it is not recommended - noise suppression is very
  important for correct prediction.
* then, we apply the LRR independently for M sliding windows
* the average of the M predictions is returned

This function has the following running time:
* O(NBasis*WindowWidth*M) for trend extraction phase
* O(WindowWidth*NTicks*M) for forecast phase

NOTE: the combination of several predictions results in lesser sensitivity
      to noise, but it may produce undesirable discontinuities between the
      last point of the trend and the first point of the prediction. The
      reason is that the last point of the trend is usually corrupted by
      noise, but the average value of several predictions is less
      sensitive to noise, thus a discontinuity appears. It is not a bug.

INPUT PARAMETERS:
    S               -   SSA model
    Data            -   array[NTicks], data to forecast
    DataLen         -   number of ticks in the data, DataLen>=1
    M               -   number of sliding windows to combine, M>=1. If
                        your dataset has less than M sliding windows, this
                        parameter will be silently reduced.
    ForecastLen     -   number of ticks to predict, ForecastLen>=1
    ApplySmoothing  -   whether to apply smoothing trend extraction or
                        not; if you do not know what to specify, pass True.

OUTPUT PARAMETERS:
    Trend           -   array[ForecastLen], forecasted trend

CACHING/REUSE OF THE BASIS

Caching/reuse of previous results is performed:
* first call performs full run of SSA; basis is stored in the cache
* subsequent calls reuse previously cached basis
* if you call any function which changes model properties (window length,
  algorithm, dataset), internal basis will be invalidated.
* the only calls which do NOT invalidate basis are listed below:
  a) ssasetwindow() with same window length
  b) ssaappendpointandupdate()
  c) ssaappendsequenceandupdate()
  d) ssasetalgotopk...() with exactly same K
  Calling these functions will result in reuse of previously found basis.

HANDLING OF DEGENERATE CASES

The following degenerate cases may happen:
* dataset is empty (no analysis can be done)
* all sequences are shorter than the window length (no analysis can be done)
* no algorithm is specified (no analysis can be done)
* data sequence is shorter than the WindowWidth (analysis can be done,
  but we can not perform forecasting on the last sequence)
* window length is 1 (impossible to use for forecasting)
* SSA analysis algorithm is configured to extract a basis whose size is
  equal to the window length (impossible to use for forecasting; only a
  basis whose size is less than the window length can be used).

Calling this function in degenerate cases returns the following result:
* ForecastLen copies of the last value are returned for a non-empty task
  with a large enough dataset, but with an overcomplete basis (window
  width=1 or basis size equal to window width)
* a zero trend with length=ForecastLen is returned for an empty task

No analysis is performed in degenerate cases (we immediately return dummy
values, no basis is ever constructed).

  -- ALGLIB --
     Copyright 30.10.2017 by Bochkanov Sergey
*************************************************************************/
void ssaforecastavgsequence(ssamodel &s, const real_1d_array &data, const ae_int_t datalen, const ae_int_t m, const ae_int_t forecastlen, const bool applysmoothing, real_1d_array &trend, const xparams _xparams = alglib::xdefault);
void ssaforecastavgsequence(ssamodel &s, const real_1d_array &data, const ae_int_t m, const ae_int_t forecastlen, real_1d_array &trend, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This function builds the SSA basis and performs forecasting for a
specified number of ticks, returning the value of the trend.

Forecast is performed as follows:
* SSA trend extraction is applied to the last WindowWidth elements of the
  internally stored dataset; this step is basically a noise reduction.
* linear recurrence relation is applied to the extracted trend

This function has the following running time:
* O(NBasis*WindowWidth) for trend extraction phase (always performed)
* O(WindowWidth*NTicks) for forecast phase

NOTE: noise reduction is ALWAYS applied by this algorithm; if you want to
      apply the recurrence relation to raw unprocessed data, use another
      function - ssaforecastsequence() - which allows you to turn the
      noise reduction phase on and off.

NOTE: this algorithm performs prediction using only one - the last -
      sliding window. Predictions produced by such an approach are smooth
      continuations of the reconstructed trend line, but they can be
      easily corrupted by noise. If you need noise-resistant prediction,
      use the ssaforecastavglast() function, which averages predictions
      built using several sliding windows.

INPUT PARAMETERS:
    S           -   SSA model
    NTicks      -   number of ticks to forecast, NTicks>=1

OUTPUT PARAMETERS:
    Trend       -   array[NTicks], predicted trend line

CACHING/REUSE OF THE BASIS

Caching/reuse of previous results is performed:
* first call performs full run of SSA; basis is stored in the cache
* subsequent calls reuse previously cached basis
* if you call any function which changes model properties (window length,
  algorithm, dataset), internal basis will be invalidated.
* the only calls which do NOT invalidate basis are listed below:
  a) ssasetwindow() with same window length
  b) ssaappendpointandupdate()
  c) ssaappendsequenceandupdate()
  d) ssasetalgotopk...() with exactly same K
  Calling these functions will result in reuse of previously found basis.

HANDLING OF DEGENERATE CASES

The following degenerate cases may happen:
* dataset is empty (no analysis can be done)
* all sequences are shorter than the window length (no analysis can be done)
* no algorithm is specified (no analysis can be done)
* last sequence is shorter than the WindowWidth (analysis can be done,
  but we can not perform forecasting on the last sequence)
* window length is 1 (impossible to use for forecasting)
* SSA analysis algorithm is configured to extract a basis whose size is
  equal to the window length (impossible to use for forecasting; only a
  basis whose size is less than the window length can be used).

Calling this function in degenerate cases returns the following result:
* NTicks copies of the last value are returned for a non-empty task with
  a large enough dataset, but with an overcomplete basis (window width=1
  or basis size equal to window width)
* a zero trend with length=NTicks is returned for an empty task

No analysis is performed in degenerate cases (we immediately return dummy
values, no basis is ever constructed).

  -- ALGLIB --
     Copyright 30.10.2017 by Bochkanov Sergey
*************************************************************************/
void ssaforecastlast(ssamodel &s, const ae_int_t nticks, real_1d_array &trend, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/*************************************************************************
This function builds the SSA basis and performs forecasting for a
user-specified sequence, returning the value of the trend.

Forecasting is done in two stages:
* first, we extract the trend from the WindowWidth last elements of the
  sequence. This stage is optional; you can turn it off if you pass data
  which are already processed with SSA. Of course, you can turn it off
  even for raw data, but it is not recommended - noise suppression is very
  important for correct prediction.
* then, we apply the LRR for the last WindowWidth-1 elements of the
  extracted trend.

This function has the following running time:
* O(NBasis*WindowWidth) for trend extraction phase
* O(WindowWidth*NTicks) for forecast phase

NOTE: this algorithm performs prediction using only one - the last -
      sliding window. Predictions produced by such an approach are smooth
      continuations of the reconstructed trend line, but they can be
      easily corrupted by noise. If you need noise-resistant prediction,
      use the ssaforecastavgsequence() function, which averages
      predictions built using several sliding windows.

INPUT PARAMETERS:
    S               -   SSA model
    Data            -   array[NTicks], data to forecast
    DataLen         -   number of ticks in the data, DataLen>=1
    ForecastLen     -   number of ticks to predict, ForecastLen>=1
    ApplySmoothing  -   whether to apply smoothing trend extraction or
                        not; if you do not know what to specify, pass True.

OUTPUT PARAMETERS:
    Trend           -   array[ForecastLen], forecasted trend

CACHING/REUSE OF THE BASIS

Caching/reuse of previous results is performed:
* first call performs full run of SSA; basis is stored in the cache
* subsequent calls reuse previously cached basis
* if you call any function which changes model properties (window length,
  algorithm, dataset), internal basis will be invalidated.
* the only calls which do NOT invalidate basis are listed below:
  a) ssasetwindow() with same window length
  b) ssaappendpointandupdate()
  c) ssaappendsequenceandupdate()
  d) ssasetalgotopk...() with exactly same K
  Calling these functions will result in reuse of previously found basis.

HANDLING OF DEGENERATE CASES

The following degenerate cases may happen:
* dataset is empty (no analysis can be done)
* all sequences are shorter than the window length (no analysis can be done)
* no algorithm is specified (no analysis can be done)
* data sequence is shorter than the WindowWidth (analysis can be done,
  but we can not perform forecasting on the last sequence)
* window length is 1 (impossible to use for forecasting)
* SSA analysis algorithm is configured to extract a basis whose size is
  equal to the window length (impossible to use for forecasting; only a
  basis whose size is less than the window length can be used).

Calling this function in degenerate cases returns the following result:
* ForecastLen copies of the last value are returned for a non-empty task
  with a large enough dataset, but with an overcomplete basis (window
  width=1 or basis size equal to window width)
* a zero trend with length=ForecastLen is returned for an empty task

No analysis is performed in degenerate cases (we immediately return dummy
values, no basis is ever constructed).

  -- ALGLIB --
     Copyright 30.10.2017 by Bochkanov Sergey
*************************************************************************/
void ssaforecastsequence(ssamodel &s, const real_1d_array &data, const ae_int_t datalen, const ae_int_t forecastlen, const bool applysmoothing, real_1d_array &trend, const xparams _xparams = alglib::xdefault);
void ssaforecastsequence(ssamodel &s, const real_1d_array &data, const ae_int_t forecastlen, real_1d_array &trend, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/*************************************************************************
This function executes SSA on the internally stored dataset and returns
the basis found by the current method.

INPUT PARAMETERS:
    S           -   SSA model

OUTPUT PARAMETERS:
    A           -   array[WindowWidth,NBasis], basis; vectors are stored
                    in matrix columns, in order of decreasing variance
    SV          -   array[NBasis]:
                    * zeros - for model initialized with
                      SSASetAlgoPrecomputed()
                    * singular values - for other algorithms
    WindowWidth -   current window
    NBasis      -   basis size

CACHING/REUSE OF THE BASIS

Caching/reuse of previous results is performed:
* first call performs full run of SSA; basis is stored in the cache
* subsequent calls reuse previously cached basis
* if you call any function which changes model properties (window length,
  algorithm, dataset), internal basis will be invalidated.
* the only calls which do NOT invalidate basis are listed below:
  a) ssasetwindow() with same window length
  b) ssaappendpointandupdate()
  c) ssaappendsequenceandupdate()
  d) ssasetalgotopk...() with exactly same K
  Calling these functions will result in reuse of previously found basis.

HANDLING OF DEGENERATE CASES

Calling this function in degenerate cases (no data, or all data are
shorter than the window size, or no algorithm is specified) returns a
basis with just one zero vector.

  -- ALGLIB --
     Copyright 30.10.2017 by Bochkanov Sergey
*************************************************************************/
void ssagetbasis(ssamodel &s, real_2d_array &a, real_1d_array &sv, ae_int_t &windowwidth, ae_int_t &nbasis, const xparams _xparams = alglib::xdefault);
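A basis such as the one returned by ssagetbasis() separates a window into trend and noise by orthogonal projection: the trend part of a window x is B*(B^T*x), where B holds the orthonormal basis vectors. A minimal self-contained sketch of that projection (plain C++, illustration of the math rather than ALGLIB's reconstruction code; the function name and the vector-of-vectors layout are hypothetical):

```cpp
#include <vector>
#include <cstddef>

// Project a single window x onto the span of orthonormal basis vectors.
// basis[k] is the k-th basis vector, each of length WindowWidth.
// Returns the "trend" part of the window; x minus the result is "noise".
std::vector<double> project_onto_basis(const std::vector<double> &x,
                                       const std::vector<std::vector<double>> &basis)
{
    std::vector<double> trend(x.size(), 0.0);
    for (const std::vector<double> &b : basis)
    {
        double c = 0.0;                     // coefficient <b,x>
        for (std::size_t i = 0; i < x.size(); i++)
            c += b[i] * x[i];
        for (std::size_t i = 0; i < x.size(); i++)
            trend[i] += c * b[i];           // accumulate c*b
    }
    return trend;
}
```

Note that this only works as intended when the basis is orthonormal, which is why ssasetalgoprecomputed() (below) requires orthonormalized input.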
/*************************************************************************
This function returns the linear recurrence relation (LRR) coefficients
found by the current SSA algorithm.

INPUT PARAMETERS:
    S           -   SSA model

OUTPUT PARAMETERS:
    A           -   array[WindowWidth-1]. Coefficients of the linear
                    recurrence of the form:
                    X[W-1] = X[W-2]*A[W-2] + X[W-3]*A[W-3] + ... + X[0]*A[0].
                    Empty array for WindowWidth=1.
    WindowWidth -   current window width

CACHING/REUSE OF THE BASIS

Caching/reuse of previous results is performed:
* first call performs full run of SSA; basis is stored in the cache
* subsequent calls reuse previously cached basis
* if you call any function which changes model properties (window length,
  algorithm, dataset), internal basis will be invalidated.
* the only calls which do NOT invalidate basis are listed below:
  a) ssasetwindow() with same window length
  b) ssaappendpointandupdate()
  c) ssaappendsequenceandupdate()
  d) ssasetalgotopk...() with exactly same K
  Calling these functions will result in reuse of previously found basis.

HANDLING OF DEGENERATE CASES

Calling this function in degenerate cases (no data, or all data are
shorter than the window size, or no algorithm is specified) returns zeros.

  -- ALGLIB --
     Copyright 30.10.2017 by Bochkanov Sergey
*************************************************************************/
void ssagetlrr(ssamodel &s, real_1d_array &a, ae_int_t &windowwidth, const xparams _xparams = alglib::xdefault);
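Applying LRR coefficients of the form above to produce a forecast can be sketched as follows. This is a plain C++ illustration of the recurrence (matching the formula X[W-1] = X[W-2]*A[W-2] + ... + X[0]*A[0]), not ALGLIB's forecasting code; `lrr_forecast` is a hypothetical name:

```cpp
#include <vector>
#include <cstddef>

// Forecast nticks values by repeatedly applying LRR coefficients a
// (size WindowWidth-1) to the trailing WindowWidth-1 values of x:
// the newest history value is multiplied by a.back(), the oldest of the
// trailing window by a[0], as in the formula above.
std::vector<double> lrr_forecast(std::vector<double> x,         // history, size >= a.size()
                                 const std::vector<double> &a,  // LRR coefficients
                                 std::size_t nticks)
{
    std::vector<double> trend;
    for (std::size_t t = 0; t < nticks; t++)
    {
        double next = 0.0;
        std::size_t base = x.size() - a.size();
        for (std::size_t j = 0; j < a.size(); j++)
            next += a[j] * x[base + j];
        x.push_back(next);          // predicted value extends the history
        trend.push_back(next);
    }
    return trend;
}
```

For instance, the coefficients A = {-1, 2} encode the recurrence X[t] = 2*X[t-1] - X[t-2], which exactly continues any linear trend: starting from history {1, 2} it produces 3, 4, 5, ...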
/*************************************************************************
This function sets the SSA algorithm to the "precomputed vectors"
algorithm.

This algorithm uses a precomputed set of orthonormal (orthogonal AND
normalized) basis vectors supplied by the user. Thus, the basis
calculation phase is not performed - we already have our basis - and only
the analysis/forecasting phase requires actual calculations.

This algorithm may handle "append" requests which add just one/few ticks
to the end of the last sequence in O(1) time.

NOTE: this algorithm accepts both basis and window width, because these
      two parameters are naturally aligned. Calling this function sets
      the window width; if you call ssasetwindow() with another window
      width, then during the analysis stage the algorithm will detect the
      conflict and reset to a zero basis.

INPUT PARAMETERS:
    S           -   SSA model
    A           -   array[WindowWidth,NBasis], orthonormalized basis;
                    this function does NOT control orthogonality and does
                    NOT perform any kind of renormalization. It is your
                    responsibility to provide it with a correct basis.
    WindowWidth -   window width, >=1
    NBasis      -   number of basis vectors, 1<=NBasis<=WindowWidth

OUTPUT PARAMETERS:
    S           -   updated model

NOTE: calling this function invalidates the basis in all cases.

  -- ALGLIB --
     Copyright 30.10.2017 by Bochkanov Sergey
*************************************************************************/
void ssasetalgoprecomputed(ssamodel &s, const real_2d_array &a, const ae_int_t windowwidth, const ae_int_t nbasis, const xparams _xparams = alglib::xdefault); void ssasetalgoprecomputed(ssamodel &s, const real_2d_array &a, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets SSA algorithm to "direct top-K" algorithm. "Direct top-K" algorithm performs full SVD of the N*WINDOW trajectory matrix (hence its name - direct solver is used), then extracts top K components. Overall running time is O(N*WINDOW^2), where N is a number of ticks in the dataset, WINDOW is window width. This algorithm may handle "append" requests which add just one/few ticks to the end of the last sequence in O(WINDOW^3) time, which is ~N/WINDOW times faster than re-computing everything from scratch. INPUT PARAMETERS: S - SSA model TopK - number of components to analyze; TopK>=1. OUTPUT PARAMETERS: S - updated model NOTE: TopK>WindowWidth is silently decreased to WindowWidth during analysis phase NOTE: calling this function invalidates basis, except for the situation when this algorithm was already set with same parameters. -- ALGLIB -- Copyright 30.10.2017 by Bochkanov Sergey *************************************************************************/
void ssasetalgotopkdirect(ssamodel &s, const ae_int_t topk, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  

/*************************************************************************
This function sets SSA algorithm to the "top-K real time" algorithm.

This algorithm extracts the K components with the largest singular values.
It is a real-time version of the top-K algorithm which is optimized for
incremental processing and fast start-up. Internally it uses a subspace
eigensolver for truncated SVD. It results in the ability to perform quick
updates of the basis when only a few points/sequences are added to the
dataset.

Performance profile of the algorithm is given below:
* O(K*WindowWidth^2) running time for incremental update of the dataset
  with one of the "append-and-update" functions (ssaappendpointandupdate()
  or ssaappendsequenceandupdate()).
* O(N*WindowWidth^2) running time for initial basis evaluation (N=size of
  dataset)
* ability to split costly initialization across several incremental
  updates of the basis (so-called "Power-Up" functionality, activated by
  the ssasetpoweruplength() function)

INPUT PARAMETERS:
    S       -   SSA model
    TopK    -   number of components to analyze; TopK>=1.

OUTPUT PARAMETERS:
    S       -   updated model

NOTE: this algorithm is optimized for large-scale tasks with large
      datasets. On toy problems with just 5-10 points it can return a
      basis which is slightly different from that returned by the direct
      algorithm (ssasetalgotopkdirect() function). However, the difference
      becomes negligible as the dataset grows.

NOTE: TopK>WindowWidth is silently decreased to WindowWidth during the
      analysis phase

NOTE: calling this function invalidates the basis, except for the
      situation when this algorithm was already set with the same
      parameters.

  -- ALGLIB --
     Copyright 30.10.2017 by Bochkanov Sergey
*************************************************************************/
void ssasetalgotopkrealtime(ssamodel &s, const ae_int_t topk, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This function sets the memory limit of SSA analysis.

Straightforward SSA with sequence length T and window width W needs O(T*W)
memory. It is possible to reduce memory consumption by splitting the task
into smaller chunks.

This function allows you to specify an approximate memory limit (measured
in double precision numbers used for buffers). Actual memory consumption
will be comparable to the number specified by you.

Default memory limit is 50,000,000 doubles (400 MB) in the current version.

INPUT PARAMETERS:
    S       -   SSA model
    MemLimit-   memory limit, >=0. Zero value means no limit.

  -- ALGLIB --
     Copyright 20.12.2017 by Bochkanov Sergey
*************************************************************************/
void ssasetmemorylimit(ssamodel &s, const ae_int_t memlimit, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets length of power-up cycle for real-time algorithm. By default, this algorithm performs costly O(N*WindowWidth^2) init phase followed by full run of truncated EVD. However, if you are ready to live with a bit lower-quality basis during first few iterations, you can split this O(N*WindowWidth^2) initialization between several subsequent append-and-update rounds. It results in better latency of the algorithm. This function invalidates basis/solver, next analysis call will result in full recalculation of everything. INPUT PARAMETERS: S - SSA model PWLen - length of the power-up stage: * 0 means that no power-up is requested * 1 is the same as 0 * >1 means that delayed power-up is performed -- ALGLIB -- Copyright 03.11.2017 by Bochkanov Sergey *************************************************************************/
void ssasetpoweruplength(ssamodel &s, const ae_int_t pwlen, const xparams _xparams = alglib::xdefault);

Examples:   [1]  

/*************************************************************************
This function sets the seed which is used to initialize the internal RNG
when we make pseudorandom decisions on model updates.

By default, a deterministic seed is used, which results in the same
sequence of pseudorandom decisions every time you run the SSA model. If
you specify a non-deterministic seed value, then the SSA model may return
slightly different results after each run.

This function can be useful when you have several SSA models updated with
ssaappendpointandupdate() called with 0<UpdateIts<1 (fractional value) and
due to performance limitations want them to perform updates at different
moments.

INPUT PARAMETERS:
    S       -   SSA model
    Seed    -   seed:
                * positive values = use deterministic seed for each run of
                  algorithms which depend on random initialization
                * zero or negative values = use non-deterministic seed

  -- ALGLIB --
     Copyright 03.11.2017 by Bochkanov Sergey
*************************************************************************/
void ssasetseed(ssamodel &s, const ae_int_t seed, const xparams _xparams = alglib::xdefault);
/************************************************************************* This function sets window width for SSA model. You should call it before analysis phase. Default window width is 1 (not for real use). Special notes: * this function call can be performed at any moment before first call to analysis-related functions * changing window width invalidates internally stored basis; if you change window width AFTER you call analysis-related function, next analysis phase will require re-calculation of the basis according to current algorithm. * calling this function with exactly same window width as current one has no effect * if you specify window width larger than any data sequence stored in the model, analysis will return zero basis. INPUT PARAMETERS: S - SSA model created with ssacreate() WindowWidth - >=1, new window width OUTPUT PARAMETERS: S - SSA model, updated -- ALGLIB -- Copyright 30.10.2017 by Bochkanov Sergey *************************************************************************/
void ssasetwindow(ssamodel &s, const ae_int_t windowwidth, const xparams _xparams = alglib::xdefault);

Examples:   [1]  [2]  [3]  

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // Here we demonstrate SSA trend/noise separation for some toy problem:
        // a small monotonically growing series X is analyzed with a 3-tick window
        // and the "top-K" version of SSA, which selects the K largest singular
        // vectors for analysis, with K=1.
        //
        ssamodel s;
        real_1d_array x = "[0,0.5,1,1,1.5,2]";

        //
        // First, we create SSA model, set its properties and add dataset.
        //
        // We use a window with width=3 and configure the model to use the direct
        // SSA algorithm - one which runs exact O(N*W^2) analysis - to extract
        // the top singular vector. Well, it is a toy problem :)
        //
        // NOTE: SSA model may store and analyze more than one sequence
        //       (say, different sequences may correspond to data collected
        //       from different devices)
        //
        ssacreate(s);
        ssasetwindow(s, 3);
        ssaaddsequence(s, x);
        ssasetalgotopkdirect(s, 1);

        //
        // Now we begin analysis. Internally SSA model stores everything it needs:
        // data, settings, solvers and so on. Right after first call to analysis-
        // related function it will analyze dataset, build basis and perform analysis.
        //
        // Subsequent calls to analysis functions will reuse previously computed
        // basis, unless you invalidate it by changing model settings (or dataset).
        //
        real_1d_array trend;
        real_1d_array noise;
        ssaanalyzesequence(s, x, trend, noise);
        printf("%s\n", trend.tostring(2).c_str()); // EXPECTED: [0.3815,0.5582,0.7810,1.0794,1.5041,2.0105]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // Here we demonstrate SSA forecasting on some toy problem with clearly
        // visible linear trend and small amount of noise.
        //
        ssamodel s;
        real_1d_array x = "[0.05,0.96,2.04,3.11,3.97,5.03,5.98,7.02,8.02]";

        //
        // First, we create SSA model, set its properties and add dataset.
        //
        // We use a window with width=3 and configure the model to use the direct
        // SSA algorithm - one which runs exact O(N*W^2) analysis - to extract
        // the two top singular vectors. Well, it is a toy problem :)
        //
        // NOTE: SSA model may store and analyze more than one sequence
        //       (say, different sequences may correspond to data collected
        //       from different devices)
        //
        ssacreate(s);
        ssasetwindow(s, 3);
        ssaaddsequence(s, x);
        ssasetalgotopkdirect(s, 2);

        //
        // Now we begin analysis. Internally SSA model stores everything it needs:
        // data, settings, solvers and so on. Right after first call to analysis-
        // related function it will analyze dataset, build basis and perform analysis.
        //
        // Subsequent calls to analysis functions will reuse previously computed
        // basis, unless you invalidate it by changing model settings (or dataset).
        //
        // In this example we show how to use ssaforecastlast() function, which
        // predicts changes in the last sequence of the dataset. If you want to
        // perform prediction for some other sequence, use ssaforecastsequence().
        //
        real_1d_array trend;
        ssaforecastlast(s, 3, trend);

        //
        // Well, we expected it to be [9,10,11]. There exists some difference,
        // which can be explained by the artificial noise in the dataset.
        //
        printf("%s\n", trend.tostring(2).c_str()); // EXPECTED: [9.0005,9.9322,10.8051]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

#include "stdafx.h"
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "dataanalysis.h"

using namespace alglib;

int main(int argc, char **argv)
{
    try
    {
        //
        // Suppose that you have a constant stream of incoming data, and you want
        // to regularly perform singular spectral analysis of this stream.
        //
        // One full run of direct algorithm costs O(N*Width^2) operations, so
        // the more points you have, the more it costs to rebuild basis from
        // scratch.
        // 
        // Luckily we have an incremental SSA algorithm which can perform quick
        // updates of an already computed basis in O(K*Width^2) ops, where K
        // is the number of singular vectors extracted. Usually it is orders of
        // magnitude faster than a full update of the basis.
        //
        // In this example we start from some initial dataset x0. Then we
        // start appending elements one by one to the end of the last sequence.
        //
        // NOTE: the direct algorithm also supports incremental updates, but
        //       with O(Width^3) cost. Typically K<<Width, so the specialized
        //       incremental algorithm is still faster.
        //
        ssamodel s1;
        real_2d_array a1;
        real_1d_array sv1;
        ae_int_t w;
        ae_int_t k;
        real_1d_array x0 = "[0.009,0.976,1.999,2.984,3.977,5.002]";
        ssacreate(s1);
        ssasetwindow(s1, 3);
        ssaaddsequence(s1, x0);

        // set algorithm to the real-time version of top-K, K=2
        ssasetalgotopkrealtime(s1, 2);

        // One more interesting feature of the incremental algorithm is the "power-up" cycle.
        // Even with the incremental algorithm, the initial basis calculation costs O(N*Width^2) ops.
        // If such a startup cost is too high for your real-time app, you may divide the
        // initial basis calculation across several model updates. It results in better
        // latency at the price of somewhat lower precision during the first few updates.
        ssasetpoweruplength(s1, 3);

        // Now, having prepared everything, we start adding incoming points one by one;
        // in real life, of course, we would perform some work between subsequent updates
        // (analyze something, predict, and so on).
        //
        // After each append we perform one iteration of the real-time solver. Usually
        // one iteration is more than enough to update the basis. If you have REALLY tight
        // performance constraints, you may specify a fractional number of iterations,
        // which means that an iteration is performed with the specified probability.
        double updateits = 1.0;
        ssaappendpointandupdate(s1, 5.951, updateits);
        ssagetbasis(s1, a1, sv1, w, k);

        ssaappendpointandupdate(s1, 7.074, updateits);
        ssagetbasis(s1, a1, sv1, w, k);

        ssaappendpointandupdate(s1, 7.925, updateits);
        ssagetbasis(s1, a1, sv1, w, k);

        ssaappendpointandupdate(s1, 8.992, updateits);
        ssagetbasis(s1, a1, sv1, w, k);

        ssaappendpointandupdate(s1, 9.942, updateits);
        ssagetbasis(s1, a1, sv1, w, k);

        ssaappendpointandupdate(s1, 11.051, updateits);
        ssagetbasis(s1, a1, sv1, w, k);

        ssaappendpointandupdate(s1, 11.965, updateits);
        ssagetbasis(s1, a1, sv1, w, k);

        ssaappendpointandupdate(s1, 13.047, updateits);
        ssagetbasis(s1, a1, sv1, w, k);

        ssaappendpointandupdate(s1, 13.970, updateits);
        ssagetbasis(s1, a1, sv1, w, k);

        // Ok, we have our basis in a1[] and singular values in sv1[].
        // But is it good enough? Let's print it.
        printf("%s\n", a1.tostring(3).c_str()); // EXPECTED: [[0.510607,0.753611],[0.575201,0.058445],[0.639081,-0.654717]]

        // Ok, two vectors with 3 components each.
        // But how can we tell whether it is really a good basis?
        // Let's compare it with direct SSA algorithm on the entire sequence.
        ssamodel s2;
        real_2d_array a2;
        real_1d_array sv2;
        real_1d_array x2 = "[0.009,0.976,1.999,2.984,3.977,5.002,5.951,7.074,7.925,8.992,9.942,11.051,11.965,13.047,13.970]";
        ssacreate(s2);
        ssasetwindow(s2, 3);
        ssaaddsequence(s2, x2);
        ssasetalgotopkdirect(s2, 2);
        ssagetbasis(s2, a2, sv2, w, k);

        // It is exactly the same as the one calculated with the incremental approach!
        printf("%s\n", a2.tostring(3).c_str()); // EXPECTED: [[0.510607,0.753611],[0.575201,0.058445],[0.639081,-0.654717]]
    }
    catch(alglib::ap_error alglib_exception)
    {
        printf("ALGLIB exception with message '%s'\n", alglib_exception.msg.c_str());
        return 1;
    }
    return 0;
}

onesamplesigntest
/*************************************************************************
Sign test

This test checks three hypotheses about the median of the given sample.
The following tests are performed:
    * two-tailed test (null hypothesis - the median is equal to the given
      value)
    * left-tailed test (null hypothesis - the median is greater than or
      equal to the given value)
    * right-tailed test (null hypothesis - the median is less than or
      equal to the given value)

Requirements:
    * the scale of measurement should be ordinal, interval or ratio (i.e.
      the test cannot be applied to nominal variables).

The test is non-parametric and doesn't require distribution X to be normal.

Input parameters:
    X       -   sample. Array whose index goes from 0 to N-1.
    N       -   size of the sample.
    Median  -   assumed median value.

Output parameters:
    BothTails   -   p-value for two-tailed test.
                    If BothTails is less than the given significance level
                    the null hypothesis is rejected.
    LeftTail    -   p-value for left-tailed test.
                    If LeftTail is less than the given significance level,
                    the null hypothesis is rejected.
    RightTail   -   p-value for right-tailed test.
                    If RightTail is less than the given significance level
                    the null hypothesis is rejected.

While calculating p-values a high-precision binomial distribution
approximation is used, so significance levels have about 15 exact digits.

  -- ALGLIB --
     Copyright 08.09.2006 by Bochkanov Sergey
*************************************************************************/
void onesamplesigntest(const real_1d_array &x, const ae_int_t n, const double median, double &bothtails, double &lefttail, double &righttail, const xparams _xparams = alglib::xdefault);
invstudenttdistribution
studenttdistribution
/*************************************************************************
Functional inverse of Student's t distribution

Given probability p, finds the argument t such that stdtr(k,t) is equal
to p.

ACCURACY:

Tested at random 1 <= k <= 100. The "domain" refers to p:

                     Relative error:
arithmetic   domain       # trials      peak         rms
   IEEE     .001,.999      25000      5.7e-15     8.0e-16
   IEEE     10^-6,.001     25000      2.0e-12     2.9e-14

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier
*************************************************************************/
double invstudenttdistribution(const ae_int_t k, const double p, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Student's t distribution

Computes the integral from minus infinity to t of the Student t
distribution with integer k > 0 degrees of freedom:

   stdtr(k,t) = Gamma((k+1)/2) / ( sqrt(k*pi) * Gamma(k/2) ) *
                integral from -inf to t of (1 + x^2/k)^(-(k+1)/2) dx

Relation to incomplete beta integral:

   1 - stdtr(k,t) = 0.5 * incbet( k/2, 1/2, z )

where z = k/(k + t**2). For t < -2, this is the method of computation.
For higher t, a direct method is derived from integration by parts.
Since the function is symmetric about t=0, the area under the right tail
of the density is found by calling the function with -t instead of t.

ACCURACY:

Tested at random 1 <= k <= 25. The "domain" refers to t.

                     Relative error:
arithmetic   domain       # trials      peak         rms
   IEEE     -100,-2        50000      5.9e-15     1.4e-15
   IEEE     -2,100         500000     2.7e-15     4.9e-17

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1987, 1995, 2000 by Stephen L. Moshier
*************************************************************************/
double studenttdistribution(const ae_int_t k, const double t, const xparams _xparams = alglib::xdefault);
studentttest1
studentttest2
unequalvariancettest
/************************************************************************* One-sample t-test This test checks three hypotheses about the mean of the given sample. The following tests are performed: * two-tailed test (null hypothesis - the mean is equal to the given value) * left-tailed test (null hypothesis - the mean is greater than or equal to the given value) * right-tailed test (null hypothesis - the mean is less than or equal to the given value). The test is based on the assumption that a given sample has a normal distribution and an unknown dispersion. If the distribution sharply differs from normal, the test will work incorrectly. INPUT PARAMETERS: X - sample. Array whose index goes from 0 to N-1. N - size of sample, N>=0 Mean - assumed value of the mean. OUTPUT PARAMETERS: BothTails - p-value for two-tailed test. If BothTails is less than the given significance level the null hypothesis is rejected. LeftTail - p-value for left-tailed test. If LeftTail is less than the given significance level, the null hypothesis is rejected. RightTail - p-value for right-tailed test. If RightTail is less than the given significance level the null hypothesis is rejected. NOTE: this function correctly handles degenerate cases: * when N=0, all p-values are set to 1.0 * when variance of X[] is exactly zero, p-values are set to 1.0 or 0.0, depending on difference between sample mean and value of mean being tested. -- ALGLIB -- Copyright 08.09.2006 by Bochkanov Sergey *************************************************************************/
void studentttest1(const real_1d_array &x, const ae_int_t n, const double mean, double &bothtails, double &lefttail, double &righttail, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Two-sample pooled test

This test checks three hypotheses about the mean of the given samples. The
following tests are performed:
    * two-tailed test (null hypothesis - the means are equal)
    * left-tailed test (null hypothesis - the mean of the first sample is
      greater than or equal to the mean of the second sample)
    * right-tailed test (null hypothesis - the mean of the first sample is
      less than or equal to the mean of the second sample).

Test is based on the following assumptions:
    * given samples have normal distributions
    * dispersions are equal
    * samples are independent.

Input parameters:
    X       -   sample 1. Array whose index goes from 0 to N-1.
    N       -   size of sample.
    Y       -   sample 2. Array whose index goes from 0 to M-1.
    M       -   size of sample.

Output parameters:
    BothTails   -   p-value for two-tailed test.
                    If BothTails is less than the given significance level
                    the null hypothesis is rejected.
    LeftTail    -   p-value for left-tailed test.
                    If LeftTail is less than the given significance level,
                    the null hypothesis is rejected.
    RightTail   -   p-value for right-tailed test.
                    If RightTail is less than the given significance level
                    the null hypothesis is rejected.

NOTE: this function correctly handles degenerate cases:
    * when N=0 or M=0, all p-values are set to 1.0
    * when both samples have exactly zero variance, p-values are set to
      1.0 or 0.0, depending on difference between means.

  -- ALGLIB --
     Copyright 18.09.2006 by Bochkanov Sergey
*************************************************************************/
void studentttest2(const real_1d_array &x, const ae_int_t n, const real_1d_array &y, const ae_int_t m, double &bothtails, double &lefttail, double &righttail, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Two-sample unpooled test

This test checks three hypotheses about the mean of the given samples. The
following tests are performed:
    * two-tailed test (null hypothesis - the means are equal)
    * left-tailed test (null hypothesis - the mean of the first sample is
      greater than or equal to the mean of the second sample)
    * right-tailed test (null hypothesis - the mean of the first sample is
      less than or equal to the mean of the second sample).

Test is based on the following assumptions:
    * given samples have normal distributions
    * samples are independent.
Equality of variances is NOT required.

Input parameters:
    X       -   sample 1. Array whose index goes from 0 to N-1.
    N       -   size of the sample.
    Y       -   sample 2. Array whose index goes from 0 to M-1.
    M       -   size of the sample.

Output parameters:
    BothTails   -   p-value for two-tailed test.
                    If BothTails is less than the given significance level
                    the null hypothesis is rejected.
    LeftTail    -   p-value for left-tailed test.
                    If LeftTail is less than the given significance level,
                    the null hypothesis is rejected.
    RightTail   -   p-value for right-tailed test.
                    If RightTail is less than the given significance level
                    the null hypothesis is rejected.

NOTE: this function correctly handles degenerate cases:
    * when N=0 or M=0, all p-values are set to 1.0
    * when both samples have zero variance, p-values are set to 1.0 or
      0.0, depending on difference between means.
    * when only one sample has zero variance, test reduces to 1-sample
      version.

  -- ALGLIB --
     Copyright 18.09.2006 by Bochkanov Sergey
*************************************************************************/
void unequalvariancettest(const real_1d_array &x, const ae_int_t n, const real_1d_array &y, const ae_int_t m, double &bothtails, double &lefttail, double &righttail, const xparams _xparams = alglib::xdefault);
rmatrixsvd
/*************************************************************************
Singular value decomposition of a rectangular matrix.

! COMMERCIAL EDITION OF ALGLIB:
!
! Commercial Edition of ALGLIB includes following important improvements
! of this function:
! * high-performance native backend with same C# interface (C# version)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend you to read 'Working with commercial version' section of
! ALGLIB Reference Manual in order to find out how to use performance-
! related features provided by commercial edition of ALGLIB.

The algorithm calculates the singular value decomposition of a matrix of
size MxN: A = U * S * V^T

The algorithm finds the singular values and, optionally, matrices U and
V^T. The algorithm can find both first min(M,N) columns of matrix U and
rows of matrix V^T (singular vectors), and matrices U and V^T wholly (of
sizes MxM and NxN respectively).

Take into account that the subroutine does not return matrix V but V^T.

Input parameters:
    A           -   matrix to be decomposed.
                    Array whose indexes range within [0..M-1, 0..N-1].
    M           -   number of rows in matrix A.
    N           -   number of columns in matrix A.
    UNeeded     -   0, 1 or 2. See the description of the parameter U.
    VTNeeded    -   0, 1 or 2. See the description of the parameter VT.
    AdditionalMemory -
                    If the parameter:
                    * equals 0, the algorithm doesn't use additional
                      memory (lower requirements, lower performance).
                    * equals 1, the algorithm uses additional memory of
                      size min(M,N)*min(M,N) of real numbers.
                      It often speeds up the algorithm.
                    * equals 2, the algorithm uses additional memory of
                      size M*min(M,N) of real numbers.
                      It allows one to get maximum performance.
                    The recommended value of the parameter is 2.

Output parameters:
    W           -   contains singular values in descending order.
    U           -   if UNeeded=0, U isn't changed, the left singular
                    vectors are not calculated.
                    if UNeeded=1, U contains left singular vectors (first
                    min(M,N) columns of matrix U). Array whose indexes
                    range within [0..M-1, 0..Min(M,N)-1].
                    if UNeeded=2, U contains matrix U wholly. Array whose
                    indexes range within [0..M-1, 0..M-1].
    VT          -   if VTNeeded=0, VT isn't changed, the right singular
                    vectors are not calculated.
                    if VTNeeded=1, VT contains right singular vectors
                    (first min(M,N) rows of matrix V^T). Array whose
                    indexes range within [0..min(M,N)-1, 0..N-1].
                    if VTNeeded=2, VT contains matrix V^T wholly. Array
                    whose indexes range within [0..N-1, 0..N-1].

  -- ALGLIB --
     Copyright 2005 by Bochkanov Sergey
*************************************************************************/
bool rmatrixsvd(const real_2d_array &a, const ae_int_t m, const ae_int_t n, const ae_int_t uneeded, const ae_int_t vtneeded, const ae_int_t additionalmemory, real_1d_array &w, real_2d_array &u, real_2d_array &vt, const xparams _xparams = alglib::xdefault);
sparsedecompositionanalysis
cmatrixlu
hpdmatrixcholesky
rmatrixlu
sparsecholesky
sparsecholeskyanalyze
sparsecholeskyfactorize
sparsecholeskyp
sparsecholeskyreload
sparsecholeskyskyline
sparselu
spdmatrixcholesky
spdmatrixcholeskyupdateadd1
spdmatrixcholeskyupdateadd1buf
spdmatrixcholeskyupdatefix
spdmatrixcholeskyupdatefixbuf
/************************************************************************* An analysis of the sparse matrix decomposition, performed prior to actual numerical factorization. You should not directly access fields of this object - use appropriate ALGLIB functions to work with this object. *************************************************************************/
class sparsedecompositionanalysis { public: sparsedecompositionanalysis(); sparsedecompositionanalysis(const sparsedecompositionanalysis &rhs); sparsedecompositionanalysis& operator=(const sparsedecompositionanalysis &rhs); virtual ~sparsedecompositionanalysis(); };
/************************************************************************* LU decomposition of a general complex matrix with row pivoting A is represented as A = P*L*U, where: * L is lower unitriangular matrix * U is upper triangular matrix * P = P0*P1*...*PK, K=min(M,N)-1, Pi - permutation matrix for I and Pivots[I] INPUT PARAMETERS: A - array[0..M-1, 0..N-1]. M - number of rows in matrix A. N - number of columns in matrix A. OUTPUT PARAMETERS: A - matrices L and U in compact form: * L is stored under main diagonal * U is stored on and above main diagonal Pivots - permutation matrix in compact form. array[0..Min(M-1,N-1)]. ! FREE EDITION OF ALGLIB: ! ! Free Edition of ALGLIB supports following important features for this ! function: ! * C++ version: x64 SIMD support using C++ intrinsics ! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics ! ! We recommend you to read 'Compiling ALGLIB' section of the ALGLIB ! Reference Manual in order to find out how to activate SIMD support ! in ALGLIB. ! COMMERCIAL EDITION OF ALGLIB: ! ! Commercial Edition of ALGLIB includes following important improvements ! of this function: ! * high-performance native backend with same C# interface (C# version) ! * multithreading support (C++ and C# versions) ! * hardware vendor (Intel) implementations of linear algebra primitives ! (C++ and C# versions, x86/x64 platform) ! ! We recommend you to read 'Working with commercial version' section of ! ALGLIB Reference Manual in order to find out how to use performance- ! related features provided by commercial edition of ALGLIB. -- ALGLIB routine -- 10.01.2010 Bochkanov Sergey *************************************************************************/
void cmatrixlu(complex_2d_array &a, const ae_int_t m, const ae_int_t n, integer_1d_array &pivots, const xparams _xparams = alglib::xdefault); void cmatrixlu(complex_2d_array &a, integer_1d_array &pivots, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Cache-oblivious Cholesky decomposition

The algorithm computes the Cholesky decomposition of a Hermitian positive-
definite matrix. The result is a representation of A as A=U'*U or A=L*L'
(here X' denotes conj(X^T)).

INPUT PARAMETERS:
    A       -   upper or lower triangle of the matrix to be factorized.
                Array with elements [0..N-1, 0..N-1].
    N       -   size of matrix A.
    IsUpper -   if IsUpper=True, then A contains the upper triangle of a
                Hermitian matrix, otherwise A contains the lower one.

OUTPUT PARAMETERS:
    A       -   the result of factorization. If IsUpper=True, then the
                upper triangle contains the matrix U, so that A = U'*U,
                and the elements below the main diagonal are not modified.
                Similarly, if IsUpper=False.

RESULT:
    If the matrix is positive-definite, the function returns True.
    Otherwise, the function returns False. In that case the contents of A
    are undefined.

! FREE EDITION OF ALGLIB:
!
! The Free Edition of ALGLIB supports the following important features of
! this function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend that you read the 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual to find out how to activate SIMD support in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! The Commercial Edition of ALGLIB includes the following important
! improvements to this function:
! * high-performance native backend with the same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend that you read the 'Working with commercial version' section
! of the ALGLIB Reference Manual to find out how to use the performance-
! related features provided by the commercial edition of ALGLIB.

  -- ALGLIB routine --
     15.12.2009-22.01.2018
     Bochkanov Sergey
*************************************************************************/
bool hpdmatrixcholesky(complex_2d_array &a, const ae_int_t n, const bool isupper, const xparams _xparams = alglib::xdefault);
bool hpdmatrixcholesky(complex_2d_array &a, const bool isupper, const xparams _xparams = alglib::xdefault);
/*************************************************************************
LU decomposition of a general real matrix with row pivoting

A is represented as A = P*L*U, where:
* L is a lower unitriangular matrix
* U is an upper triangular matrix
* P = P0*P1*...*PK, K=min(M,N)-1, with Pi being a permutation matrix for
  rows I and Pivots[I]

INPUT PARAMETERS:
    A       -   array[0..M-1, 0..N-1].
    M       -   number of rows in matrix A.
    N       -   number of columns in matrix A.

OUTPUT PARAMETERS:
    A       -   matrices L and U in compact form:
                * L is stored under the main diagonal
                * U is stored on and above the main diagonal
    Pivots  -   permutation matrix in compact form.
                array[0..Min(M-1,N-1)].

! FREE EDITION OF ALGLIB:
!
! The Free Edition of ALGLIB supports the following important features of
! this function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend that you read the 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual to find out how to activate SIMD support in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! The Commercial Edition of ALGLIB includes the following important
! improvements to this function:
! * high-performance native backend with the same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend that you read the 'Working with commercial version' section
! of the ALGLIB Reference Manual to find out how to use the performance-
! related features provided by the commercial edition of ALGLIB.

  -- ALGLIB routine --
     10.01.2010
     Bochkanov Sergey
*************************************************************************/
void rmatrixlu(real_2d_array &a, const ae_int_t m, const ae_int_t n, integer_1d_array &pivots, const xparams _xparams = alglib::xdefault);
void rmatrixlu(real_2d_array &a, integer_1d_array &pivots, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Sparse Cholesky decomposition for a matrix stored in any sparse storage
format, without rows/cols permutation.

This function is the most convenient (fewest parameters to specify),
although also the least efficient, version of sparse Cholesky.

IMPORTANT: the commercial edition of ALGLIB can parallelize this function.
           The specific speed-up due to parallelism heavily depends on the
           sparsity pattern, with the following matrix classes being the
           easiest ones to parallelize:
           * large matrices with many nearly-independent sets of rows/cols
           * matrices with large dense blocks on the diagonal
           See the ALGLIB Reference Manual for more information on how to
           activate parallelism support.

Internally it:
* calls the SparseCholeskyAnalyze() function to perform the symbolic
  analysis phase with no permutation being configured
* calls the SparseCholeskyFactorize() function to perform the numerical
  phase of the factorization

The following alternatives may result in better performance:
* using SparseCholeskyP(), which selects the best pivoting available,
  which almost always results in improved sparsity and cache locality
* using the SparseCholeskyAnalyze() and SparseCholeskyFactorize()
  functions directly, which may improve the performance of repetitive
  factorizations with the same sparsity pattern. The latter also allows
  one to perform an LDLT factorization of an indefinite matrix (one with
  a strictly diagonal D, which is known to be stable only in a few
  special cases, like quasi-definite matrices).

INPUT PARAMETERS:
    A       -   a square NxN sparse matrix, stored in any storage format.
    IsUpper -   if IsUpper=True, then the factorization is performed on
                the upper triangle. The other triangle is ignored on
                input and dropped on output. Similarly, if IsUpper=False,
                the lower triangle is processed.

OUTPUT PARAMETERS:
    A       -   the result of factorization, stored in CRS format:
                * if IsUpper=True, then the upper triangle contains the
                  matrix U such that A = U^T*U and the lower triangle is
                  empty.
                * similarly, if IsUpper=False, then the lower triangular
                  L is returned and we have A = L*(L^T).
                Note that THIS function does not perform permutation of
                the rows to reduce fill-in.

RESULT:
    If the matrix is positive-definite, the function returns True.
    Otherwise, the function returns False. In that case the contents of A
    are undefined.

NOTE: for performance reasons this function does NOT check that the input
      matrix contains only finite values. It is your responsibility to
      make sure that there are no infinite or NAN values in the matrix.

  -- ALGLIB routine --
     16.09.2020
     Bochkanov Sergey
*************************************************************************/
bool sparsecholesky(sparsematrix &a, const bool isupper, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Sparse Cholesky/LDLT decomposition: symbolic analysis phase.

This function is a part of the 'expert' sparse Cholesky API:
* SparseCholeskyAnalyze(), which performs the symbolic analysis phase and
  loads the matrix to be factorized into internal storage
* SparseCholeskySetModType(), which allows one to use modified
  Cholesky/LDLT with lower bounds on pivot magnitudes and additional
  overflow safeguards
* SparseCholeskyFactorize(), which performs the numeric factorization
  using the precomputed symbolic analysis and the internally stored
  matrix, and outputs the result
* SparseCholeskyReload(), which reloads one more matrix with the same
  sparsity pattern into internal storage, so one may reuse previously
  allocated temporaries and previously performed symbolic analysis

This specific function performs the preliminary analysis of the
Cholesky/LDLT factorization. It allows one to choose different
permutation types and to choose between classic Cholesky and indefinite
LDLT factorization (the latter is computed with a strictly diagonal D,
i.e. without Bunch-Kaufman pivoting).

NOTE: the L*D*LT family of factorizations may be used to factorize
      indefinite matrices. However, numerical stability is guaranteed
      ONLY for the class of quasi-definite matrices.

NOTE: all internal processing is performed with lower triangular matrices
      stored in CRS format. Any other storage format and/or upper
      triangular storage means that one format conversion and/or one
      transposition will be performed internally for the analysis and
      factorization phases. Thus, the highest performance is achieved
      when the input is a lower triangular CRS matrix.

INPUT PARAMETERS:
    A       -   sparse square matrix in any sparse storage format.
    IsUpper -   whether the upper or lower triangle is decomposed (the
                other one is ignored).
    FactType -  factorization type:
                * 0 for traditional Cholesky of an SPD matrix
                * 1 for LDLT decomposition with a strictly diagonal D,
                  which may have non-positive entries.
    PermType -  permutation type:
                *-1 for no permutation
                * 0 for the best fill-in reducing permutation available,
                  which is 3 in the current version
                * 1 for supernodal ordering (improves locality and
                  performance, does NOT change the fill-in factor)
                * 2 for the original AMD ordering
                * 3 for the improved AMD (approximate minimum degree)
                  ordering with better handling of matrices with dense
                  rows/columns

OUTPUT PARAMETERS:
    Analysis -  contains:
                * the symbolic analysis of the matrix structure which
                  will be used later to guide the numerical factorization
                * the specific numeric values loaded into internal
                  memory, waiting for the factorization to be performed

This function fails if and only if the matrix A is symbolically
degenerate, i.e. has a diagonal element which is exactly zero. In that
case False is returned and the contents of the Analysis object are
undefined.

  -- ALGLIB routine --
     20.09.2020
     Bochkanov Sergey
*************************************************************************/
bool sparsecholeskyanalyze(const sparsematrix &a, const bool isupper, const ae_int_t facttype, const ae_int_t permtype, sparsedecompositionanalysis &analysis, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Sparse Cholesky decomposition: numerical factorization phase.

IMPORTANT: the commercial edition of ALGLIB can parallelize this function.
           The specific speed-up due to parallelism heavily depends on the
           sparsity pattern, with the following matrix classes being the
           easiest ones to parallelize:
           * large matrices with many nearly-independent sets of rows/cols
           * matrices with large dense blocks on the diagonal
           See the ALGLIB Reference Manual for more information on how to
           activate parallelism support.

This function is a part of the 'expert' sparse Cholesky API:
* SparseCholeskyAnalyze(), which performs the symbolic analysis phase and
  loads the matrix to be factorized into internal storage
* SparseCholeskySetModType(), which allows one to use modified
  Cholesky/LDLT with lower bounds on pivot magnitudes and additional
  overflow safeguards
* SparseCholeskyFactorize(), which performs the numeric factorization
  using the precomputed symbolic analysis and the internally stored
  matrix, and outputs the result
* SparseCholeskyReload(), which reloads one more matrix with the same
  sparsity pattern into internal storage, so one may reuse previously
  allocated temporaries and previously performed symbolic analysis

Depending on the settings specified during the SparseCholeskyAnalyze()
call, it may produce a classic Cholesky or an L*D*LT decomposition (with
a strictly diagonal D), without permutation or with a performance-
enhancing permutation P.

NOTE: all internal processing is performed with lower triangular matrices
      stored in CRS format. Any other storage format and/or upper
      triangular storage means that one format conversion and/or one
      transposition will be performed internally for the analysis and
      factorization phases. Thus, the highest performance is achieved
      when the input is a lower triangular CRS matrix, and lower
      triangular output is requested.

NOTE: the L*D*LT family of factorizations may be used to factorize
      indefinite matrices. However, numerical stability is guaranteed
      ONLY for the class of quasi-definite matrices.

INPUT PARAMETERS:
    Analysis -  prior analysis with an internally stored matrix which
                will be factorized
    NeedUpper - whether upper triangular or lower triangular output is
                needed

OUTPUT PARAMETERS:
    A       -   Cholesky decomposition of A stored in lower triangular
                CRS format, i.e. A=L*L' (or upper triangular CRS, with
                A=U'*U, depending on the NeedUpper parameter).
    D       -   array[N], diagonal factor. If no diagonal factor was
                required during the analysis phase, it is still returned
                but filled with 1's.
    P       -   array[N], pivots. The permutation matrix P is a product
                P(0)*P(1)*...*P(N-1), where P(i) is a permutation of
                row/col I and P[I] (with P[I]>=I). If no permutation was
                requested during the analysis phase, it is still returned
                but filled with the identity permutation.

The function returns True when the factorization results in a
nondegenerate matrix. False is returned when the factorization fails
(Cholesky factorization of an indefinite matrix) or the LDLT
factorization has exactly zero elements on the diagonal. In the latter
case the contents of A, D and P are undefined.

The Analysis object is not changed during the factorization. Subsequent
calls to SparseCholeskyFactorize() will result in the same factorization
being performed one more time.

  -- ALGLIB routine --
     20.09.2020
     Bochkanov Sergey
*************************************************************************/
bool sparsecholeskyfactorize(sparsedecompositionanalysis &analysis, const bool needupper, sparsematrix &a, real_1d_array &d, integer_1d_array &p, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Sparse Cholesky decomposition for a matrix stored in any sparse storage
format, with a performance-enhancing permutation of rows/cols.

The present version is configured to perform a supernodal permutation
with a sparsity-reducing ordering.

IMPORTANT: the commercial edition of ALGLIB can parallelize this function.
           The specific speed-up due to parallelism heavily depends on the
           sparsity pattern, with the following matrix classes being the
           easiest ones to parallelize:
           * large matrices with many nearly-independent sets of rows/cols
           * matrices with large dense blocks on the diagonal
           See the ALGLIB Reference Manual for more information on how to
           activate parallelism support.

This function is a wrapper around the generic sparse decomposition
functions that internally:
* calls the SparseCholeskyAnalyze() function to perform the symbolic
  analysis phase with the best available permutation being configured
* calls the SparseCholeskyFactorize() function to perform the numerical
  phase of the factorization

NOTE: using SparseCholeskyAnalyze() and SparseCholeskyFactorize()
      directly may improve the performance of repetitive factorizations
      with the same sparsity pattern. It also allows one to perform an
      LDLT factorization of an indefinite matrix - a factorization with a
      strictly diagonal D, which is known to be stable only in a few
      special cases, like quasi-definite matrices.

INPUT PARAMETERS:
    A       -   a square NxN sparse matrix, stored in any storage format.
    IsUpper -   if IsUpper=True, then the factorization is performed on
                the upper triangle. The other triangle is ignored on
                input and dropped on output. Similarly, if IsUpper=False,
                the lower triangle is processed.

OUTPUT PARAMETERS:
    A       -   the result of factorization, stored in CRS format:
                * if IsUpper=True, then the upper triangle contains the
                  matrix U such that A = U^T*U and the lower triangle is
                  empty.
                * similarly, if IsUpper=False, then the lower triangular
                  L is returned and we have A = L*(L^T).
    P       -   a row/column permutation, a product P0*P1*...*Pk, k=N-1,
                with Pi being a permutation of rows/cols I and P[I]

RESULT:
    If the matrix is positive-definite, the function returns True.
    Otherwise, the function returns False. In that case the contents of A
    are undefined.

NOTE: for performance reasons this function does NOT check that the input
      matrix contains only finite values. It is your responsibility to
      make sure that there are no infinite or NAN values in the matrix.

  -- ALGLIB routine --
     16.09.2020
     Bochkanov Sergey
*************************************************************************/
bool sparsecholeskyp(sparsematrix &a, const bool isupper, integer_1d_array &p, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Sparse Cholesky decomposition: update the internally stored matrix with
another one having exactly the same sparsity pattern.

This function is a part of the 'expert' sparse Cholesky API:
* SparseCholeskyAnalyze(), which performs the symbolic analysis phase and
  loads the matrix to be factorized into internal storage
* SparseCholeskySetModType(), which allows one to use modified
  Cholesky/LDLT with lower bounds on pivot magnitudes and additional
  overflow safeguards
* SparseCholeskyFactorize(), which performs the numeric factorization
  using the precomputed symbolic analysis and the internally stored
  matrix, and outputs the result
* SparseCholeskyReload(), which reloads one more matrix with the same
  sparsity pattern into internal storage, so one may reuse previously
  allocated temporaries and previously performed symbolic analysis

This specific function replaces the internally stored numerical values
with ones from another sparse matrix (which must have exactly the same
sparsity pattern as the one that was used for the initial
SparseCholeskyAnalyze() call).

NOTE: all internal processing is performed with lower triangular matrices
      stored in CRS format. Any other storage format and/or upper
      triangular storage means that one format conversion and/or one
      transposition will be performed internally for the analysis and
      factorization phases. Thus, the highest performance is achieved
      when the input is a lower triangular CRS matrix.

INPUT PARAMETERS:
    Analysis -  analysis object
    A       -   sparse square matrix in any sparse storage format. It
                MUST have exactly the same sparsity pattern as the matrix
                that was passed to SparseCholeskyAnalyze(). Any
                difference (missing elements or additional elements) may
                result in unpredictable and undefined behavior - the
                algorithm may fail due to a memory access violation.
    IsUpper -   whether the upper or lower triangle is decomposed (the
                other one is ignored).

OUTPUT PARAMETERS:
    Analysis -  contains:
                * the symbolic analysis of the matrix structure which
                  will be used later to guide the numerical factorization
                * the specific numeric values loaded into internal
                  memory, waiting for the factorization to be performed

  -- ALGLIB routine --
     20.09.2020
     Bochkanov Sergey
*************************************************************************/
void sparsecholeskyreload(sparsedecompositionanalysis &analysis, const sparsematrix &a, const bool isupper, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Sparse Cholesky decomposition for a skyline matrix using an in-place
algorithm, without allocating additional storage.

The algorithm computes the Cholesky decomposition of a symmetric
positive-definite sparse matrix. The result is a representation of A as
A=U^T*U or A=L*L^T.

This function allows one to perform very efficient decomposition of
low-profile matrices (average bandwidth is ~5-10 elements). For larger
matrices it is recommended to use the supernodal Cholesky decomposition:
SparseCholeskyP() or SparseCholeskyAnalyze()/SparseCholeskyFactorize().

INPUT PARAMETERS:
    A       -   sparse matrix in skyline storage (SKS) format.
    N       -   size of matrix A (can be smaller than the actual size
                of A)
    IsUpper -   if IsUpper=True, then the factorization is performed on
                the upper triangle. The other triangle is ignored (it may
                contain some data, but it is not changed).

OUTPUT PARAMETERS:
    A       -   the result of factorization, stored in SKS. If
                IsUpper=True, then the upper triangle contains the matrix
                U, such that A = U^T*U. The lower triangle is not
                changed. Similarly, if IsUpper=False, then L is returned
                and we have A = L*(L^T).
                Note that THIS function does not perform permutation of
                the rows to reduce bandwidth.

RESULT:
    If the matrix is positive-definite, the function returns True.
    Otherwise, the function returns False. In that case the contents of A
    are undefined.

NOTE: for performance reasons this function does NOT check that the input
      matrix contains only finite values. It is your responsibility to
      make sure that there are no infinite or NAN values in the matrix.

  -- ALGLIB routine --
     16.01.2014
     Bochkanov Sergey
*************************************************************************/
bool sparsecholeskyskyline(sparsematrix &a, const ae_int_t n, const bool isupper, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Sparse LU decomposition with column pivoting for sparsity and row
pivoting for stability. The input must be a square sparse matrix stored
in CRS format.

The algorithm computes the LU decomposition of a general square matrix
(rectangular ones are not supported). The result is a representation of A
as A = P*L*U*Q, where:
* L is a lower unitriangular matrix
* U is an upper triangular matrix
* P = P0*P1*...*PK, K=N-1, with Pi being a permutation matrix for rows I
  and P[I]
* Q = QK*...*Q1*Q0, K=N-1, with Qi being a permutation matrix for cols I
  and Q[I]

This function pivots columns for higher sparsity, and then pivots rows
for stability (larger element on the diagonal).

INPUT PARAMETERS:
    A       -   sparse NxN matrix in CRS format. An exception is
                generated if the matrix is non-CRS or non-square.
    PivotType - pivoting strategy:
                * 0 for the best pivoting available (2 in the current
                  version)
                * 1 for row-only pivoting (NOT RECOMMENDED)
                * 2 for complete pivoting, which produces the sparsest
                  outputs

OUTPUT PARAMETERS:
    A       -   the result of factorization, matrices L and U stored in
                compact form using the CRS sparse storage format:
                * the lower unitriangular L is stored strictly under the
                  main diagonal
                * the upper triangular U is stored ON and ABOVE the main
                  diagonal
    P       -   row permutation matrix in compact form, array[N]
    Q       -   col permutation matrix in compact form, array[N]

This function always succeeds, i.e. it ALWAYS returns a valid
factorization, but for your convenience it also returns a boolean value
which helps to detect symbolically degenerate matrices:
* the function returns TRUE if the matrix was factorized AND is
  symbolically non-degenerate
* the function returns FALSE if the matrix was factorized but U has
  strictly zero elements on the diagonal (the factorization is returned
  anyway).

  -- ALGLIB routine --
     03.09.2018
     Bochkanov Sergey
*************************************************************************/
bool sparselu(sparsematrix &a, const ae_int_t pivottype, integer_1d_array &p, integer_1d_array &q, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Cache-oblivious Cholesky decomposition

The algorithm computes the Cholesky decomposition of a symmetric
positive-definite matrix. The result is a representation of A as A=U^T*U
or A=L*L^T.

INPUT PARAMETERS:
    A       -   upper or lower triangle of the matrix to be factorized.
                Array with elements [0..N-1, 0..N-1].
    N       -   size of matrix A.
    IsUpper -   if IsUpper=True, then A contains the upper triangle of a
                symmetric matrix, otherwise A contains the lower one.

OUTPUT PARAMETERS:
    A       -   the result of factorization. If IsUpper=True, then the
                upper triangle contains the matrix U, so that A = U^T*U,
                and the elements below the main diagonal are not
                modified. Similarly, if IsUpper=False.

RESULT:
    If the matrix is positive-definite, the function returns True.
    Otherwise, the function returns False. In that case the contents of A
    are undefined.

! FREE EDITION OF ALGLIB:
!
! The Free Edition of ALGLIB supports the following important features of
! this function:
! * C++ version: x64 SIMD support using C++ intrinsics
! * C# version: x64 SIMD support using NET5/NetCore hardware intrinsics
!
! We recommend that you read the 'Compiling ALGLIB' section of the ALGLIB
! Reference Manual to find out how to activate SIMD support in ALGLIB.

! COMMERCIAL EDITION OF ALGLIB:
!
! The Commercial Edition of ALGLIB includes the following important
! improvements to this function:
! * high-performance native backend with the same C# interface (C# version)
! * multithreading support (C++ and C# versions)
! * hardware vendor (Intel) implementations of linear algebra primitives
!   (C++ and C# versions, x86/x64 platform)
!
! We recommend that you read the 'Working with commercial version' section
! of the ALGLIB Reference Manual to find out how to use the performance-
! related features provided by the commercial edition of ALGLIB.

  -- ALGLIB routine --
     15.12.2009
     Bochkanov Sergey
*************************************************************************/
bool spdmatrixcholesky(real_2d_array &a, const ae_int_t n, const bool isupper, const xparams _xparams = alglib::xdefault);
bool spdmatrixcholesky(real_2d_array &a, const bool isupper, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Update of Cholesky decomposition: rank-1 update to the original A.

This function uses an internally allocated buffer which is not saved
between subsequent calls. So, if you perform a lot of subsequent updates,
we recommend you to use the "buffered" version of this function:
SPDMatrixCholeskyUpdateAdd1Buf().

INPUT PARAMETERS:
    A       -   upper or lower Cholesky factor.
                Array with elements [0..N-1, 0..N-1].
                An exception is thrown if the array size is too small.
    N       -   size of matrix A, N>0
    IsUpper -   if IsUpper=True, then A contains the upper Cholesky
                factor; otherwise A contains the lower one.
    U       -   array[N], rank-1 update to A: A_mod = A + u*u'
                An exception is thrown if the array size is too small.

OUTPUT PARAMETERS:
    A       -   updated factorization. If IsUpper=True, then the upper
                triangle contains the matrix U, and the elements below
                the main diagonal are not modified. Similarly, if
                IsUpper=False.

NOTE: this function always succeeds, so it does not return a completion
      code

NOTE: this function checks the sizes of the input arrays, but it does NOT
      check for the presence of infinities or NANs.

  -- ALGLIB --
     03.02.2014
     Sergey Bochkanov
*************************************************************************/
void spdmatrixcholeskyupdateadd1(real_2d_array &a, const ae_int_t n, const bool isupper, const real_1d_array &u, const xparams _xparams = alglib::xdefault);
void spdmatrixcholeskyupdateadd1(real_2d_array &a, const bool isupper, const real_1d_array &u, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Update of Cholesky decomposition: rank-1 update to the original A.
"Buffered" version which uses a preallocated buffer that is saved between
subsequent function calls.

See the comments for SPDMatrixCholeskyUpdateAdd1() for more information.

INPUT PARAMETERS:
    A       -   upper or lower Cholesky factor.
                Array with elements [0..N-1, 0..N-1].
                An exception is thrown if the array size is too small.
    N       -   size of matrix A, N>0
    IsUpper -   if IsUpper=True, then A contains the upper Cholesky
                factor; otherwise A contains the lower one.
    U       -   array[N], rank-1 update to A: A_mod = A + u*u'
                An exception is thrown if the array size is too small.
    BufR    -   possibly preallocated buffer; automatically resized if
                needed. It is recommended to reuse this buffer if you
                perform a lot of subsequent decompositions.

OUTPUT PARAMETERS:
    A       -   updated factorization. If IsUpper=True, then the upper
                triangle contains the matrix U, and the elements below
                the main diagonal are not modified. Similarly, if
                IsUpper=False.

  -- ALGLIB --
     03.02.2014
     Sergey Bochkanov
*************************************************************************/
void spdmatrixcholeskyupdateadd1buf(real_2d_array &a, const ae_int_t n, const bool isupper, const real_1d_array &u, real_1d_array &bufr, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Update of Cholesky decomposition: "fixing" some variables.

This function uses an internally allocated buffer which is not saved
between subsequent calls. So, if you perform a lot of subsequent updates,
we recommend you to use the "buffered" version of this function:
SPDMatrixCholeskyUpdateFixBuf().

"FIXING" EXPLAINED:

    Suppose we have an N*N positive definite matrix A. "Fixing" some
    variable means filling the corresponding row/column of A with zeros,
    and setting the diagonal element to 1.

    For example, if we fix the 2nd variable in the 4*4 matrix A, it
    becomes Af:

        ( A00  A01  A02  A03 )      ( Af00  0   Af02  Af03 )
        ( A10  A11  A12  A13 )      (  0    1    0     0   )
        ( A20  A21  A22  A23 )  =>  ( Af20  0   Af22  Af23 )
        ( A30  A31  A32  A33 )      ( Af30  0   Af32  Af33 )

    If we have the Cholesky decomposition of A, it must be recalculated
    after the variables were fixed. However, it is possible to use an
    efficient algorithm which needs O(K*N^2) time to "fix" K variables,
    given the Cholesky decomposition of the original, "unfixed" A.

INPUT PARAMETERS:
    A       -   upper or lower Cholesky factor.
                Array with elements [0..N-1, 0..N-1].
                An exception is thrown if the array size is too small.
    N       -   size of matrix A, N>0
    IsUpper -   if IsUpper=True, then A contains the upper Cholesky
                factor; otherwise A contains the lower one.
    Fix     -   array[N], the I-th element is True if the I-th variable
                must be fixed. An exception is thrown if the array size
                is too small.

OUTPUT PARAMETERS:
    A       -   updated factorization. If IsUpper=True, then the upper
                triangle contains the matrix U, and the elements below
                the main diagonal are not modified. Similarly, if
                IsUpper=False.

NOTE: this function always succeeds, so it does not return a completion
      code

NOTE: this function checks the sizes of the input arrays, but it does NOT
      check for the presence of infinities or NANs.

NOTE: this function is efficient only for a moderate number of fixed
      variables - say, 0.1*N or 0.3*N. For a larger number of variables
      it will still work, but you may get better performance with a
      straightforward Cholesky.

  -- ALGLIB --
     03.02.2014
     Sergey Bochkanov
*************************************************************************/
void spdmatrixcholeskyupdatefix(real_2d_array &a, const ae_int_t n, const bool isupper, const boolean_1d_array &fix, const xparams _xparams = alglib::xdefault);
void spdmatrixcholeskyupdatefix(real_2d_array &a, const bool isupper, const boolean_1d_array &fix, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Update of Cholesky decomposition: "fixing" some variables.
"Buffered" version which uses a preallocated buffer that is saved between
subsequent function calls.

See the comments for SPDMatrixCholeskyUpdateFix() for more information.

INPUT PARAMETERS:
    A       -   upper or lower Cholesky factor.
                Array with elements [0..N-1, 0..N-1].
                An exception is thrown if the array size is too small.
    N       -   size of matrix A, N>0
    IsUpper -   if IsUpper=True, then A contains the upper Cholesky
                factor; otherwise A contains the lower one.
    Fix     -   array[N], the I-th element is True if the I-th variable
                must be fixed. An exception is thrown if the array size
                is too small.
    BufR    -   possibly preallocated buffer; automatically resized if
                needed. It is recommended to reuse this buffer if you
                perform a lot of subsequent decompositions.

OUTPUT PARAMETERS:
    A       -   updated factorization. If IsUpper=True, then the upper
                triangle contains the matrix U, and the elements below
                the main diagonal are not modified. Similarly, if
                IsUpper=False.

  -- ALGLIB --
     03.02.2014
     Sergey Bochkanov
*************************************************************************/
void spdmatrixcholeskyupdatefixbuf(real_2d_array &a, const ae_int_t n, const bool isupper, const boolean_1d_array &fix, real_1d_array &bufr, const xparams _xparams = alglib::xdefault);
hyperbolicsinecosineintegrals
sinecosineintegrals
/*************************************************************************
Hyperbolic sine and cosine integrals

Approximates the integrals

                            x
                            -
                           |   cosh t - 1
   Chi(x) = eul + ln x +   |   ---------- dt,
                           |        t
                           -
                           0

              x
              -
             |   sinh t
   Shi(x) =  |   ------ dt
             |      t
             -
             0

where eul = 0.57721566490153286061 is Euler's constant. The integrals are
evaluated by power series for x < 8 and by Chebyshev expansions for x
between 8 and 88. For large x, both functions approach exp(x)/2x.
Arguments greater than 88 in magnitude return MAXNUM.

ACCURACY:
   Test interval 0 to 88.
   Relative error:
   arithmetic   function   # trials      peak         rms
      IEEE         Shi       30000       6.9e-16     1.6e-16
   Absolute error, except relative when |Chi| > 1:
      IEEE         Chi       30000       8.4e-16     1.4e-16

Cephes Math Library Release 2.8:  June, 2000
Copyright 1984, 1987, 2000 by Stephen L. Moshier
*************************************************************************/
void hyperbolicsinecosineintegrals(const double x, double &shi, double &chi, const xparams _xparams = alglib::xdefault);
/*************************************************************************
Sine and cosine integrals

Evaluates the integrals

                            x
                            -
                           |   cos t - 1
   Ci(x) = eul + ln x +    |   --------- dt,
                           |       t
                           -
                           0

              x
              -
             |   sin t
   Si(x) =   |   ----- dt
             |     t
             -
             0

where eul = 0.57721566490153286061 is Euler's constant. The integrals are
approximated by rational functions. For x > 8, auxiliary functions f(x)
and g(x) are employed such that

   Ci(x) = f(x) sin(x) - g(x) cos(x)
   Si(x) = pi/2 - f(x) cos(x) - g(x) sin(x)

ACCURACY:
   Test interval = [0,50].
   Absolute error, except relative when > 1:
   arithmetic   function   # trials      peak         rms
      IEEE         Si        30000       4.4e-16     7.3e-17
      IEEE         Ci        30000       6.9e-16     5.1e-17

Cephes Math Library Release 2.1:  January, 1989
Copyright 1984, 1987, 1989 by Stephen L. Moshier
*************************************************************************/
void sinecosineintegrals(const double x, double &si, double &ci, const xparams _xparams = alglib::xdefault);
ftest
onesamplevariancetest
/*************************************************************************
Two-sample F-test

This test checks three hypotheses about the dispersions of the given
samples. The following tests are performed:
* two-tailed test (null hypothesis - the dispersions are equal)
* left-tailed test (null hypothesis - the dispersion of the first sample
  is greater than or equal to the dispersion of the second sample)
* right-tailed test (null hypothesis - the dispersion of the first sample
  is less than or equal to the dispersion of the second sample)

The test is based on the following assumptions:
* the given samples have normal distributions
* the samples are independent.

Input parameters:
    X   -   sample 1. Array whose index goes from 0 to N-1.
    N   -   sample size.
    Y   -   sample 2. Array whose index goes from 0 to M-1.
    M   -   sample size.

Output parameters:
    BothTails   -   p-value for the two-tailed test.
                    If BothTails is less than the given significance
                    level, the null hypothesis is rejected.
    LeftTail    -   p-value for the left-tailed test.
                    If LeftTail is less than the given significance
                    level, the null hypothesis is rejected.
    RightTail   -   p-value for the right-tailed test.
                    If RightTail is less than the given significance
                    level, the null hypothesis is rejected.

  -- ALGLIB --
     Copyright 19.09.2006 by Bochkanov Sergey
*************************************************************************/
void ftest(const real_1d_array &x, const ae_int_t n, const real_1d_array &y, const ae_int_t m, double &bothtails, double &lefttail, double &righttail, const xparams _xparams = alglib::xdefault);
/*************************************************************************
One-sample chi-square test

This test checks three hypotheses about the dispersion of the given
sample. The following tests are performed:
* two-tailed test (null hypothesis - the dispersion equals the given
  number)
* left-tailed test (null hypothesis - the dispersion is greater than or
  equal to the given number)
* right-tailed test (null hypothesis - the dispersion is less than or
  equal to the given number)

The test is based on the following assumptions:
* the given sample has a normal distribution

Input parameters:
    X           -   sample 1. Array whose index goes from 0 to N-1.
    N           -   size of the sample.
    Variance    -   dispersion value to compare with.

Output parameters:
    BothTails   -   p-value for two-tailed test.
                    If BothTails is less than the given significance level
                    the null hypothesis is rejected.
    LeftTail    -   p-value for left-tailed test.
                    If LeftTail is less than the given significance level,
                    the null hypothesis is rejected.
    RightTail   -   p-value for right-tailed test.
                    If RightTail is less than the given significance level
                    the null hypothesis is rejected.

  -- ALGLIB --
     Copyright 19.09.2006 by Bochkanov Sergey
*************************************************************************/
void onesamplevariancetest(const real_1d_array &x, const ae_int_t n, const double variance, double &bothtails, double &lefttail, double &righttail, const xparams _xparams = alglib::xdefault);
wilcoxonsignedranktest
/*************************************************************************
Wilcoxon signed-rank test

This test checks three hypotheses about the median of the given sample.
The following tests are performed:
* two-tailed test (null hypothesis - the median is equal to the given
  value)
* left-tailed test (null hypothesis - the median is greater than or equal
  to the given value)
* right-tailed test (null hypothesis - the median is less than or equal
  to the given value)

Requirements:
* the scale of measurement should be ordinal, interval or ratio (i.e. the
  test cannot be applied to nominal variables)
* the distribution should be continuous and symmetric relative to its
  median
* the number of distinct values in the X array should be greater than 4

The test is non-parametric and doesn't require the distribution of X to
be normal.

Input parameters:
    X       -   sample. Array whose index goes from 0 to N-1.
    N       -   size of the sample.
    Median  -   assumed median value.

Output parameters:
    BothTails   -   p-value for two-tailed test.
                    If BothTails is less than the given significance level
                    the null hypothesis is rejected.
    LeftTail    -   p-value for left-tailed test.
                    If LeftTail is less than the given significance level,
                    the null hypothesis is rejected.
    RightTail   -   p-value for right-tailed test.
                    If RightTail is less than the given significance level
                    the null hypothesis is rejected.

To calculate the p-values, a special approximation is used. This method
lets us calculate p-values with two decimal places in the interval
[0.0001, 1]. "Two decimal places" does not sound very impressive, but in
practice a relative error of less than 1% is enough to make a decision.

There is no approximation outside the [0.0001, 1] interval. Therefore, if
the significance level lies outside this interval, the test returns
0.0001.

  -- ALGLIB --
     Copyright 08.09.2006 by Bochkanov Sergey
*************************************************************************/
void wilcoxonsignedranktest(const real_1d_array &x, const ae_int_t n, const double e, double &bothtails, double &lefttail, double &righttail, const xparams _xparams = alglib::xdefault);
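The test statistic is the signed-rank sum: drop zero differences x[i]-Median, rank the absolute differences (averaging ranks within tie groups), and sum the ranks belonging to positive differences. A hypothetical sketch of that statistic in plain C++ (the p-value approximation itself is ALGLIB-internal and not reproduced here; the helper name wplus is ours):

```cpp
#include <vector>
#include <algorithm>
#include <cmath>
#include <utility>

// Signed-rank sum W+: ranks are taken over |x[i]-median| (ties averaged),
// zero differences are dropped, and ranks of positive differences summed.
double wplus(const std::vector<double> &x, double median)
{
    std::vector< std::pair<double,int> > d; // (|diff|, sign of diff)
    for(double v: x)
    {
        double diff = v-median;
        if( diff!=0.0 )
            d.push_back(std::make_pair(std::fabs(diff), diff>0 ? 1 : -1));
    }
    std::sort(d.begin(), d.end());          // sort by |diff|
    double w = 0.0;
    size_t i = 0;
    while( i<d.size() )
    {
        size_t j = i;                       // find the tie group [i,j)
        while( j<d.size() && d[j].first==d[i].first )
            j++;
        double avgrank = 0.5*((i+1)+j);     // average rank of the group
        for(size_t k=i; k<j; k++)
            if( d[k].second>0 )
                w += avgrank;
        i = j;
    }
    return w;
}
```

For example, x = {1,2,3,4,5} with Median = 3 gives differences {-2,-1,1,2} (the zero is dropped), tie-averaged ranks {3.5, 1.5, 1.5, 3.5}, and W+ = 1.5 + 3.5 = 5.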
xdebugrecord1
xdebugb1appendcopy
xdebugb1count
xdebugb1not
xdebugb1outeven
xdebugb2count
xdebugb2not
xdebugb2outsin
xdebugb2transpose
xdebugc1appendcopy
xdebugc1neg
xdebugc1outeven
xdebugc1sum
xdebugc2neg
xdebugc2outsincos
xdebugc2sum
xdebugc2transpose
xdebugi1appendcopy
xdebugi1neg
xdebugi1outeven
xdebugi1sum
xdebugi2neg
xdebugi2outsin
xdebugi2sum
xdebugi2transpose
xdebuginitrecord1
xdebugmaskedbiasedproductsum
xdebugr1appendcopy
xdebugr1internalcopyandsum
xdebugr1neg
xdebugr1outeven
xdebugr1sum
xdebugr2internalcopyandsum
xdebugr2neg
xdebugr2outsin
xdebugr2sum
xdebugr2transpose
xdebugupdaterecord1
/*************************************************************************
This is a debug class intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

  -- ALGLIB --
     Copyright 20.07.2021 by Bochkanov Sergey
*************************************************************************/
class xdebugrecord1
{
public:
    xdebugrecord1();
    xdebugrecord1(const xdebugrecord1 &rhs);
    xdebugrecord1& operator=(const xdebugrecord1 &rhs);
    virtual ~xdebugrecord1();
    ae_int_t i;
    alglib::complex c;
    real_1d_array a;
};
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Appends a copy of the array to itself. The array is passed using the
"var" convention.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
void xdebugb1appendcopy(boolean_1d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Counts the number of True values in a boolean 1D array.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
ae_int_t xdebugb1count(const boolean_1d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Replaces all values in the array by NOT(a[i]). The array is passed using
the "shared" convention.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
void xdebugb1not(boolean_1d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Generates an N-element array with even-numbered elements set to True.
The array is passed using the "out" convention.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
void xdebugb1outeven(const ae_int_t n, boolean_1d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Counts the number of True values in a boolean 2D array.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
ae_int_t xdebugb2count(const boolean_2d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Replaces all values in the array by NOT(a[i,j]). The array is passed
using the "shared" convention.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
void xdebugb2not(boolean_2d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Generates an MxN matrix with elements set to "Sin(3*I+5*J)>0". The array
is passed using the "out" convention.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
void xdebugb2outsin(const ae_int_t m, const ae_int_t n, boolean_2d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Transposes the array. The array is passed using the "var" convention.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
void xdebugb2transpose(boolean_2d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Appends a copy of the array to itself. The array is passed using the
"var" convention.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
void xdebugc1appendcopy(complex_1d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Replaces all values in the array by -A[I]. The array is passed using the
"shared" convention.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
void xdebugc1neg(complex_1d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Generates an N-element array with even-numbered A[K] set to
(x,y) = (K*0.25, K*0.125) and odd-numbered ones set to 0. The array is
passed using the "out" convention.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
void xdebugc1outeven(const ae_int_t n, complex_1d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Returns the sum of the elements in the array.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
alglib::complex xdebugc1sum(const complex_1d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Replaces all values in the array by -a[i,j]. The array is passed using
the "shared" convention.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
void xdebugc2neg(complex_2d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Generates an MxN matrix with elements set to "Sin(3*I+5*J),Cos(3*I+5*J)".
The array is passed using the "out" convention.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
void xdebugc2outsincos(const ae_int_t m, const ae_int_t n, complex_2d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Returns the sum of the elements in the array.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
alglib::complex xdebugc2sum(const complex_2d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Transposes the array. The array is passed using the "var" convention.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
void xdebugc2transpose(complex_2d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Appends a copy of the array to itself. The array is passed using the
"var" convention.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
void xdebugi1appendcopy(integer_1d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Replaces all values in the array by -A[I]. The array is passed using the
"shared" convention.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
void xdebugi1neg(integer_1d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Generates an N-element array with even-numbered A[I] set to I, and
odd-numbered ones set to 0. The array is passed using the "out"
convention.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
void xdebugi1outeven(const ae_int_t n, integer_1d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Returns the sum of the elements in the array.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
ae_int_t xdebugi1sum(const integer_1d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Replaces all values in the array by -a[i,j]. The array is passed using
the "shared" convention.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
void xdebugi2neg(integer_2d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Generates an MxN matrix with elements set to "Sign(Sin(3*I+5*J))". The
array is passed using the "out" convention.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
void xdebugi2outsin(const ae_int_t m, const ae_int_t n, integer_2d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Returns the sum of the elements in the array.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
ae_int_t xdebugi2sum(const integer_2d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Transposes the array. The array is passed using the "var" convention.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
void xdebugi2transpose(integer_2d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Creates and returns an XDebugRecord1 structure:
* the integer and complex fields of Rec1 are set to 1 and 1+i
  respectively
* the array field of Rec1 is set to [2,3]

  -- ALGLIB --
     Copyright 27.05.2014 by Bochkanov Sergey
*************************************************************************/
void xdebuginitrecord1(xdebugrecord1 &rec1, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Returns the sum of a[i,j]*(1+b[i,j]) over all (i,j) such that c[i,j] is
True.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
double xdebugmaskedbiasedproductsum(const ae_int_t m, const ae_int_t n, const real_2d_array &a, const real_2d_array &b, const boolean_2d_array &c, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Appends a copy of the array to itself. The array is passed using the
"var" convention.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
void xdebugr1appendcopy(real_1d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Returns the sum of the elements in the array. Internally it creates a
copy of the array.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
double xdebugr1internalcopyandsum(const real_1d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Replaces all values in the array by -A[I]. The array is passed using the
"shared" convention.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
void xdebugr1neg(real_1d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Generates an N-element array with even-numbered A[I] set to I*0.25, and
odd-numbered ones set to 0. The array is passed using the "out"
convention.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
void xdebugr1outeven(const ae_int_t n, real_1d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Returns the sum of the elements in the array.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
double xdebugr1sum(const real_1d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Returns the sum of the elements in the array. Internally it creates a
copy of a.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
double xdebugr2internalcopyandsum(const real_2d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Replaces all values in the array by -a[i,j]. The array is passed using
the "shared" convention.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
void xdebugr2neg(real_2d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Generates an MxN matrix with elements set to "Sin(3*I+5*J)". The array
is passed using the "out" convention.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
void xdebugr2outsin(const ae_int_t m, const ae_int_t n, real_2d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Returns the sum of the elements in the array.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
double xdebugr2sum(const real_2d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Transposes the array. The array is passed using the "var" convention.

  -- ALGLIB --
     Copyright 11.10.2013 by Bochkanov Sergey
*************************************************************************/
void xdebugr2transpose(real_2d_array &a, const xparams _xparams = alglib::xdefault);
/*************************************************************************
This is a debug function intended for testing the ALGLIB interface
generator. Never use it in any real-life project.

Creates and returns an XDebugRecord1 structure:
* the integer and complex fields of Rec1 are set to 1 and 1+i
  respectively
* the array field of Rec1 is set to [2,3]

  -- ALGLIB --
     Copyright 27.05.2014 by Bochkanov Sergey
*************************************************************************/
void xdebugupdaterecord1(xdebugrecord1 &rec1, const xparams _xparams = alglib::xdefault);