Commit 36f129e

deploy: a82e3e9

NanneD committed Aug 14, 2024
Showing 60 changed files with 5,113 additions and 0 deletions.
4 changes: 4 additions & 0 deletions .buildinfo
@@ -0,0 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: 02ef5ebb17889f7b71c7616990e0868f
tags: 645f666f9bcd5a90fca523b33c5a78b7
Binary file added .doctrees/environment.pickle
Binary file not shown.
Binary file added .doctrees/gradientdescent.doctree
Binary file not shown.
Binary file added .doctrees/index.doctree
Binary file not shown.
Binary file added .doctrees/spsa.doctree
Binary file not shown.
Empty file added .nojekyll
Empty file.
21 changes: 21 additions & 0 deletions _images/GD_logo.svg
26 changes: 26 additions & 0 deletions _images/SPSA_logo.svg
35 changes: 35 additions & 0 deletions _sources/gradientdescent.rst.txt
@@ -0,0 +1,35 @@
.. _gradientdescent:

Gradient Descent
================

What is gradient descent?
-------------------------

Gradient descent is an iterative optimization algorithm that can be used to find a (local) minimum of a function :math:`f(\theta)`. At each iteration, it takes a step in the direction of the *negative* gradient :math:`-\nabla f(\theta)`. The algorithm can also be used to find a maximum; in that case, it takes a step in the direction of the *positive* gradient instead (this variant is known as gradient ascent).

How does gradient descent work?
-------------------------------

The gradient descent algorithm is as follows:

.. topic:: Gradient Descent Algorithm

**Input**: Choose a starting value :math:`\theta_0` and a learning rate :math:`\epsilon`, and set :math:`i=0`.

**Algorithm**:

1. Calculate :math:`\nabla f(\theta_i)`.
2. Update :math:`\theta_{i+1}=\theta_i-\epsilon \nabla f(\theta_i)`.
3. (a) Stop if :math:`i` is large enough or :math:`|\theta_{i+1}-\theta_{i}|` is small enough.
(b) Otherwise, update :math:`i` to :math:`i+1` and go back to step (1).
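
As a minimal illustration, the following NumPy sketch implements the loop above (the quadratic example objective, parameter values and stopping thresholds are assumptions chosen for the example, not prescribed by the algorithm):

.. code-block:: python

   import numpy as np

   def gradient_descent(grad_f, theta_0, epsilon=0.1, tol=1e-8, max_iter=1000):
       """Minimize f by repeatedly stepping against its gradient grad_f."""
       theta = np.asarray(theta_0, dtype=float)
       for i in range(max_iter):
           theta_new = theta - epsilon * grad_f(theta)   # step 2
           if np.linalg.norm(theta_new - theta) < tol:   # step 3(a)
               return theta_new
           theta = theta_new                             # step 3(b)
       return theta

   # Example: f(theta) = ||theta||^2 has gradient 2 * theta and minimum at 0.
   print(gradient_descent(lambda t: 2 * t, theta_0=[3.0, -2.0]))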



Binder
------

If you want to experiment with the gradient descent algorithm, you can use the `provided Jupyter Notebook <https://github.com/NanneD/SOLT/blob/main/notebooks/GradientDescent.ipynb>`_. You can run the notebook directly in your browser by using Binder; simply click on the following button to open the notebook:

.. image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/NanneD/SOLT/HEAD?labpath=notebooks%2FGradientDescent.ipynb
44 changes: 44 additions & 0 deletions _sources/index.rst.txt
@@ -0,0 +1,44 @@
:html_theme.sidebar_secondary.remove:

SOLT
====

| **Release** |release|
| **Date** |today|
SOLT (Stochastic Optimization Learning Tool) is a learning tool about gradient descent and stochastic optimization, and in particular the simultaneous perturbation stochastic approximation (SPSA) algorithm. This website contains information about gradient descent, SPSA and N-dimensional SPSA.

You can experiment with the various algorithms through Jupyter Notebooks that contain implementations of the algorithms. You can directly open the notebooks in a browser with Binder by clicking on the following button:

.. image:: https://mybinder.org/badge_logo.svg
:align: center
:target: https://mybinder.org/v2/gh/NanneD/SOLT/HEAD

| **License**
| This project is licensed under the GNU Affero General Public License v3 (AGPL-3.0). For more information, please see the `GitHub repository <https://github.com/NanneD/SOLT>`_.
| **Contributing**
| Please see the README file on the `GitHub repository <https://github.com/NanneD/SOLT>`_ for information on how to contribute.
.. grid:: 1 2 2 2

.. grid-item-card::
:link: gradientdescent
:width: 75%
:link-type: ref
:link-alt: Gradient Descent
:img-background: _static/GD_logo.svg

.. grid-item-card::
:link: spsa
:width: 75%
:link-type: ref
:link-alt: SPSA
:img-background: _static/SPSA_logo.svg


.. toctree::
:hidden:

gradientdescent
spsa
46 changes: 46 additions & 0 deletions _sources/spsa.rst.txt
@@ -0,0 +1,46 @@
.. _spsa:

SPSA
====

What is SPSA?
-------------

Simultaneous Perturbation Stochastic Approximation (SPSA) is an algorithm developed by `Spall <https://www.jhuapl.edu/spsa/>`_. It can be used if noisy and unbiased measurements of the gradient :math:`g(\boldsymbol{\theta})` are available. It can also be used if only (noisy) measurements of the loss function :math:`f(\boldsymbol{\theta})` are available.

The advantage of SPSA compared to other algorithms is that only two loss measurements are required to generate an update, regardless of the dimension of :math:`\boldsymbol{\theta}`. This makes SPSA scale well to high-dimensional problems.

How does SPSA work?
-------------------

Let :math:`\eta_i \in (0, \infty)` be the perturbation size and :math:`\Delta_i` be a random vector such that :math:`\{\Delta_i\}` is an iid sequence with :math:`\Delta_i(j)` and :math:`1/\Delta_i(j)` bounded and symmetric around zero. The components :math:`\Delta_i(j)` are mutually independent. In practice, often the following binary random variable is used for the components of :math:`\Delta_i`:

.. math::
\mathbb{P}(\Delta_i(j) = -1) = \frac{1}{2} = \mathbb{P}(\Delta_i(j) = 1),
for all :math:`i` and :math:`j`.
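
As a small sketch, such a perturbation vector can be drawn with NumPy as follows (the dimension ``d`` and the seed are assumptions for the example):

.. code-block:: python

   import numpy as np

   rng = np.random.default_rng(0)
   d = 3                                    # dimension of theta (example value)
   delta = rng.choice([-1.0, 1.0], size=d)  # P(-1) = P(+1) = 1/2 per component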


The SPSA algorithm is then as follows:

.. topic:: SPSA Algorithm

**Input**: Choose a starting value :math:`\theta_0` and a learning rate :math:`\epsilon`, and set :math:`i = 0`.

**Algorithm**:

1. Calculate, for each component :math:`j`, :math:`(g_i^{SPSA}(\theta_i))(j) = \frac{f(\theta_i+\eta_i\Delta_i)-f(\theta_i-\eta_i\Delta_i)}{2\eta_i\Delta_i(j)}`.
2. Update :math:`\theta_{i+1} = \theta_{i} - \epsilon \, g_i^{SPSA}(\theta_i)`.
3. (a) Stop if :math:`i` is large enough or :math:`|\theta_{i+1}-\theta_{i}|` is small enough.
(b) Otherwise, update :math:`i` to :math:`i+1` and go back to step (1).
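
A minimal NumPy sketch of this loop is given below (the noisy example objective, parameter values and seeds are assumptions for illustration; with the constant gains used here the iterates hover near the minimum, whereas Spall's decaying gain sequences give convergence):

.. code-block:: python

   import numpy as np

   def spsa(f, theta_0, epsilon=0.1, eta=0.1, tol=1e-6, max_iter=1000, seed=0):
       """Minimize f using two (possibly noisy) loss evaluations per iteration."""
       rng = np.random.default_rng(seed)
       theta = np.asarray(theta_0, dtype=float)
       for i in range(max_iter):
           delta = rng.choice([-1.0, 1.0], size=theta.size)  # Rademacher perturbation
           # Two loss measurements, regardless of the dimension of theta:
           f_plus = f(theta + eta * delta)
           f_minus = f(theta - eta * delta)
           g_hat = (f_plus - f_minus) / (2.0 * eta * delta)  # componentwise estimator
           theta_new = theta - epsilon * g_hat               # step 2
           if np.linalg.norm(theta_new - theta) < tol:       # step 3(a)
               return theta_new
           theta = theta_new                                 # step 3(b)
       return theta

   # Example: noisy measurements of f(theta) = ||theta||^2, minimum at 0.
   rng_noise = np.random.default_rng(1)
   noisy_f = lambda t: float(np.sum(t**2) + rng_noise.normal(scale=0.01))
   print(spsa(noisy_f, theta_0=[2.0, -1.0, 0.5]))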

Binder
------

If you want to experiment with the SPSA algorithm, you can use the `provided Jupyter Notebook <https://github.com/NanneD/SOLT/blob/main/notebooks/SPSA.ipynb>`_. If you want to experiment with the N-dimensional version of the algorithm, then you can use `this Jupyter Notebook <https://github.com/NanneD/SOLT/blob/main/notebooks/SPSA-ND.ipynb>`_.

You can also run the notebooks directly in your browser by using Binder; simply click on the following button to open the SPSA notebook:

.. image:: https://mybinder.org/badge_logo.svg
:target: https://mybinder.org/v2/gh/NanneD/SOLT/HEAD?labpath=notebooks%2FSPSA.ipynb
101 changes: 101 additions & 0 deletions _sphinx_design_static/design-tabs.js
@@ -0,0 +1,101 @@
// @ts-check

// Extra JS capability for selected tabs to be synced
// The selection is stored in local storage so that it persists across page loads.

/**
* @type {Record<string, HTMLElement[]>}
*/
let sd_id_to_elements = {};
const storageKeyPrefix = "sphinx-design-tab-id-";

/**
* Create a key for a tab element.
* @param {HTMLElement} el - The tab element.
* @returns {[string, string, string] | null} - The key.
*
*/
function create_key(el) {
let syncId = el.getAttribute("data-sync-id");
let syncGroup = el.getAttribute("data-sync-group");
if (!syncId || !syncGroup) return null;
return [syncGroup, syncId, syncGroup + "--" + syncId];
}

/**
* Initialize the tab selection.
*
*/
function ready() {
// Find all tabs with sync data

/** @type {string[]} */
let groups = [];

document.querySelectorAll(".sd-tab-label").forEach((label) => {
if (label instanceof HTMLElement) {
let data = create_key(label);
if (data) {
let [group, id, key] = data;

// add click event listener
// @ts-ignore
label.onclick = onSDLabelClick;

// store map of key to elements
if (!sd_id_to_elements[key]) {
sd_id_to_elements[key] = [];
}
sd_id_to_elements[key].push(label);

if (groups.indexOf(group) === -1) {
groups.push(group);
// Check if a specific tab has been selected via URL parameter
const tabParam = new URLSearchParams(window.location.search).get(
group
);
if (tabParam) {
console.log(
"sphinx-design: Selecting tab id for group '" +
group +
"' from URL parameter: " +
tabParam
);
window.sessionStorage.setItem(storageKeyPrefix + group, tabParam);
}
}

// Check if a specific tab has been selected previously
let previousId = window.sessionStorage.getItem(
storageKeyPrefix + group
);
if (previousId === id) {
// console.log(
// "sphinx-design: Selecting tab from session storage: " + id
// );
// @ts-ignore
label.previousElementSibling.checked = true;
}
}
}
});
}

/**
* Activate other tabs with the same sync id.
*
* @this {HTMLElement} - The element that was clicked.
*/
function onSDLabelClick() {
let data = create_key(this);
if (!data) return;
let [group, id, key] = data;
for (const label of sd_id_to_elements[key]) {
if (label === this) continue;
// @ts-ignore
label.previousElementSibling.checked = true;
}
window.sessionStorage.setItem(storageKeyPrefix + group, id);
}

document.addEventListener("DOMContentLoaded", ready, false);
1 change: 1 addition & 0 deletions _sphinx_design_static/sphinx-design.min.css

Large diffs are not rendered by default.

21 changes: 21 additions & 0 deletions _static/GD_logo.svg
14 changes: 14 additions & 0 deletions _static/SOLT.svg
7 changes: 7 additions & 0 deletions _static/SOLT_favicon.svg