All data, plots, and animations on this site are my own unless otherwise stated
Visual Proof of Triangulating a Sphere
In the animation, I demonstrate the concept of homeomorphism in topology. By repeatedly subdividing the faces of an icosahedron and normalizing the new vertices to lie on the unit sphere, I show how the surface of the icosahedron can be continuously deformed into the surface of a sphere. This process, known as geodesic subdivision, visually proves that the icosahedron and the sphere are topologically equivalent, highlighting a homeomorphism between their surfaces.
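A single subdivision step can be sketched in a few lines of Python. This is a minimal, illustrative version (NumPy only, function names of my own choosing), not the exact code behind the animation: each triangular face is split into four, and every new midpoint is pushed out to the unit sphere.

```python
import numpy as np

def subdivide(vertices, faces):
    """One geodesic subdivision step: split each triangle into four and
    project every new vertex back onto the unit sphere."""
    vertices = [np.asarray(v, dtype=float) for v in vertices]
    midpoint_cache = {}
    new_faces = []

    def midpoint(i, j):
        # Reuse midpoints shared by neighbouring faces.
        key = (min(i, j), max(i, j))
        if key not in midpoint_cache:
            m = (vertices[i] + vertices[j]) / 2.0
            m /= np.linalg.norm(m)            # normalize onto the unit sphere
            vertices.append(m)
            midpoint_cache[key] = len(vertices) - 1
        return midpoint_cache[key]

    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]

    return np.array(vertices), new_faces
```

Starting from the 12 vertices and 20 faces of an icosahedron and calling this repeatedly produces meshes that converge visually to the sphere.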
In a 2D plane, uniformly distributed points grow into expanding purple discs, intersecting to form an alpha complex. This construction captures the topological features of the dataset, illustrated through persistence diagrams and barcodes. A persistence diagram is a scatter plot that records the birth and death of topological features, such as connected components, loops, and voids, as the scale changes. A barcode represents the same features as horizontal bars, where each bar's length indicates the lifespan of a particular feature. As the discs' radii increase, the process reveals the key homological structures: connected components (0D), loops (1D), and voids (2D). In the barcode, 0D bars show the birth and merging of connected components, 1D bars track the formation and closure of loops, and 2D bars mark the emergence and filling of voids, elucidating the data's shape and connectivity in the plane. Of note is the formation of loops (blue), each of which is homeomorphic to a circle.
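The page does not name the library used, but one common way to reproduce this pipeline is with the GUDHI package, assumed here purely for illustration. The sketch below samples random 2D points, builds the alpha complex, and plots the persistence diagram and barcode.

```python
import numpy as np
import gudhi

# Sample points uniformly in the unit square (a stand-in for the animated data).
rng = np.random.default_rng(0)
points = rng.uniform(size=(200, 2))

# Build the alpha complex and compute its persistent homology.
alpha = gudhi.AlphaComplex(points=points)
simplex_tree = alpha.create_simplex_tree()
diagram = simplex_tree.persistence()   # list of (dimension, (birth, death))

# 0D bars: connected components; 1D bars: loops.
gudhi.plot_persistence_barcode(diagram)
gudhi.plot_persistence_diagram(diagram)
```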
Points uniformly distributed on the surface of a sphere grow into larger spheres, intersecting and forming an alpha complex. This process again captures the topological features of the data, visualized through persistence diagrams and barcodes. As the radii of the growing spheres increase, the algorithm reveals the underlying homological structures: connected components (0D), loops (1D), and voids (2D). Of note is the void enclosed once all the spheres merge into a hollow surface, which is homeomorphic to a sphere (shown in green).
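The same GUDHI-based sketch (again, an assumed choice of library) carries over to the spherical case; the hollow interior appears as a long-lived 2-dimensional bar.

```python
import numpy as np
import gudhi

# Uniform points on the unit sphere: normalize standard-normal samples.
rng = np.random.default_rng(1)
points = rng.normal(size=(300, 3))
points /= np.linalg.norm(points, axis=1, keepdims=True)

simplex_tree = gudhi.AlphaComplex(points=points).create_simplex_tree()
diagram = simplex_tree.persistence()

# The enclosed void shows up as one long-lived 2-dimensional feature.
voids = [(birth, death) for dim, (birth, death) in diagram if dim == 2]
print(voids)
```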
Work in Progress
Work in Progress
Work in Progress
Unoptimized Reservoir Computer Prediction of the Lorenz System
An unoptimized reservoir computer was trained on the first 70% of the 20,000 timesteps generated from the Lorenz system and tested on the remaining 30%. The forecast horizon was defined as the first point where the prediction deviated from the actual system by 25%; it is marked above by a red point and as a line on each axis subplot to the left. The forecast horizon is highly sensitive both to this definition and to the hyperparameters used to configure the reservoir. The work here and elsewhere with the Shaheen lab builds on base code by G. Espitia[^1] at Medium and M. Francesco at ReservoirComputing.jl[^2].
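A forecast horizon of this kind can be computed in a few lines. The exact deviation metric used in the plot above is not spelled out, so the normalization below (relative to the attractor's overall size) is one reasonable assumption rather than the definitive implementation.

```python
import numpy as np

def forecast_horizon(actual, predicted, tolerance=0.25):
    """Index of the first test step where the prediction deviates from the
    true trajectory by more than `tolerance` (relative error).

    `actual` and `predicted` are (timesteps, 3) arrays of Lorenz x, y, z.
    Normalizing by the attractor's RMS size is one plausible choice; the
    animation's exact deviation metric may differ.
    """
    scale = np.sqrt(np.mean(np.sum(actual**2, axis=1)))
    error = np.linalg.norm(predicted - actual, axis=1) / scale
    exceeded = np.flatnonzero(error > tolerance)
    return exceeded[0] if exceeded.size else len(actual)
```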
Put simply, reservoir computing (RC) relies on nonlinear interactions between nodes (one can think of them as neurons) to forecast the behavior of complex systems. These nodes live in “the reservoir,” a pool of randomly connected neurons whose connections remain fixed throughout the calculation. This contrasts with traditional artificial neural networks (ANNs), where the nodes are organized into layers (the input, hidden, and output layers, respectively).
In all neural networks, layers are trained by adjusting a set of weights to fit a particular problem, much as biological neurons are primed to fire. Whereas the activation potential of a biological neuron depends on the concentrations of sodium and potassium ions, an artificial neuron fires in response to its weighted inputs. Once the weighted input reaches a certain threshold, the neuron “fires,”[^3] passing the transformed data on to the next layer. Traditional ANNs can have countless hidden layers.
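As a toy illustration of the weighted-sum-and-threshold idea (the weights and threshold here are made up, not taken from any real model):

```python
import numpy as np

def neuron(inputs, weights, threshold=0.5):
    """Fire (output 1) when the weighted input crosses the threshold."""
    return 1.0 if np.dot(inputs, weights) >= threshold else 0.0

print(neuron(np.array([0.2, 0.9]), np.array([0.4, 0.6])))  # 0.62 >= 0.5 -> fires
```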
The reservoir takes the place of the hidden layers of a traditional ANN. Instead of many separate layers, which can be computationally expensive, an RC consists of a single large layer of randomly placed nodes with varying connectivities. The nodes in this layer are not trained (they remain fixed), though the output layer is trained extensively. This significantly reduces hardware strain, lowering power consumption while increasing computational efficiency[^4].
This nonlinear communication between reservoir nodes gives reservoir computing an advantage not only in efficiency but also in its ability to predict the nonlinear dynamics of various systems[^5], among them the Lorenz, Oregonator, and Field & Györgyi systems. By adjusting parameters such as the activation function, the ridge-regression regularization, the reservoir size, the reservoir's spectral radius, the input scaling, and others, one can tune the reservoir to better predict a chaotic system.
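To make the structure concrete, here is a minimal echo-state-network sketch in Python. It is not the ReservoirComputing.jl or Espitia code this project builds on; the function names and hyperparameter values are illustrative placeholders, chosen only to show where the parameters listed above enter.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hyperparameters mentioned above (placeholder values, not tuned).
n_reservoir   = 300     # reservoir size
spectral_rad  = 0.9     # spectral radius of the reservoir matrix
input_scaling = 0.1
ridge_lambda  = 1e-6    # ridge-regression regularization

def make_reservoir(n_inputs):
    """Fixed random weights: these are never trained."""
    W_in = input_scaling * rng.uniform(-1, 1, (n_reservoir, n_inputs))
    W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
    W *= spectral_rad / np.max(np.abs(np.linalg.eigvals(W)))  # rescale radius
    return W_in, W

def run_reservoir(u, W_in, W):
    """Drive the reservoir with an input sequence u (timesteps x n_inputs)."""
    states = np.zeros((len(u), n_reservoir))
    r = np.zeros(n_reservoir)
    for t, u_t in enumerate(u):
        r = np.tanh(W_in @ u_t + W @ r)   # nonlinear activation
        states[t] = r
    return states

def train_readout(states, targets):
    """Only the linear output layer is trained, via ridge regression."""
    A = states.T @ states + ridge_lambda * np.eye(n_reservoir)
    return np.linalg.solve(A, states.T @ targets)   # W_out
```

Training touches only `train_readout`; the reservoir weights stay fixed, which is exactly the efficiency advantage described above.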
Work in Progress
Work in Progress
Espitia, G. (2023). Python Guide: Building a Reservoir Computer from Scratch. Retrieved from https://blog.stackademic.com/python-guide-building-a-reservoir-computer-from-scratch-c166d63038dc.
Francesco, M. (2023). Reservoir Computing: Basic Tutorial on Lorenz System. Retrieved from https://docs.sciml.ai/ReservoirComputing/stable/esn_tutorials/lorenz_basic/.
ScienceDirect. (n.d.). Artificial Neural Network. Retrieved from https://www.sciencedirect.com/topics/mathematics/artificial-neural-network
Frontiers in Computational Neuroscience. (2021, February 5). Computational Efficiency of a Modular Reservoir Network for Image Recognition. Frontiers in Computational Neuroscience, 15. https://doi.org/10.3389/fncom.2021.594337
Tanaka, G., Yamane, T., Héroux, J. B., Nakane, R., Kanazawa, N., Takeda, S., Numata, H., Nakano, D., & Hirose, A. (2019). Recent advances in physical reservoir computing: A review. Neural Networks, 115, 100-123. https://doi.org/10.1016/j.neunet.2019.03.005
Work in Progress
Work in Progress
Work in Progress