Reservoir computing (RC) is a machine learning method especially well suited to solving physical problems, using an internal dynamical system known as a 'reservoir'. Many systems are suitable for use as the internal reservoir. A common choice is an echo state network (ESN), a network with recurrent connections that gives the RC a memory, which it uses to efficiently solve many time-domain problems such as forecasting chaotic systems and inferring hidden states. However, constructing an ESN involves a large number of poorly understood meta-parameters, and the properties an ESN must have to solve these tasks well are largely unknown.
In this dissertation, I explore which parts of an RC are truly necessary. I build ESNs that perform well at system forecasting despite an extremely simple internal network structure, with no recurrent connections at all, breaking one of the most common rules of ESN design. These simple reservoirs indicate that the role of the reservoir is only to remember a finite number of time-delays of the RC's input; a complicated network can provide this memory, but in many cases a simple one suffices.
I then build upon a recent proof, developed with my collaborators, of the equivalence between a specific ESN construction and the nonlinear vector autoregression (NVAR) method. The NVAR is an RC boiled down to its most essential components, taking the necessary time-delay taps directly rather than relying on an internal dynamic reservoir. I demonstrate these RCs-without-reservoirs on a variety of classical RC problems, showing that in many cases an NVAR performs as well as or better than an RC despite being the simpler method. I conclude with an example problem that highlights a remaining unsolved issue in the application of NVARs, and look ahead to a possible future in which NVARs supplant RCs.
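To make the idea concrete, the following is a minimal sketch of an NVAR forecaster; the function name, the logistic-map example system, and all parameter values here are illustrative assumptions, not the specific construction used in the dissertation. The reservoir is replaced by explicit time-delay taps of the input, polynomial features of those taps, and a linear readout fit by ridge regression:

```python
import numpy as np

def nvar_features(x, k=2, s=1):
    """Build NVAR features from a scalar time series x:
    k time-delay taps spaced s steps apart (the linear part),
    all unique pairwise products of the taps (the quadratic part),
    and a constant term. (Illustrative choice of feature set.)"""
    n = len(x) - (k - 1) * s
    lin = np.column_stack([x[i * s : i * s + n] for i in range(k)])
    quad = np.column_stack([lin[:, i] * lin[:, j]
                            for i in range(k) for j in range(i, k)])
    return np.column_stack([np.ones(n), lin, quad])

# Toy training data: a chaotic logistic map, x_{t+1} = 3.9 x_t (1 - x_t).
x = np.empty(1000)
x[0] = 0.4
for t in range(999):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])

# One-step-ahead forecast: features from delayed inputs, linear readout.
k, s = 2, 1
F = nvar_features(x[:-1], k, s)      # rows use taps x[t], x[t+1]
y = x[(k - 1) * s + 1:]              # target is the next value, x[t+2]
ridge = 1e-8                         # small regularization for the readout
W = np.linalg.solve(F.T @ F + ridge * np.eye(F.shape[1]), F.T @ y)

pred = F @ W
print("max one-step error:", np.max(np.abs(pred - y)))
```

Because the logistic map is exactly quadratic in its most recent value, the quadratic features let the linear readout represent the dynamics essentially exactly, so the one-step error is tiny; the same pipeline applies unchanged to harder systems, where the choice of taps and feature order matters.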