From Firing Rates to Thought Flows: A Differential Equation Approach to the Brain
Table of Contents
- Introduction
- Neurons and the Single-Compartment Model
- The Hodgkin-Huxley Model: A Deeper Dive
- Simplified Neuron Models
- From Neurons to Populations: The Wilson-Cowan Equations
- Higher-Level Brain Dynamics: Neural Fields
- Practical Simulations and Python Examples
- Advanced Directions and Future Outlook
- Conclusion
Introduction
One of the central challenges in neuroscience is understanding how patterns of electrical activity in neural circuits translate to cognition, perception, and behavior. A powerful approach is to treat neural activity as a dynamical system, governed by differential equations that describe how neurons (and collections of neurons) change their states over time. By analyzing these equations, we get insights into how the brain might learn, form memories, or even generate conscious thought.
In this post, we begin with the basics of neuron models and gradually escalate to population-level descriptions. Along the way, you’ll see that from a mathematical standpoint, going from single-neuron firing rates to large-scale “thought flows” is a systematic leap from local to global dynamics.
This blog walks you through:
- Single neuron models and ion channels
- The Hodgkin-Huxley equations and their simplified versions
- Firing-rate and population-level models like the Wilson-Cowan equations
- Extensions to continuous neural fields
- Practical Python code snippets for simulation
Let’s start by reminding ourselves of the fundamental building blocks: neurons.
Neurons and the Single-Compartment Model
A neuron’s primary function is to receive, process, and transmit electrical signals. The neuron’s membrane potential is typically the main variable of interest. Often, the simplest way to describe this membrane potential is via a single-compartment model. In essence, we treat the neuron as an electrically active “circuit element” with:
- Capacitance ( C )
- Leak conductance ( g_L )
- Leak reversal potential ( E_L )
The voltage across the membrane at time ( t ) is ( V(t) ). The classic single-compartment model often starts with:
[ C \frac{dV}{dt} = -g_L (V - E_L) + I_{\text{ext}}, ]
where ( I_{\text{ext}} ) is the external current injected into the neuron.
Intuition Behind This Model
- Capacitor: The neuron’s lipid bilayer acts like a capacitor, storing and releasing charge.
- Resistor: Ion channels (leak channels in this simplest form) act as resistors.
- External Current: Could be synaptic input from other neurons or an experimental current injection.
In this basic model, the neuron simply integrates incoming current. With only a leak conductance, the voltage does not grow without bound: it relaxes toward the steady state ( E_L + I_{\text{ext}}/g_L ) with time constant ( \tau = C/g_L ). In more realistic models, crossing a threshold along the way triggers an action potential; here there is no mechanism to spike or reset the voltage.
Despite its simplicity, this model sets the stage: the membrane voltage is our central variable, and the different types of ion channels (when added) shape the dynamics.
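A few lines of NumPy make this concrete. The parameter values below are illustrative choices, not canonical ones; the point is that with only a leak current, the voltage relaxes exponentially toward ( E_L + I_{\text{ext}}/g_L ):

```python
import numpy as np

# Passive single-compartment membrane; parameter values are illustrative.
C, gL, EL = 1.0, 0.1, -65.0   # uF/cm^2, mS/cm^2, mV
I_ext = 2.0                   # uA/cm^2
dt, t_max = 0.1, 200.0        # ms

V = EL
trace = []
for _ in np.arange(0, t_max, dt):
    V += dt * (-gL * (V - EL) + I_ext) / C
    trace.append(V)

# The voltage relaxes toward E_L + I_ext/g_L = -45 mV with time constant
# tau = C/g_L = 10 ms; it never spikes, because nothing resets it.
print(round(trace[-1], 2))  # -45.0
```

After twenty time constants, the simulated voltage sits at the predicted steady state, which is exactly the behavior the equation above describes.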
The Hodgkin-Huxley Model: A Deeper Dive
The celebrated Hodgkin-Huxley model provides a more physiologically accurate description of how neurons generate action potentials. Rather than a single leak current, we include multiple ionic currents:
[ C \frac{dV}{dt} = -I_{\text{Na}} - I_{\text{K}} - I_{\text{L}} + I_{\text{ext}}, ]
where
- ( I_{\text{Na}} = g_{\text{Na}} m^3 h (V - E_{\text{Na}}) ) (Sodium current)
- ( I_{\text{K}} = g_{\text{K}} n^4 (V - E_{\text{K}}) ) (Potassium current)
- ( I_{\text{L}} = g_{\text{L}} (V - E_{\text{L}}) ) (Leak current)
The variables ( m ), ( h ), and ( n ) control the gating of sodium and potassium channels:
[ \frac{dm}{dt} = \alpha_m(V) (1 - m) - \beta_m(V)\, m, ]
[ \frac{dh}{dt} = \alpha_h(V) (1 - h) - \beta_h(V)\, h, ]
[ \frac{dn}{dt} = \alpha_n(V) (1 - n) - \beta_n(V)\, n. ]
Here, ( \alpha ) and ( \beta ) are voltage-dependent rate functions, derived from experimental measurements.
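To make this concrete, here is a minimal Euler integration of the full system. The rate functions and parameters below are the standard squid-axon values quoted in most textbooks; treat the specific numbers as an illustrative assumption rather than the only valid fit:

```python
import numpy as np

# Standard Hodgkin-Huxley rate functions (V in mV, rates in 1/ms).
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3    # uF/cm^2, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.387        # mV
dt, t_max, I_ext = 0.01, 50.0, 10.0       # ms, ms, uA/cm^2

V, m, h, n = -65.0, 0.05, 0.6, 0.32       # approximate resting state
spikes, above = 0, False
for t in np.arange(0, t_max, dt):
    INa = gNa * m**3 * h * (V - ENa)
    IK = gK * n**4 * (V - EK)
    IL = gL * (V - EL)
    V += dt * (-INa - IK - IL + I_ext) / C
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    # Count upward crossings of 0 mV as spikes
    if V > 0 and not above:
        spikes += 1
    above = V > 0

print(spikes)  # sustained suprathreshold current yields repetitive firing
```

Note that no reset rule appears anywhere: the spike upswing, downswing, and refractory period all emerge from the gating dynamics themselves.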
Why It Matters
- Spiking Behavior: The interplay of sodium and potassium currents explains the rapid upswing and downswing of the spike.
- Threshold and Refractory Period: Voltage-gated channels introduce thresholds and refractory periods, phenomena we observe in real neurons.
- Physiological Basis: The model ties closely to experimental data, providing a template for understanding excitability.
Although the Hodgkin-Huxley framework gives deep insight, it can be computationally expensive for large networks, leading many researchers to look for simplifications.
Simplified Neuron Models
When simulating large networks with thousands to millions of neurons, running a full Hodgkin-Huxley model becomes impractical. Instead, neuroscientists use simplified differential equations that preserve key features (like spiking thresholds) but save computational resources.
1. The Integrate-and-Fire Model
One of the simplest is the “leaky integrate-and-fire” (LIF) model:
[ C \frac{dV}{dt} = -g_L (V - E_L) + I_{\text{syn}}(t), ]
with a rule stating that when ( V ) reaches a threshold ( V_{\text{th}} ), the neuron fires a spike, and ( V ) is reset to ( V_{\text{reset}} ).
Key Benefits
- Computationally simple
- Retains a notion of threshold
- Can be extended to include different synaptic currents
2. The Izhikevich Model
The Izhikevich model is a two-variable system balancing simplicity with a remarkable range of firing patterns:
[ \frac{dV}{dt} = 0.04V^2 + 5V + 140 - u + I, ] [ \frac{du}{dt} = a(bV - u). ]
When ( V ) reaches ( 30 \, \text{mV} ) (a spike-peak cutoff rather than a firing threshold), the model registers a spike, after which: [ V \leftarrow c, \quad u \leftarrow u + d. ]
This model captures bursting, chattering, and other complex patterns observed in real neurons, all with fewer computations than Hodgkin-Huxley.
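A short simulation shows the model in action. The ( a, b, c, d ) values below are the commonly quoted “regular spiking” set; different choices of these four numbers produce bursting, chattering, and the other firing patterns:

```python
import numpy as np

# Izhikevich model with the commonly quoted "regular spiking" parameters.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
dt, t_max, I = 0.25, 1000.0, 10.0   # ms, ms, input current

V, u = -65.0, b * (-65.0)
spike_times = []
for t in np.arange(0, t_max, dt):
    V += dt * (0.04 * V**2 + 5 * V + 140 - u + I)
    u += dt * a * (b * V - u)
    if V >= 30.0:                   # spike detected: reset V, bump u
        spike_times.append(t)
        V, u = c, u + d

print(len(spike_times))  # tonic firing under constant input
```

Swapping in, for example, ( c = -50 ) and ( d = 2 ) turns the same two equations into a burster, which is exactly the flexibility that makes this model popular for large-network simulations.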
From Neurons to Populations: The Wilson-Cowan Equations
A single neuron is just the tip of the iceberg. In real brains, behavior emerges from large ensembles of interacting neurons. Describing the average activity of such ensembles is the basis of firing-rate models.
Wilson-Cowan Basic Form
Wilson and Cowan proposed that instead of following the detailed spikes, we can track the average firing rate ( r ) of populations of excitatory (E) and inhibitory (I) neurons:
[ \frac{dE}{dt} = -E + S \big( w_{EE}E - w_{EI}I + I_E \big), ] [ \frac{dI}{dt} = -I + S \big( w_{IE}E - w_{II}I + I_I \big). ]
Here,
- ( E ) and ( I ) range from 0 to 1, representing normalized firing rates of excitatory and inhibitory populations.
- ( w_{XY} ) is the strength of the connection from population ( Y ) onto population ( X ).
- ( S(\cdot) ) is a sigmoidal function representing how input current is converted to firing rate.
- ( I_E ) and ( I_I ) are external inputs to each population.
Interpretation
- Excitatory population: Increases the net firing rate when activated.
- Inhibitory population: Reduces the net firing rate, balancing excitatory drive.
- Global Behavior: By analyzing steady states or bifurcations of these equations, researchers can predict the conditions under which the population might exhibit stable firing, oscillations, or chaos.
Higher-Level Brain Dynamics: Neural Fields
The jump from local populations to the entire cortex introduces the concept of neural fields, sometimes called “continuous networks.” Here, the brain is modeled as a continuous sheet, and the activity at each point on this sheet depends on nearby activity.
Typical Neural Field Equation
A simple neural field equation might look like:
[ \frac{\partial u(x,t)}{\partial t} = -u(x,t) + \int_{\Omega} w(x,y)\, S\big(u(y,t)\big)\, dy + I(x,t). ]
- ( u(x,t) ) is the firing rate at position ( x ).
- ( w(x,y) ) is a kernel describing connectivity between locations ( x ) and ( y ).
- ( S ) is the output (sigmoidal) function.
- ( I(x,t) ) is external input (e.g., a stimulus).
Neural field models capture phenomena like traveling waves of activity, persistent states, and pattern formation that might underlie perception and memory.
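One way to see these dynamics is to discretize the field on a ring, so the integral becomes a matrix-vector product. The kernel widths, sigmoid gain, and stimulus below are illustrative assumptions, not canonical values; with local excitation and broader inhibition, a localized input carves out a bump of activity:

```python
import numpy as np

# 1D neural field on a ring; the integral over Omega becomes (w @ S(u)) * dx.
# Kernel widths, gain, and input are illustrative assumptions.
N = 128
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
dx = 2 * np.pi / N

# Distance on the ring, then a "Mexican hat" kernel:
# short-range excitation minus longer-range inhibition.
diff = np.abs(x[:, None] - x[None, :])
dist = np.minimum(diff, 2 * np.pi - diff)
w = 5.0 * np.exp(-dist**2 / 0.5) - 3.0 * np.exp(-dist**2 / 2.0)

def S(u):
    """Sigmoidal output function."""
    return 1.0 / (1.0 + np.exp(-5.0 * (u - 0.5)))

u = np.zeros(N)
I_stim = np.exp(-dist[:, N // 2]**2 / 0.1)  # localized input centered at x = pi

dt = 0.01
for _ in range(3000):
    u += dt * (-u + (w @ S(u)) * dx + I_stim)

# The steady pattern is a localized bump of activity around the stimulus,
# sharpened by lateral inhibition.
print(int(np.argmax(u)))
```

The same discretization scheme extends directly to 2D sheets, where these bumps and their traveling-wave cousins become the objects of interest.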
Significance
- Spatial Continuity: Allows simulation and analysis of how activity “waves” propagate across the cortex.
- Pattern Analysis: Researchers can explore how local excitation and long-range inhibition shape the emergent patterns of neural activity.
- Bridging Imaging and Physiology: These field models can relate to EEG and fMRI data, bridging micro- and macro-level observations.
Practical Simulations and Python Examples
Let’s walk through a simple Python snippet demonstrating a minimal model—first for a single LIF neuron, then for a small Wilson-Cowan system.
1. Simulating a Leaky Integrate-and-Fire Neuron
Below is a basic Python script using Euler’s method. In practice, you might use more sophisticated integrators (e.g., odeint from SciPy), but this gives the general idea.
```python
import numpy as np
import matplotlib.pyplot as plt

# Parameters
dt = 0.1          # time step (ms)
t_max = 200       # total time (ms)
time = np.arange(0, t_max, dt)

C = 1.0           # membrane capacitance (uF/cm^2)
gL = 0.1          # leak conductance (mS/cm^2)
EL = -65.0        # leak reversal potential (mV)
V_th = -50.0      # spiking threshold (mV)
V_reset = -70.0   # reset potential (mV)
I_ext = 2.0       # external current (uA/cm^2)

# Initialize
V = np.zeros_like(time)
V[0] = -70.0
spike_times = []

for i in range(1, len(time)):
    dV = (-gL * (V[i-1] - EL) + I_ext) / C
    V[i] = V[i-1] + dV * dt

    # Check for spike
    if V[i] >= V_th:
        V[i] = V_reset
        spike_times.append(time[i])

plt.figure()
plt.plot(time, V, label='Membrane Voltage')
plt.scatter(spike_times, [0]*len(spike_times), color='red', marker='x', label='Spikes')
plt.xlabel('Time (ms)')
plt.ylabel('Voltage (mV)')
plt.title('Leaky Integrate-and-Fire Simulation')
plt.legend()
plt.show()
```

In this simulation, the neuron’s membrane voltage integrates the external current until it crosses ( V_{\text{th}} ). When it does, we reset it to mimic the spike.
2. Small Wilson-Cowan Network
We can also implement a small network with an excitatory (E) and inhibitory (I) population:
```python
import numpy as np
import matplotlib.pyplot as plt

def S(x, theta=1.0):
    """Sigmoid function."""
    return 1.0 / (1.0 + np.exp(-x/theta))

dt = 0.01
t_max = 50
time = np.arange(0, t_max, dt)

# Parameters (inhibitory weights carry their sign here,
# so they enter the sums with a plus)
wEE, wEI = 10.0, -6.0
wIE, wII = 10.0, -6.0
I_E, I_I = 1.5, 1.0  # External inputs

E_hist = []
I_hist = []

# Initial conditions
E, I = 0.1, 0.1

for _ in time:
    dE = -E + S(wEE*E + wEI*I + I_E)
    dI = -I + S(wIE*E + wII*I + I_I)
    E += dE * dt
    I += dI * dt

    E_hist.append(E)
    I_hist.append(I)

plt.plot(time, E_hist, label='E population')
plt.plot(time, I_hist, label='I population')
plt.xlabel('Time')
plt.ylabel('Activity')
plt.title('Wilson-Cowan Model Simulation')
plt.legend()
plt.show()
```

Notice how this code directly implements the Wilson-Cowan equations’ simple form, illustrating how the populations settle into a steady state or exhibit low-amplitude oscillations, depending on parameter choices.
Advanced Directions and Future Outlook
Differential equations form the backbone of many models of neural computation. But capturing cognition in all its complexity requires bridging multiple scales and including more biological detail—or sometimes more abstraction—depending on the question at hand. Some advanced directions include:
1. Detailed Biophysical Networks
   - Multi-compartmental models that simulate dendritic trees and localized synaptic inputs.
   - Realistic conduction delays, short-term synaptic plasticity, and more sophisticated channel types.
2. Spiking Network Theory and Synchronization
   - Large-scale spiking networks can exhibit unique behaviors like synchronized oscillations, chaotic dynamics, or traveling waves.
   - Tools like mean-field approximations help reduce complexity.
3. Adaptive Neural Fields
   - Neural field models that incorporate plasticity rules (e.g., Hebbian learning) to explain the emergence of cortical maps.
4. Neuro-inspired Machine Learning
   - Spiking neural networks (SNNs) in computational neuroscience may inspire next-generation machine learning systems.
   - Training spiking models remains challenging, but bridging spiking dynamics and artificial neural networks is an active area of research.
5. Stochastic Approaches
   - Real neurons have noisy inputs and probabilistic release of neurotransmitters.
   - Stochastic differential equations (SDEs) capture fluctuations in membrane potentials and synaptic currents.
6. Connecting to Cognition and Consciousness
   - Dynamical systems approaches to higher-level functions such as working memory, decision-making, and even consciousness.
   - Models that incorporate large-scale connectivity and complexity might yield insights into how the brain transitions between cognitive states.
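As a small taste of the stochastic direction, here is an Euler-Maruyama integration of a noisy leaky integrate-and-fire neuron. All parameter values are illustrative; the mean drive alone is subthreshold, so every spike is noise-driven:

```python
import numpy as np

# Euler-Maruyama integration of a noisy LIF neuron:
# dV = (-gL*(V - EL) + I_ext)/C * dt + sigma * dW. Parameters illustrative.
rng = np.random.default_rng(0)

C, gL, EL = 1.0, 0.1, -65.0
V_th, V_reset = -50.0, -70.0
I_ext, sigma = 1.0, 2.0     # mean steady state is -55 mV, below threshold
dt, t_max = 0.1, 1000.0     # ms

V = EL
spikes = 0
for _ in range(int(t_max / dt)):
    dW = rng.normal(0.0, np.sqrt(dt))  # Wiener increment, variance dt
    V += dt * (-gL * (V - EL) + I_ext) / C + sigma * dW
    if V >= V_th:
        V = V_reset
        spikes += 1

print(spikes)  # noise-driven spikes despite subthreshold mean input
```

Deterministically, this neuron would settle at ( -55 \, \text{mV} ) and never fire; the membrane-potential fluctuations from the noise term are what carry it across threshold, a regime thought to describe many cortical neurons in vivo.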
Conclusion
From the humble beginnings of single-compartment descriptions to the grandeur of neural fields spanning entire cortical regions, differential equations unify our understanding of the brain’s electrical activity. They offer both microscopic detail—how ion channels produce a spike—and macroscopic scope—how brain regions interact and potentially shape our thoughts.
For those new to computational neuroscience, starting with simple LIF or Wilson-Cowan models is an excellent way to build intuition. As you progress, exploring Hodgkin-Huxley or neural field models can give deeper insight into the physiological underpinnings and large-scale patterns of brain activity.
Ultimately, modeling the brain with differential equations is not just about simulating voltage traces. It’s about decoding how networks of neurons give rise to the dynamic landscapes we associate with perception, action, and thought. By continuing to refine these models and connecting them with experimental data, we move one step closer to unraveling the vast complexities of the mind—perhaps even glimpsing how firing rates evolve into flows of thought.
Happy modeling!