Pluralis Research

Beyond Top-K: Pipeline Parallelism Over Slow Networks
A progress update on the core problem in Protocol Learning.
May 2 • Sameera Ramasinghe
SWARM Parallel with Asynchronous Updates
We significantly improve the reliability, robustness, and speed of asynchronous pipeline-parallel training.
May 1 • Yan Zuo and Gil Avraham

April 2025

Efficient Asynchronous Low-Bandwidth Training on Heterogeneous GPUs
We show that asynchronous methods can surpass the synchronous method in DiLoCo training while supporting heterogeneous GPUs.
Apr 30 • Thalaiyasingam Ajanthan

March 2025

A Third Path: Protocol Learning
Developing truly open-source AI
Mar 19 • Alexander Long

October 2024

Article 2: Protocol Learning, Protocol Models and the Great Convergence
Two enormous, previously disparate fields converge, opening a path toward the largest models ever to be trained.
Oct 13, 2024 • Alexander Long

July 2024

Decentralized Training Looms
Collaborative training of foundation models is closer to actualization than broadly understood. The popular view that low-bandwidth node-to-node…
Jul 8, 2024 • Alexander Long
© 2025 Pluralis