Pluralis Research
Beyond Top-K: Pipeline Parallelism Over Slow Networks
A progress update on the core problem in Protocol Learning.
May 2 • Sameera Ramasinghe
SWARM Parallel with Asynchronous Updates
We significantly improve the reliability, robustness, and speed of asynchronous pipeline-parallel training.
May 1 • Yan Zuo and Gil Avraham
April 2025
Efficient Asynchronous Low-Bandwidth Training on Heterogeneous GPUs
We show that asynchronous methods can surpass synchronous DiLoCo training while supporting heterogeneous GPUs.
Apr 30 • Thalaiyasingam Ajanthan
March 2025
A Third Path: Protocol Learning
Developing truly open-source AI
Mar 19 • Alexander Long
October 2024
Article 2: Protocol Learning, Protocol Models and the Great Convergence
Two enormous, previously disparate fields converge and a path towards the largest models to ever be trained is opened.
Oct 13, 2024 • Pluralis Research
July 2024
Decentralized Training Looms
Collaborative training of foundation models is closer to actualization than is broadly understood. The popular view that low bandwidth node-to-node…
Jul 8, 2024 • Pluralis Research