STACKQUADRANT

b4rtaz/distributed-llama

Inference Engines

Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference. More devices mean faster inference.

Overall Score: 6.3
GitHub Metrics
Stars: 2.9k
Forks: 225
Open Issues: 45
Watchers: 51
Contributors: 14
Weekly Commits: 0
Language: C++
License: MIT
Last Commit: Apr 14, 2026
Created: Dec 5, 2023
Latest Release: v0.16.5
Release Date: Feb 2, 2026
Synced: Apr 16, 2026
Quality Scores
Documentation Quality (weight: 20%): 5.0

No dedicated docs site. Description: 137 chars. Stars signal: 2,898. Contributors: 14. Score: 5/10

Community Health (weight: 20%): 6.0

Stars: 2,898. Contributors: 14. Watchers: 51. Forks: 225. Issue ratio: 1.6%. Score: 6/10

Maintenance Velocity (weight: 15%): 7.3

Last commit: 2d ago. Weekly commits: 0. Latest release: v0.16.5. Maturity bonus: 2.4y old. Score: 7.3/10

API Design & DX (weight: 20%): 6.8

Stars/issues ratio: 64. No dedicated API docs. Permissive license: MIT. Popularity signal: 2,898 stars. Score: 6.8/10

Production Readiness (weight: 15%): 6.6

Battle-tested: 2,898 stars. Peer review: 14 contributors. Versioned: v0.16.5. Licensed: MIT. Age: 2.4 years. Maintenance: last commit 2d ago. Score: 6.6/10

Ecosystem Integration (weight: 10%): 6.9

Fork interest: 225. Ecosystem: C++. Integration-friendly: MIT. Adoption: 2,898 stars. Score: 6.9/10
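The six category weights above sum to 100%, and the overall score of 6.3 is consistent with their weighted mean. A minimal sketch, assuming the overall score is computed exactly this way (the site's actual formula is not published here):

```python
# Category scores and weights as listed above (assumed weighted-mean model).
scores = {
    "Documentation Quality":  (5.0, 0.20),
    "Community Health":       (6.0, 0.20),
    "Maintenance Velocity":   (7.3, 0.15),
    "API Design & DX":        (6.8, 0.20),
    "Production Readiness":   (6.6, 0.15),
    "Ecosystem Integration":  (6.9, 0.10),
}

# Weighted mean: sum of score * weight (weights already sum to 1.0).
overall = sum(score * weight for score, weight in scores.values())
print(f"{overall:.1f}")  # prints 6.3
```

The raw weighted sum is 6.335, which rounds to the displayed 6.3; the per-category figures such as "Issue ratio: 1.6%" (45 open issues / 2,898 stars) and "Stars/issues ratio: 64" (2,898 / 45) are likewise consistent with the GitHub Metrics section.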

Tags
distributed-computing, distributed-llm, llama2, llama3, llm, llm-inference, llms, neural-network, open-llm
Radar
Radar chart plotting the six quality scores above (Documentation Quality, Community Health, Maintenance Velocity, API Design & DX, Production Readiness, Ecosystem Integration).