About
This tool helps users find the most suitable local large language model (LLM) for their hardware by ranking models against benchmark results, giving a straightforward way to compare how different LLMs perform on a specific hardware configuration. The rankings are published on the project's GitHub page, so users can make an informed choice before downloading a model.
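The core idea of ranking models for a given machine can be sketched as follows. This is a minimal illustration, not the project's actual code: the model names, VRAM requirements, and tokens/sec figures below are invented placeholders, and the real tool derives its rankings from published benchmark results.

```python
# Hypothetical benchmark table: (model, required VRAM in GB, tokens/sec on this GPU).
# All values are illustrative, not real benchmark data.
benchmarks = [
    ("model-a-70b", 40, 12.0),
    ("model-b-8b", 6, 55.0),
    ("model-c-7b", 5, 60.0),
    ("model-d-3b", 3, 80.0),
]

def rank_models(benchmarks, vram_gb):
    """Keep only models that fit in the available VRAM, fastest first."""
    fits = [entry for entry in benchmarks if entry[1] <= vram_gb]
    return sorted(fits, key=lambda entry: entry[2], reverse=True)

# Rank for a machine with 8 GB of VRAM: the 70B model is filtered out,
# and the remaining models are ordered by benchmarked throughput.
for name, vram, tps in rank_models(benchmarks, vram_gb=8):
    print(f"{name}: {tps} tok/s")
```

In practice a ranking like this would also weigh quality benchmarks (not just throughput), but the filter-then-sort structure is the essence of matching models to hardware.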
Related Products
Parse LLM Markdown streams incrementally on the server or client
Find the best local LLM for your hardware, ranked by benchmarks
Watch a neural net learn to play Snake
JDS – a Copilot skill suite for structuring AI coding behavior
Containarium – self-hosted sandbox for AI agents, MCP-native
Halgorithem – Catching AI Hallucinations Using Trees, No AI in Pipeline