vLLM is a community project. Our compute resources for development and testing are supported by the following organizations. Thank you for your support!

  • a16z

  • AMD

  • Anyscale

  • AWS

  • Crusoe Cloud

  • Databricks

  • DeepInfra

  • Dropbox

  • Lambda Labs

  • Replicate

  • Roblox

  • RunPod

  • Sequoia Capital

  • Trainy

  • UC Berkeley

  • UC San Diego

  • ZhenFund

We also have an official fundraising venue through OpenCollective. We plan to use the funds to support the development, maintenance, and adoption of vLLM.