🛠️ Instructions

FastFlowLM (FLM) is a deeply optimized runtime for local LLM inference on AMD NPUs —
ultra-fast, power-efficient, and 100% offline.

Its user interface and workflow are similar to Ollama's, but FastFlowLM is purpose-built for AMD's XDNA architecture.
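
For a quick taste of that workflow, here is a minimal sketch. It assumes an Ollama-style `flm` command line and a model tag such as `llama3.2:1b`; the exact commands and available model names are covered in the CLI basics and Models pages.

```bash
# Hypothetical Ollama-style session (command names and the model tag are
# assumptions; see the CLI basics and Models pages for the exact ones).

# Chat with a model interactively in the terminal:
flm run llama3.2:1b

# List the models available locally:
flm list
```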

This section walks you through how to use FastFlowLM, with examples.


📚 Sections

  • CLI basics
  • Server basics (see the sketch after this list)
  • OpenAPI / client usage
  • WebUI
  • LangChain RAG
  • LangChain Web Search
  • Obsidian integration
  • Microsoft AI Toolkit
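
For the server and client-usage topics above, the sketch below shows the general shape of the workflow: serve a model locally, then call it from any HTTP client. The serve command, port, and endpoint path are assumptions (Ollama-compatible runtimes commonly listen on `localhost:11434`); the Server basics page gives the exact details.

```bash
# Assumed workflow: serve a model locally, then call it over HTTP.
# Command, port, and endpoint path are assumptions; see Server basics.
flm serve llama3.2:1b

# From another terminal, send an Ollama-style chat request:
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2:1b",
  "messages": [{ "role": "user", "content": "Summarize what an NPU is in one sentence." }]
}'
```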