llama.cpp model provider support #585


Open · wants to merge 6 commits into main
Conversation

westonbrown

Description

Adds llama.cpp model provider support to the Strands Agents SDK, enabling integration with locally hosted llama.cpp servers and making it possible to run agent workloads at the edge on resource-constrained devices with small foundation models. The implementation provides full support for llama.cpp-specific features.

Key Features:

  • llama.cpp support: Direct integration with llama.cpp servers via OpenAI-compatible API
  • Advanced sampling parameters: Full support for llama.cpp-specific parameters (mirostat, top_k, min_p, typical_p, tfs_z, top_a, etc.)
  • Grammar constraints: GBNF grammar support for constrained generation
  • Multimodal capabilities: Audio and image content support for compatible models (e.g., Qwen2.5-Omni); see the sketch after this list
  • JSON schema validation: Native structured output with schema constraints
  • Tool calling support: Full support for function calling with proper formatting
  • Custom error handling: Specific exceptions for context overflow and server overload scenarios
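
For example, a multimodal request could look like the following sketch. It assumes a llama.cpp server running a vision-capable model; the image block follows the SDK's existing ContentBlock shape, and the file name is a placeholder:

from strands import Agent
from strands.models.llamacpp import LlamaCppModel

# Sketch: pass text and image content blocks together to a vision-capable model
model = LlamaCppModel(base_url="http://localhost:8080")
agent = Agent(model=model)

with open("photo.png", "rb") as f:
    image_bytes = f.read()

response = agent([
    {"text": "Describe this image."},
    {"image": {"format": "png", "source": {"bytes": image_bytes}}},
])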

Implementation Details:

  • Architecture: Inherits from base Model class following SDK patterns (similar to Ollama)
  • Parameter Handling:
    • OpenAI-compatible parameters (temperature, max_tokens, top_p, etc.) go in the request root
    • llama.cpp-specific parameters go in extra_body for clean separation (illustrated in the sketch after this list)
    • Grammar and json_schema parameters are placed directly in the request body
  • Supported Parameters:
    • llama.cpp-specific: repeat_penalty, top_k, min_p, typical_p, tfs_z, top_a, mirostat, mirostat_lr, mirostat_ent, penalty_last_n, n_probs, min_keep, ignore_eos, logit_bias, cache_prompt, slot_id, samplers
  • Test Coverage: 25 comprehensive tests covering all functionality
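
To make the parameter separation concrete, here is a sketch of the request shape (illustrative only, not the exact payload the implementation emits):

# Sketch: how a formatted request separates the two parameter groups
request = {
    # OpenAI-compatible parameters stay at the request root
    "messages": [{"role": "user", "content": "Tell me about AI"}],
    "temperature": 0.7,
    "max_tokens": 100,
    # llama.cpp-specific parameters are nested under extra_body, which an
    # OpenAI-compatible client forwards to the server unchanged
    "extra_body": {
        "repeat_penalty": 1.1,
        "top_k": 40,
        "min_p": 0.05,
    },
}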

Related Issues

Enables local model support via llama.cpp servers

Documentation PR

Documentation to be added in follow-up PR

Type of Change

New feature

Testing

How have you tested the change? Verify that the changes do not break functionality or introduce warnings in consuming repositories: agents-docs, agents-tools, agents-cli

  • I ran hatch run prepare - All checks passed successfully
  • Comprehensive test suite with 25 tests covering:
    • Basic configuration and initialization
    • Request formatting with all parameter types
    • Streaming response handling
    • Structured output with JSON schemas
    • Grammar constraint application
    • Multimodal content (audio/image) formatting
    • Error handling (context overflow, server overload)
    • Tool calling and function formatting
    • All tests passing (100% success rate)

Example Usage:

from strands import Agent
from strands.models.llamacpp import LlamaCppModel

# Basic usage
model = LlamaCppModel(base_url="http://localhost:8080")
agent = Agent(model=model)
response = agent("Tell me about AI")

# With advanced parameters
model = LlamaCppModel(
    base_url="http://localhost:8080",
    params={
        "temperature": 0.7,
        "max_tokens": 100,
        "repeat_penalty": 1.1,
        "top_k": 40,
        "min_p": 0.05
    }
)

# Grammar constraints
model.use_grammar_constraint('''
    root ::= answer
    answer ::= "yes" | "no"
''')
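
Structured output follows the same pattern (a sketch: it assumes the SDK's structured_output API, which derives a JSON schema from the Pydantic model and applies llama.cpp's json_schema constraint):

from pydantic import BaseModel

# Structured output: the JSON schema is derived from the Pydantic model
class Verdict(BaseModel):
    answer: str
    confidence: float

result = agent.structured_output(Verdict, "Is water wet?")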

Checklist

  • I have read the CONTRIBUTING document
  • I have added any necessary tests that prove my fix is effective or my feature works
  • I have updated the documentation accordingly (README.md updated to include llama.cpp in supported providers)
  • I have added an appropriate example to the documentation to outline the feature, or no new docs are needed
  • My changes generate no new warnings
  • Any dependent changes have been merged and published

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

"type": "image_url",
}

# Handle audio content (not in standard ContentBlock but supported by llama.cpp)

Member:

I believe we should start supporting audio and video content blocks for models in our ContentBlock ^^ wdyt @pgrayy


Author:

Adding an AudioContent to the core ContentBlock type would be a great follow-up enhancement!
