OpenWebUI CLI v0.1.0-alpha Release Notes
Release Date: 2025-11-30 | Status: Alpha | Python: 3.11+ | License: MIT
Overview
Welcome to the first alpha release of the official OpenWebUI CLI! This command-line interface lets you interact with your OpenWebUI instance directly from your terminal.
Whether you're automating workflows, integrating with scripts, or just prefer the command line, OpenWebUI CLI provides a comprehensive set of commands for:
- Authentication and token management
- Real-time streaming chat
- Retrieval-Augmented Generation (RAG)
- Model management
- Server administration
- Configuration management
This release implements the full command surface described below, with 80%+ test coverage and comprehensive error handling, though a few commands remain stubs (see Known Limitations).
What's New in This Release
Headline Features
1. Streaming Chat Operations
Send messages and get real-time token-by-token responses directly in your terminal. Perfect for interactive workflows and rapid development.
openwebui chat send -m llama3.2:latest -p "Explain quantum computing"
2. Secure Authentication
Your tokens stay safe with OS-level keyring integration. Multiple authentication methods are supported:
- System keyring (recommended)
- Environment variables
- CLI flags
In every case, credentials are never hardcoded or stored in the config file.
3. Multi-Profile Support
Manage multiple OpenWebUI servers from a single machine. Switch between them effortlessly:
openwebui --profile production chat send -m gpt-4 -p "Hello"
openwebui --profile local chat send -m llama3.2:latest -p "Hi"
4. Comprehensive Model Management
Pull, delete, list, and inspect models with a clean CLI interface:
openwebui models list # List all models
openwebui models pull llama3.2 # Download a model
openwebui models delete llama3.2 # Remove a model
openwebui models info llama3.2 # Get model details
5. RAG Capabilities
Upload files, organize collections, and search your document knowledge base:
openwebui rag files upload ./documents/*.pdf
openwebui rag collections create "Project Documentation"
openwebui rag search "authentication flow"
6. Admin Tools
Get server statistics and diagnostics (requires admin role):
openwebui admin stats
openwebui admin users list
Installation
From PyPI (Recommended)
pip install openwebui-cli
From Source
git clone https://github.com/dannystocker/openwebui-cli.git
cd openwebui-cli
pip install -e .
With Optional Dependencies
pip install openwebui-cli[dev] # Development tools (pytest, mypy, ruff)
Troubleshooting Installation
Permission denied:
pip install --user openwebui-cli
# or use a virtual environment
python -m venv venv && source venv/bin/activate
pip install openwebui-cli
Keyring issues: If you're running in a container or headless environment without keyring support:
pip install keyrings.alt # Lightweight keyring backend
# or use environment variables/CLI flags instead
Quick Start
1. Initialize Configuration
openwebui config init
This creates a configuration file at:
- Linux/macOS: ~/.config/openwebui/config.yaml
- Windows: %APPDATA%\openwebui\config.yaml
2. Login
openwebui auth login
You'll be prompted for:
- Server URL (default: http://localhost:8080)
- Username
- Password
Your token is securely stored in your system keyring.
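For non-interactive environments such as CI, you can skip the login prompt and supply the token through the OPENWEBUI_TOKEN environment variable or the --token flag instead of the keyring. A minimal sketch:
# Non-interactive auth: export the token once, then every command picks it up
export OPENWEBUI_TOKEN="your-token"
openwebui auth whoami # Verify the token is accepted
openwebui chat send -m llama3.2:latest -p "Smoke test"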
3. Send Your First Message
openwebui chat send -m llama3.2:latest -p "Hello, world!"
4. Continue a Conversation
openwebui chat send -m llama3.2:latest -p "Tell me more" --chat-id <CHAT_ID>
Common Commands
Authentication
openwebui auth login # Login and store token
openwebui auth whoami # Show current user
openwebui auth logout # Remove stored token
openwebui auth token show # Display current token (masked)
openwebui auth token refresh # Refresh your token
Chat
# Simple chat
openwebui chat send -m llama3.2:latest -p "Hello"
# No streaming
openwebui chat send -m llama3.2:latest -p "Hello" --no-stream
# With RAG context
openwebui chat send -m llama3.2:latest -p "Summarize this" --file <FILE_ID>
# Continue conversation
openwebui chat send -m llama3.2:latest -p "More info" --chat-id <CHAT_ID>
Models
openwebui models list # List all models
openwebui models info llama3.2 # Model details
openwebui models pull llama3.2 # Download model
openwebui models delete llama3.2 # Remove model
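For scripted setups you can combine these with --format json to make model management idempotent. A sketch, assuming models list with --format json emits a JSON array of objects with a name field and that jq is installed:
# Pull llama3.2 only if it is not already present (the JSON shape is an assumption)
if ! openwebui --format json models list | jq -e '.[] | select(.name == "llama3.2")' > /dev/null; then
  openwebui models pull llama3.2
fi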
RAG
# Files
openwebui rag files upload ./docs.pdf
openwebui rag files list
openwebui rag files delete <FILE_ID>
# Collections
openwebui rag collections create "Docs"
openwebui rag collections list
openwebui rag search "topic" --collection <COLL_ID>
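These commands chain into a full workflow. The sketch below assumes that rag files upload with --format json returns the new file's ID in an id field (field name hypothetical) and that jq is available:
# Upload a document, capture its ID, then ask a question grounded in it
# (the .id field name is an assumption about the response shape)
FILE_ID=$(openwebui --format json rag files upload ./docs.pdf | jq -r '.id')
openwebui chat send -m llama3.2:latest -p "Summarize this document" --file "$FILE_ID"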
Configuration
openwebui config init # Initialize config
openwebui config show # Display current config
openwebui config set profiles.local.uri http://localhost:8080
Global Options
All commands support these global options:
# Token management
openwebui --token <TOKEN> chat send ...
# Server configuration
openwebui --uri http://localhost:8080 chat send ...
openwebui --profile production chat send ...
# Output formatting
openwebui --format json models list
openwebui --format yaml config show
# Behavior
openwebui --quiet chat send ... # Suppress non-essential output
openwebui --verbose chat send ... # Debug logging
openwebui --timeout 30 chat send ... # 30 second timeout
# Version
openwebui --version
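The --format json flag makes any command's output pipeline-friendly. For example, assuming models list emits a JSON array of objects with a name field and jq is installed:
# Print model names one per line, ready for xargs or shell loops
openwebui --format json models list | jq -r '.[].name'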
Exit Codes
Every command returns a meaningful exit code:
| Code | Meaning | Example |
|---|---|---|
| 0 | Success | Command completed successfully |
| 1 | General error | Unexpected error occurred |
| 2 | Usage error | Missing required arguments |
| 3 | Auth error | Invalid token or credentials |
| 4 | Network error | Connection refused or timeout |
| 5 | Server error | 5xx response from server |
This makes it easy to use the CLI in scripts and automation:
openwebui chat send -m llama3.2 -p "Test" || exit $?
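Because network failures get their own code (4), a wrapper can retry only transient errors and fail fast on everything else. A minimal sketch:
# Retry up to 3 times, but only on network errors (exit code 4)
for attempt in 1 2 3; do
  openwebui chat send -m llama3.2 -p "Test"
  code=$?
  [ "$code" -ne 4 ] && exit "$code" # Success or non-network error: stop here
  echo "Network error, retrying ($attempt/3)..." >&2
  sleep 2
done
exit 4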
Configuration
OpenWebUI CLI stores its configuration in XDG-compliant paths on Linux/macOS and under %APPDATA% on Windows. Create or edit ~/.config/openwebui/config.yaml (Linux/macOS) or %APPDATA%\openwebui\config.yaml (Windows):
# OpenWebUI CLI Configuration
version: 1
# Default profile to use
default_profile: local
# Server profiles for different environments
profiles:
  local:
    uri: http://localhost:8080
    # Token stored securely in system keyring
  production:
    uri: https://openwebui.example.com
    # Token stored securely in system keyring
# CLI defaults
defaults:
  model: llama3.2:latest # Default model for chat
  format: text           # Output format: text, json, yaml
  stream: true           # Enable streaming by default
# Output preferences
output:
  colors: true           # Colored output
  progress_bars: true    # Show progress indicators
  timestamps: false      # Add timestamps to output
Important: Tokens are never stored in the config file. They're always kept secure in your system keyring.
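Because --profile is a global flag, automation can target different servers without editing this file. A sketch, where DEPLOY_ENV is a hypothetical variable your CI system would set to a profile name:
# Route the same command to whichever profile the environment selects
openwebui --profile "${DEPLOY_ENV:-local}" models list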
Known Limitations
This is an alpha release. Please be aware of these limitations:
Expected to Change
- Command syntax may change before v1.0
- Output format is subject to refinement
- API may change based on feedback
Incomplete Features
- Some admin commands are partially implemented (stubs)
- Model pull operation shows basic progress (no percentage)
- No retry logic for network failures (yet)
Environment-Specific
- Keyring support varies by environment (desktop Linux, macOS, and Windows work out of the box)
- Streaming may fail behind certain proxies or firewalls
- Container/headless environments need keyrings.alt or environment variables
Performance
- Large file uploads not yet optimized
- Batch operations not yet implemented
- No background/async task support yet
What's Coming in v0.1.1-alpha
- Complete remaining command implementations
- Enhanced error messages and recovery
- Retry logic for network failures
- Better progress reporting for long operations
- Performance improvements
Troubleshooting
Authentication Issues
"No keyring backend found"
# Solution 1: Use environment variable
export OPENWEBUI_TOKEN="your-token"
openwebui chat send -m llama3.2:latest -p "Hello"
# Solution 2: Use CLI flag
openwebui --token "your-token" chat send -m llama3.2:latest -p "Hello"
# Solution 3: Install lightweight keyring
pip install keyrings.alt
"Authentication failed" or "401 Unauthorized"
# Verify token
openwebui auth whoami
# Re-login
openwebui auth logout
openwebui auth login
# Check server URL
openwebui config show
Connection Issues
"Connection refused"
# Verify server is running
curl http://localhost:8080
# Check configured URL
openwebui config show
# Update URL
openwebui config set profiles.local.uri http://your-server:8080
"Connection timeout"
# Increase timeout
openwebui --timeout 30 chat send -m llama3.2 -p "Hello"
# Check server health and network
Chat/Streaming Issues
"Chat hangs or doesn't stream"
# Try without streaming
openwebui chat send -m llama3.2 -p "Hello" --no-stream
# Enable debug logging
openwebui --verbose chat send -m llama3.2 -p "Hello"
Development
Want to contribute? Great! Here's how to get started:
Clone and Setup
git clone https://github.com/dannystocker/openwebui-cli.git
cd openwebui-cli
python -m venv venv
source venv/bin/activate # or venv\Scripts\activate on Windows
pip install -e ".[dev]"
Run Tests
pytest # Run all tests
pytest -v # Verbose output
pytest --cov # With coverage report
pytest tests/test_chat.py # Single file
Code Quality
mypy openwebui_cli # Type checking
ruff check openwebui_cli # Linting
ruff format openwebui_cli # Auto-format
Debug Logging
openwebui --verbose chat send -m llama3.2 -p "Hello"
Community & Support
Report Issues
Found a bug or have a feature request? Please open an issue: https://github.com/dannystocker/openwebui-cli/issues
Provide Feedback
We'd love to hear how you're using OpenWebUI CLI! Share your feedback, use cases, and suggestions.
Contribute
Contributions are welcome! See the RFC proposal for design details.
Migration Guide (If Upgrading)
This is the first release, so there's nothing to migrate from. Just install and enjoy!
Version Information
- Version: 0.1.0-alpha
- Release Date: 2025-11-30
- Python Requirement: 3.11+
- Status: Alpha (breaking changes possible)
- License: MIT
Acknowledgments
- OpenWebUI - The amazing project we're extending
- Typer - CLI framework
- Rich - Beautiful terminal output
- Pydantic - Data validation
License
MIT License © 2025 InfraFabric Team
See LICENSE for details.
Changelog
See CHANGELOG.md for complete version history.