lazypr

Configuration

Configuration examples for different use cases

Configuration File

LazyPR stores configuration in ~/.lazypr. You can edit this file directly or use the lazypr config commands.

Minimal Configuration

The simplest setup to get started. Perfect for personal projects and first-time users.

~/.lazypr
# LazyPR Minimal Configuration
# The only required setting is your API key

GROQ_API_KEY=your-groq-api-key-here

# Provider defaults to groq if not specified
# PROVIDER=groq

# Alternative: Use OpenAI
# OPENAI_API_KEY=sk-your-openai-api-key-here
# PROVIDER=openai

# Alternative: Use Ollama (local)
# PROVIDER=openai
# OPENAI_BASE_URL=http://localhost:11434/v1
# OPENAI_API_KEY=ollama
# MODEL=llama3.3

# Alternative: Use LM Studio (local)
# PROVIDER=openai
# OPENAI_BASE_URL=http://localhost:1234/v1
# OPENAI_API_KEY=lmstudio
# MODEL=your-model-name

Get a free API key at console.groq.com

Setting Up Minimal Config

# Set your API key
lazypr config set GROQ_API_KEY=gsk_xxxxxxxxxxxxx

# Verify configuration
lazypr config list

Team Configuration

Standardized configuration for team environments. Ensures consistent PR descriptions across all team members.

~/.lazypr
# LazyPR Team Configuration
# Recommended settings for team consistency

# API Keys
GROQ_API_KEY=your-groq-api-key-here
CEREBRAS_API_KEY=your-cerebras-api-key-here

# Provider Settings
PROVIDER=groq
MODEL=llama-3.3-70b-versatile

# Team Standards
DEFAULT_BRANCH=main
LOCALE=en
FILTER_COMMITS=true

# Context for consistent style
CONTEXT=Please review this PR carefully and provide feedback

# Reliability Settings
MAX_RETRIES=3
TIMEOUT=30000

Team Setup Instructions

Each team member configures their own machine:

# Set required values
lazypr config set GROQ_API_KEY=gsk_xxxxxxxxxxxxx
lazypr config set PROVIDER=groq
lazypr config set MODEL=llama-3.3-70b-versatile
lazypr config set DEFAULT_BRANCH=main
lazypr config set LOCALE=en
lazypr config set FILTER_COMMITS=true
lazypr config set MAX_RETRIES=3
lazypr config set TIMEOUT=30000

Alternatively, distribute a shared config file template:

# Create config file
cat > ~/.lazypr << 'EOF'
GROQ_API_KEY=YOUR_KEY_HERE
PROVIDER=groq
MODEL=llama-3.3-70b-versatile
DEFAULT_BRANCH=main
LOCALE=en
FILTER_COMMITS=true
MAX_RETRIES=3
TIMEOUT=30000
EOF

# Team members replace the API key
lazypr config set GROQ_API_KEY=their-actual-key

Never commit API keys to version control. Each team member should have their own key.

Multi-Provider Configuration

Configuration supporting multiple AI providers with easy switching. Ideal for production environments and cost optimization.

~/.lazypr
# LazyPR Multi-Provider Configuration
# Supports multiple providers for flexibility and fallback

# Provider API Keys
GROQ_API_KEY=your-groq-api-key-here
CEREBRAS_API_KEY=your-cerebras-api-key-here
OPENAI_API_KEY=sk-your-openai-api-key-here

# Active Provider (groq, cerebras, or openai)
PROVIDER=groq

# Optional: For OpenAI-compatible APIs (Ollama, LM Studio, etc.)
# OPENAI_BASE_URL=http://localhost:11434/v1

# Model Selection
MODEL=llama-3.3-70b-versatile

# Common Settings
DEFAULT_BRANCH=main
FILTER_COMMITS=true

# Increased reliability for production
MAX_RETRIES=5
TIMEOUT=45000

Switching Providers

# Switch to Cerebras
lazypr config set PROVIDER=cerebras

# Switch to OpenAI
lazypr config set PROVIDER=openai

# Or use flag for one-time override
lazypr main --provider cerebras
lazypr main --provider openai

# Check current provider
lazypr config get PROVIDER

Manual Fallback Strategy

# Try Groq first, fallback to Cerebras
lazypr main --provider groq || lazypr main --provider cerebras
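The one-liner above generalizes to a small shell function. This is a sketch for a POSIX shell, assuming only the `lazypr main --provider <name>` flag shown earlier; the function name is illustrative:

```shell
# Sketch: try each provider in order until one succeeds.
# Assumes the `lazypr main --provider <name>` flag shown above.
lazypr_fallback() {
  for p in groq cerebras openai; do
    if lazypr main --provider "$p"; then
      return 0
    fi
    echo "provider $p failed, trying next..." >&2
  done
  echo "all providers failed" >&2
  return 1
}

# Usage: lazypr_fallback
```

Because each attempt runs the full generation, this trades extra latency on failure for reliability; reorder the provider list to match your cost or quality preference.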

Local AI Configuration

Configuration for running AI models locally with Ollama or LM Studio. Perfect for privacy, offline use, and zero API costs.

~/.lazypr
# LazyPR Local AI Configuration
# Run AI models completely offline and free

# Provider (use OpenAI-compatible mode)
PROVIDER=openai

# Local server endpoint
OPENAI_BASE_URL=http://localhost:11434/v1  # Ollama
# OPENAI_BASE_URL=http://localhost:1234/v1  # LM Studio

# Placeholder API key (not validated for local)
OPENAI_API_KEY=ollama

# Model name (must match installed model)
MODEL=llama3.3

# Standard settings
DEFAULT_BRANCH=main
FILTER_COMMITS=true

Benefits of Local AI:

  • 🆓 Completely free - no API costs
  • 🔒 Private - your code never leaves your machine
  • ⚡ Fast - no network latency
  • 📴 Offline - works without internet

Setting Up Local AI

For Ollama:

# 1. Install Ollama from ollama.com

# 2. Pull a model
ollama pull llama3.3

# 3. Configure LazyPR
lazypr config set PROVIDER=openai
lazypr config set OPENAI_BASE_URL=http://localhost:11434/v1
lazypr config set OPENAI_API_KEY=ollama
lazypr config set MODEL=llama3.3

# 4. Generate PR
lazypr main

For LM Studio:

# 1. Install LM Studio from lmstudio.ai

# 2. Download a model in the app

# 3. Start the local server

# 4. Configure LazyPR
lazypr config set PROVIDER=openai
lazypr config set OPENAI_BASE_URL=http://localhost:1234/v1
lazypr config set OPENAI_API_KEY=lmstudio
lazypr config set MODEL=your-model-name

# 5. Generate PR
lazypr main

Configuration Reference

All Available Settings

| Setting          | Description                               | Default          |
|------------------|-------------------------------------------|------------------|
| GROQ_API_KEY     | Groq API key                              | -                |
| CEREBRAS_API_KEY | Cerebras API key                          | -                |
| OPENAI_API_KEY   | OpenAI API key (or placeholder for local) | -                |
| OPENAI_BASE_URL  | Custom OpenAI-compatible API endpoint     | -                |
| PROVIDER         | AI provider (groq, cerebras, or openai)   | groq             |
| MODEL            | Model to use                              | Provider default |
| DEFAULT_BRANCH   | Target branch for comparison              | main             |
| LOCALE           | Output language                           | en               |
| FILTER_COMMITS   | Filter merge/dependency commits           | true             |
| CONTEXT          | Custom context for AI                     | -                |
| MAX_RETRIES      | Retry attempts on failure                 | 3                |
| TIMEOUT          | Request timeout in ms                     | 30000            |
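For reference, a complete ~/.lazypr exercising every setting in the table might look like this. All values are illustrative placeholders; OPENAI_BASE_URL is commented out because it only applies to local or OpenAI-compatible endpoints:

```
# Complete ~/.lazypr (illustrative values)
GROQ_API_KEY=your-groq-api-key-here
CEREBRAS_API_KEY=your-cerebras-api-key-here
OPENAI_API_KEY=sk-your-openai-api-key-here
# OPENAI_BASE_URL=http://localhost:11434/v1

PROVIDER=groq
MODEL=llama-3.3-70b-versatile

DEFAULT_BRANCH=main
LOCALE=en
FILTER_COMMITS=true
CONTEXT=Focus on user-facing changes in the summary

MAX_RETRIES=3
TIMEOUT=30000
```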

Config Commands

# Set a value
lazypr config set SETTING=value

# Get a value
lazypr config get SETTING

# List all settings
lazypr config list

# Remove a setting
lazypr config set SETTING=

Environment Variables

You can also configure LazyPR using environment variables.

# In your shell profile (~/.bashrc, ~/.zshrc)
export GROQ_API_KEY=gsk_xxxxxxxxxxxxx
export LAZYPR_PROVIDER=groq
export LAZYPR_MODEL=llama-3.3-70b-versatile

Environment variables take precedence over config file settings.

Security Best Practices

  1. Never commit API keys - Use environment variables or local config files
  2. Use separate keys - Each team member should have their own API key
  3. Rotate keys regularly - Update keys periodically for security
  4. Limit key permissions - Use keys with minimal required permissions
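Beyond the practices above, a minimal hardening step on POSIX systems is to make the config file readable only by your own user, since ~/.lazypr holds API keys in plain text:

```shell
# Ensure the config file exists, then restrict it to the owner only
touch ~/.lazypr
chmod 600 ~/.lazypr

# Verify: the permissions column should read -rw-------
ls -l ~/.lazypr
```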

Validation

Verify your configuration is correct:

# Check all settings
lazypr config list

# Test the API connection
lazypr main -u  # -u prints token usage, confirming the request reached the provider
