# Convergence
A simple web application for comparing responses across multiple AI foundation models. Run queries simultaneously across different models and analyze inconsistencies in their responses.
## Features
- Submit prompts to multiple AI models simultaneously
- Support for Anthropic Claude, OpenAI GPT, and Google Gemini models
- View extended thinking for supported models (Claude 4.1 Opus, GPT-5.1, Gemini 2.5 Pro)
- Automatic inconsistency detection using your default model
- Generate detailed critiques comparing model responses
- Clean, collapsible UI for managing multiple query rounds
- Runs entirely locally on your Apple Silicon Mac
## Supported Models

### Anthropic Claude
- Claude 4.1 Opus (with extended thinking)

### OpenAI
- GPT-5.1 (with extended thinking)

### Google Gemini
- Gemini 2.5 Pro (with extended thinking)
## Prerequisites

- Node.js (v18 or higher recommended)
- npm or yarn
- API keys for the services you want to use
## Installation

- Clone or download this repository.
- Install dependencies with `npm install`.
- Create a `.env` file by copying the provided example.
- Edit the `.env` file and add your API keys:

```
ANTHROPIC_API_KEY=your_anthropic_api_key_here
OPENAI_API_KEY=your_openai_api_key_here
GOOGLE_API_KEY=your_google_api_key_here
PORT=3000
```
You only need to provide keys for the services you want to use. If you only have an Anthropic key, you'll only see Anthropic models in the UI.
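For illustration, the server can decide which providers to expose by checking which keys are present. This is a sketch assuming the variable names shown in the `.env` example above, not the actual `server.js` logic:

```javascript
// Sketch: map each provider to the environment variable it needs,
// then keep only the providers whose key is actually set.
// Variable names match the .env example above; server.js may differ.
const PROVIDER_KEYS = {
  anthropic: 'ANTHROPIC_API_KEY',
  openai: 'OPENAI_API_KEY',
  google: 'GOOGLE_API_KEY',
};

const availableProviders = Object.entries(PROVIDER_KEYS)
  .filter(([, envVar]) => Boolean(process.env[envVar]))
  .map(([provider]) => provider);

console.log('Available providers:', availableProviders);
```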
## Usage

- Start the server.
- Open your web browser and navigate to `http://localhost:3000` (or the port set in `.env`).
- Using the app:
- Enter your prompt in the text area
- Select one or more models to query
- Choose a default model for analysis (this model will analyze inconsistencies)
- Click "Submit Query"
- View responses from each model (including thinking for supported models)
- Review the automatic inconsistency analysis
- Optionally click "Generate Detailed Critique" for a deeper comparison
- Submit new queries; previous rounds collapse automatically
## How It Works

### Query Flow
- You submit a prompt and select multiple models
- The app queries all selected models in parallel
- Responses are displayed side-by-side
- Your default model analyzes all responses for inconsistencies
- You can optionally request a detailed critique
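The parallel fan-out described above can be sketched roughly as follows. The `queryAll` helper and the mock models are hypothetical stand-ins, not the actual `server.js` code; `Promise.allSettled` keeps one model's failure from discarding the others' responses:

```javascript
// Illustrative sketch of querying several models in parallel.
async function queryAll(prompt, models) {
  const settled = await Promise.allSettled(
    models.map((m) => m.query(prompt))
  );
  return settled.map((result, i) => ({
    model: models[i].name,
    ok: result.status === 'fulfilled',
    response: result.status === 'fulfilled' ? result.value : String(result.reason),
  }));
}

// Mock models: one answers, one simulates a provider error.
const mocks = [
  { name: 'model-a', query: async () => 'answer A' },
  { name: 'model-b', query: async () => { throw new Error('rate limited'); } },
];

queryAll('What is 2 + 2?', mocks).then((results) => {
  console.log(results.map((r) => `${r.model}: ${r.ok ? r.response : 'error'}`));
});
```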
### Inconsistency Detection
The app uses your selected default model to:
- Compare responses from all models
- Identify significant disagreements or inconsistencies
- Confirm when responses are consistent
- Provide a summary of key differences
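As a rough sketch, the analysis request sent to the default model could be assembled like this. The prompt wording and the `buildAnalysisPrompt` helper are hypothetical; the actual text lives in `server.js`:

```javascript
// Hypothetical sketch of building the inconsistency-analysis prompt
// from the collected responses; the real wording may differ.
function buildAnalysisPrompt(userPrompt, responses) {
  const sections = responses
    .map((r) => `--- ${r.model} ---\n${r.text}`)
    .join('\n\n');
  return [
    'Compare the following model responses to the prompt below.',
    'Identify significant disagreements, confirm consistency where it exists,',
    'and summarize the key differences.',
    `Prompt: ${userPrompt}`,
    sections,
  ].join('\n\n');
}

const analysisPrompt = buildAnalysisPrompt('What is the capital of Australia?', [
  { model: 'model-a', text: 'Canberra.' },
  { model: 'model-b', text: 'Sydney.' },
]);
console.log(analysisPrompt);
```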
### Detailed Critique
When you click "Generate Detailed Critique", the default model will:
- Point out specific inconsistencies between models
- Explain which approach might be more accurate
- Highlight areas where models agreed
- Suggest which response(s) might be most reliable
## Architecture

### Backend (`server.js`)
- Express.js server handling API requests
- Secure API key management via environment variables
- Support for multiple AI providers (Anthropic, OpenAI, Google)
- Parallel query execution for better performance
- Inconsistency analysis and critique generation
### Frontend (`public/`)
- Pure vanilla JavaScript (no framework dependencies)
- Responsive design optimized for Mac displays
- Real-time updates as responses arrive
- Collapsible round management for clean UI
- XSS protection via proper HTML escaping
## Security Notes

- API keys are stored in `.env` and never exposed to the frontend
- All API calls are proxied through the backend server
- The `.env` file is gitignored to prevent accidental commits
- HTML content is properly escaped to prevent XSS attacks
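A minimal escaping helper of the kind referred to above might look like this. It is a sketch, not necessarily the exact function in `public/`:

```javascript
// Escape the five characters HTML treats specially, so model output
// can be inserted into the page as text rather than markup.
// Ampersand must be replaced first to avoid double-escaping.
function escapeHtml(str) {
  return String(str)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

console.log(escapeHtml('<script>alert("hi")</script>'));
```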
## Customization

### Adding New Models

Edit `server.js` and add new model configurations to the `MODELS` object:

```js
'your-model-key': {
  provider: 'anthropic', // or 'openai' or 'google'
  displayName: 'Your Model Name',
  modelId: 'api-model-id',
  supportsThinking: false
}
```
### Changing Styles

Edit `public/style.css` to customize the appearance. The stylesheet defines CSS variables for easy theming:

```css
:root {
  --primary-color: #6366f1;
  --background: #0f172a;
  /* ... more variables */
}
```
### Adjusting Token Limits

In `server.js`, modify the `max_tokens` parameter in the API calls:

```js
max_tokens: 4096, // Adjust as needed
```
## Troubleshooting

### Models not appearing

- Check that your API keys are correctly set in `.env`
- Restart the server after updating `.env`
- Check the console for error messages

### API errors

- Verify your API keys are valid and have sufficient credits
- Check that you have access to the models you're trying to use
- Review the browser console and server logs for detailed error messages

### Server won't start

- Make sure port 3000 is not already in use
- Try changing the `PORT` in `.env`
- Ensure all dependencies are installed (`npm install`)
## Development

The app uses ES modules (`"type": "module"` in `package.json`), so all imports use `import` syntax.
To modify the app:
- Edit the relevant files
- Restart the server to see backend changes
- Refresh the browser to see frontend changes
## License
MIT
## Contributing
Feel free to submit issues or pull requests for improvements!