Introducing the Parallel Task MCP Server


The Parallel Task MCP Server is a first-of-its-kind tool for AIs to perform complex knowledge tasks over data from the open web. It introduces a new concept to the MCP standard: asynchronous tasks, which let users continue working in their MCP client while Parallel’s web search tasks run in the background.

## A quick primer on the Task API

Parallel’s Task API is a unique and powerful primitive for building repeatable tasks on the web. At its core, the Task API combines web data (URLs, ranking, experts, and page contents) with LLM reasoning to perform knowledge tasks. One way to think about it: imagine asking a person to research every car model made in the year 1960. It would take them a few hours to run all the necessary searches, build a spreadsheet, and return the results to you. Parallel scales up those kinds of tasks programmatically using search and AI.

Under the hood, the API makes multiple web searches across multiple steps of reasoning to understand and shape the data into an output. Any information retrieval that can be done on the open web is in scope for the Task API.

The API supports three levels of task complexity:

  1. Simple query -> Simple output
  2. Simple query -> Structured output
  3. Structured query -> Structured output
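The three shapes can be sketched as task payloads. This is a hypothetical illustration, not the official Task API schema; the field names (`input`, `input_schema`, `output_schema`) are assumptions.

```python
# Hypothetical sketch of the three task shapes; field names are
# illustrative, not the official Task API schema.

def make_task(query, output_schema=None, input_schema=None):
    """Build a task payload: a query plus optional structured schemas."""
    task = {"input": query}
    if input_schema:
        task["input_schema"] = input_schema
    if output_schema:
        task["output_schema"] = output_schema
    return task

# 1. Simple query -> simple output
t1 = make_task("Who founded Wingstop?")

# 2. Simple query -> structured output
t2 = make_task(
    "List every car model made in 1960",
    output_schema={"type": "array", "items": {"type": "object",
        "properties": {"model": {"type": "string"},
                       "manufacturer": {"type": "string"}}}},
)

# 3. Structured query -> structured output
t3 = make_task(
    {"company": "Wingstop", "year": 2024},
    input_schema={"type": "object"},
    output_schema={"type": "object",
        "properties": {"franchise_count": {"type": "integer"}}},
)
```

The progression from shape 1 to shape 3 is what lets the same task definition be rerun over many inputs, spreadsheet-style.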

Every task returns comprehensive [Basis outputs](https://docs.parallel.ai/task-api/guides/access-research-basis) - citations linking to source materials, detailed reasoning for each field, relevant excerpts, and calibrated confidence scores. This verification framework makes the Task MCP Server suitable for production workflows where accuracy and auditability matter.
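One way such a verification framework can be used in production is gating enriched fields on their calibrated confidence. The sketch below follows the Basis description above (citations, reasoning, confidence per field), but the exact key names and sample data are assumptions for illustration.

```python
# Hedged sketch: accept enriched fields only when their calibrated
# confidence clears a threshold; route the rest to human review.
# The key names ("value", "confidence", "citations", "reasoning")
# and the sample data are illustrative assumptions.

def accept_fields(basis, min_confidence=0.8):
    """Split fields into accepted values and flagged-for-review entries."""
    accepted, flagged = {}, {}
    for field, info in basis.items():
        if info["confidence"] >= min_confidence:
            accepted[field] = info["value"]
        else:
            flagged[field] = info  # keep citations and reasoning for review
    return accepted, flagged

basis = {
    "ceo": {"value": "Jane Doe", "confidence": 0.95,
            "citations": ["https://example.com/leadership"],
            "reasoning": "Named as CEO on the company leadership page."},
    "2026_store_target": {"value": None, "confidence": 0.4,
            "citations": [],
            "reasoning": "No authoritative source found."},
}
accepted, flagged = accept_fields(basis)
```

Because each flagged entry carries its citations and reasoning, a reviewer can audit exactly why a value was uncertain.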

## The Parallel Task MCP server

The Task MCP Server provides two core capabilities: deep research tasks that generate comprehensive reports, and enrichment tasks that transform existing datasets with web intelligence. Built on the same infrastructure that powers our Task API, it delivers the highest quality at every price point while eliminating complex integration work.

**Deep Research Tasks** - Generate extensive, citation-backed reports on any topic. Whether conducting competitive intelligence, due diligence, or market research, agents can now delegate complex multi-hop research that previously required custom workflows.

An example of a Deep Research Task in the Parallel Playground

Wingstop Franchise Growth Analysis: U.S. Saturation and International Expansion Opportunities

![Wingstop Franchise Growth Analysis: U.S. Saturation and International Expansion Opportunities](https://cdn.sanity.io/images/5hzduz3y/production/fdfdcf7ad65d4c00551f3eb513b57573855b29ea-2328x1698.png)

**Enrichment Tasks** - Transform existing datasets with structured web intelligence. Upload a CSV of companies, contacts, or entities, define what fields you need enriched, and receive back a complete dataset with citations and reasoning for every addition.

An example of an Enrichment Task in the Parallel Playground


![](https://cdn.sanity.io/images/5hzduz3y/production/02544e9bfc4ac7bfbea52a40aa6d2b159d24fbe1-3264x1606.png)
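The enrichment round trip described above can be sketched with the standard `csv` module. The `enrich_row` stub stands in for the web-backed enrichment; in a real run each added value would come back with citations and reasoning.

```python
# Hedged sketch of the enrichment round trip: read a CSV of entities,
# attach the fields to enrich, and write back the completed rows.
# enrich_row is a stub standing in for the web-backed enrichment.
import csv
import io

FIELDS_TO_ENRICH = ["headquarters", "employee_count"]

def enrich_row(row):
    """Stub: a real run would return web-sourced values with citations."""
    return {f: f"<{f} for {row['company']}>" for f in FIELDS_TO_ENRICH}

src = io.StringIO("company\nAcme Corp\nGlobex\n")  # illustrative input CSV
out = io.StringIO()

reader = csv.DictReader(src)
writer = csv.DictWriter(out, fieldnames=["company"] + FIELDS_TO_ENRICH)
writer.writeheader()
for row in reader:
    row.update(enrich_row(row))  # add the enriched columns
    writer.writerow(row)

enriched_csv = out.getvalue()
```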

## A new way to work with MCP

The Task MCP Server uses a first-of-its-kind async architecture that lets agents start research tasks and continue executing other work without blocking. This is critical for production agents handling complex workflows: start a deep research task on competitor analysis, move on to enriching a prospect list, then retrieve the research results when complete.

Long-running deep research tasks that might take minutes no longer freeze your agent's execution. The result is that agents can orchestrate multiple parallel research streams and maintain responsiveness while conducting thorough web intelligence gathering.
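The start-then-poll pattern behind this can be sketched as follows. `FakeTaskClient` is a stand-in that simulates a task backend; it is not the real MCP interface, and the method names are assumptions for illustration.

```python
# Minimal sketch of the async pattern: start a long-running task, do
# other work, then collect the result. FakeTaskClient simulates a task
# backend and is not the real MCP interface.
import time

class FakeTaskClient:
    """Simulates an async task backend for illustration."""
    def __init__(self):
        self._tasks = {}

    def start(self, task_id, query):
        # Pretend the task finishes shortly after being started.
        self._tasks[task_id] = {"query": query,
                                "done_at": time.time() + 0.01}

    def poll(self, task_id):
        t = self._tasks[task_id]
        if time.time() >= t["done_at"]:
            return {"status": "completed",
                    "result": f"report for: {t['query']}"}
        return {"status": "running"}

client = FakeTaskClient()
client.start("research-1", "competitor analysis of web hosts")

# The agent keeps working on other items instead of blocking...
other_work = ["enrich prospect list", "draft summary"]

# ...then retrieves the research once it completes.
while (status := client.poll("research-1"))["status"] != "completed":
    time.sleep(0.005)
result = status["result"]
```

Running several of these loops concurrently is what lets one agent orchestrate multiple research streams at once.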

## When to use the Parallel Task MCP

The Task MCP excels in both daily power-user workflows and development experimentation scenarios.

**For professionals** who use tools like ChatGPT or Claude for work, the Task MCP comes in handy for research tasks requiring high confidence, like doing a market analysis of the leading web hosting platforms across dimensions such as price or server location.

A conversation in Claude Desktop where the user asks for a query to be rerun using a different Processor


![](https://cdn.sanity.io/images/5hzduz3y/production/df39f563b9af089584b8df9e1177fa73e1fedddb-1760x1440.png)

**For developers** building their own applications with the Parallel Task API, the MCP server is a great way to quickly conduct testing, for example comparing the same prompt across different Processors.

An example in Claude Desktop where the user invokes the Parallel Task MCP to do a supplier analysis exercise


![](https://cdn.sanity.io/images/5hzduz3y/production/73cc987bd18dc99351a34dc2a58655e4462b8be4-1754x1334.png)

## State-of-the-art performance across price points

The Task MCP Server provides access to our full Processor lineup - from Lite to Ultra8x - each optimized to deliver best-in-class performance at its price point. Our Processors achieve [state-of-the-art results](/blog/deep-research-benchmarks) on benchmarks like WISER-Atomic and BrowseComp, with our Ultra8x Processor reaching **58%** accuracy on BrowseComp - higher than human expert performance.

That means your agent can dial up for critical research or dial down for routine enrichment, with transparent per-query pricing and no token-based billing complexity.


By Parallel

October 16, 2025
