Get started using Parallel chat models in LangChain.
Parallel provides real-time web research capabilities through an OpenAI-compatible chat interface, allowing your AI applications to access current information from the web.
API Reference: For detailed documentation of all features and configuration options, head to the ChatParallelWeb API reference.
Now we can instantiate our model object and generate responses. The default model is "speed", which provides fast responses:
```python
from langchain_parallel import ChatParallelWeb

llm = ChatParallelWeb(
    model="speed",
    # temperature=0.7,
    # max_tokens=None,
    # timeout=None,
    # max_retries=2,
    # api_key="...",  # If you prefer to pass the API key in directly
    # base_url="https://api.parallel.ai",
    # other params...
)
```
See the ChatParallelWeb API Reference for the full set of available model parameters.
OpenAI compatibility: Parallel accepts many OpenAI-compatible parameters for easy migration (e.g., response_format, tools, top_p), though most are ignored by the Parallel API. See the OpenAI-compatible API section below for more details.
```python
messages = [
    (
        "system",
        "You are a helpful assistant with access to real-time web information.",
    ),
    ("human", "What are the latest developments in AI?"),
]
ai_msg = llm.invoke(messages)
ai_msg
```
```
AIMessage(content='Here\'s a summary of the latest AI news and breakthroughs as of ...', additional_kwargs={}, response_metadata={'model': 'speed', 'finish_reason': 'stop', 'created': 1764043410}, id='run--3866fa98-6ac9-4585-8d23-99c5542b582b-0')
```
```python
print(ai_msg.content)
```
```
Here's a summary of the latest AI news and breakthroughs as of...
```
We can chain our model with a prompt template like so:
```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate(
    [
        (
            "system",
            "You are a helpful research assistant with access to real-time web information. "
            "Provide comprehensive answers about {topic} with current data.",
        ),
        ("human", "{question}"),
    ]
)
chain = prompt | llm
chain.invoke(
    {
        "topic": "artificial intelligence",
        "question": "What are the most significant AI breakthroughs in 2025?",
    }
)
```
```
AIMessage(content="Based on the provided search results, here's a summary of the significant AI breakthroughs and trends...", additional_kwargs={}, response_metadata={'model': 'speed', 'finish_reason': 'stop', 'created': 1764043419}, id='run--9c521362-6724-4299-9e65-0565ec13d997-0')
```
OpenAI-compatible API: ChatParallelWeb accepts many OpenAI Chat Completions API parameters, making migration straightforward. However, most advanced parameters (such as response_format, tools, and top_p) are accepted but ignored by the Parallel API.
```python
llm = ChatParallelWeb(
    model="speed",
    # These parameters are accepted but ignored by Parallel
    response_format={"type": "json_object"},
    tools=[{"type": "function", "function": {"name": "example"}}],
    tool_choice="auto",
    top_p=1.0,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    logit_bias={},
    seed=42,
    user="user-123",
)
```
The integration automatically handles message formatting and merges consecutive messages of the same type to satisfy API requirements:
```python
from langchain.messages import HumanMessage, SystemMessage

# These consecutive system messages will be automatically merged
messages = [
    SystemMessage("You are a helpful assistant."),
    SystemMessage("Always be polite and concise."),
    HumanMessage("What is the weather like today?"),
]

# Automatically merged to a single system message before the API call
response = llm.invoke(messages)
```
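Conceptually, the merge behavior described above can be sketched in plain Python. Note this is a hypothetical illustration over simple (role, content) tuples, not the integration's actual implementation, which may join contents differently:

```python
def merge_consecutive(messages):
    """Merge adjacent messages that share the same role (illustrative sketch)."""
    merged = []
    for role, content in messages:
        if merged and merged[-1][0] == role:
            # Same role as the previous message: join the contents.
            merged[-1] = (role, merged[-1][1] + " " + content)
        else:
            merged.append((role, content))
    return merged

messages = [
    ("system", "You are a helpful assistant."),
    ("system", "Always be polite and concise."),
    ("human", "What is the weather like today?"),
]
print(merge_consecutive(messages))
# → [('system', 'You are a helpful assistant. Always be polite and concise.'),
#    ('human', 'What is the weather like today?')]
```

The two system messages collapse into one, so the request sent to the API contains a single system message followed by the human message.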