Anthropic Claude API
Safe, steerable LLMs — Claude Opus, Sonnet & Haiku
Anthropic’s Claude API provides access to the Claude family of large language models — Opus, Sonnet, and Haiku — designed with safety and alignment at the core. Claude excels at long-context tasks (200K token window), nuanced instruction-following, coding, summarization, and enterprise chat applications. The Messages API uses a familiar chat-style request structure (a model name, a list of messages, and a token limit), making migration from other LLM APIs straightforward. Claude is available on AWS Bedrock, Google Cloud Vertex AI, and directly via Anthropic's API. It is used heavily in legal, healthcare, and enterprise settings where reliability and reduced hallucination matter most.
Frequently Asked Questions
How much does the Claude API cost?
Claude 3.5 Sonnet costs $3 per million input tokens and $15 per million output tokens, comparable to GPT-4o. Claude 3 Haiku is significantly cheaper at $0.25/$1.25 per million tokens, making it ideal for high-volume applications. Claude 3 Opus is the premium tier at $15/$75 per million tokens.
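A quick way to sanity-check request costs is a small estimator built from the per-million-token prices quoted above. This is an illustrative sketch; the model keys are shorthand, and current rates should always be confirmed against Anthropic's pricing page.

```python
# Per-million-token prices quoted in the FAQ above: (input $, output $).
# Illustrative only; verify current rates on Anthropic's pricing page.
PRICES = {
    "claude-3-5-sonnet": (3.00, 15.00),
    "claude-3-haiku": (0.25, 1.25),
    "claude-3-opus": (15.00, 75.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# A 10K-token prompt with a 1K-token reply on Sonnet:
print(round(estimate_cost("claude-3-5-sonnet", 10_000, 1_000), 4))  # 0.045
```

At these rates, the same request on Haiku would cost roughly a twelfth as much, which is why routing high-volume traffic to the cheaper tier is a common pattern.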
How large is Claude's context window?
Claude 3 models support a 200,000 token context window, one of the largest available. This is roughly 150,000 words, or about 500 pages of text. This makes Claude ideal for analyzing long documents, entire codebases, or lengthy conversations without losing context.
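The word and page figures above come from common rules of thumb (about 0.75 English words per token, about 300 words per page); actual ratios vary by language and content. A back-of-envelope sketch:

```python
# Rough heuristics: ~0.75 English words per token, ~300 words per page.
# Actual token-to-word ratios vary by language and content.
def tokens_to_words(tokens: int, words_per_token: float = 0.75) -> int:
    return int(tokens * words_per_token)

def words_to_pages(words: int, words_per_page: int = 300) -> int:
    return words // words_per_page

words = tokens_to_words(200_000)
print(words)                  # 150000
print(words_to_pages(words))  # 500
```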
Is Claude available on AWS or Google Cloud?
Yes. Claude is available via Amazon Bedrock (AWS) and Google Cloud Vertex AI in addition to Anthropic's direct API. Using Bedrock or Vertex AI means your data stays within your existing cloud infrastructure, which is important for enterprise compliance and data residency requirements.
Does Claude support function calling?
Yes. Claude supports tool use, the same concept as OpenAI's function calling. You define tools as JSON schemas and Claude decides when and how to call them. The request structure is similar to OpenAI's chat-style APIs, making it relatively straightforward to migrate existing integrations.
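A sketch of what a tool-use request body looks like for the Messages API. The `get_weather` tool name and its schema are hypothetical examples; the top-level fields (`model`, `max_tokens`, `tools`, `messages`) and the `name`/`description`/`input_schema` tool shape follow Anthropic's documented format.

```python
import json

# Hypothetical tool definition: name, description, and a JSON Schema
# describing the input Claude should produce when it calls the tool.
get_weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a given city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# Request body for POST https://api.anthropic.com/v1/messages
# (sent with x-api-key and anthropic-version headers).
request_body = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    "tools": [get_weather_tool],
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
}

print(json.dumps(request_body, indent=2))
```

When Claude decides to call the tool, the response contains a `tool_use` content block with the generated input; your code runs the tool and returns the result in a follow-up `tool_result` message.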
