DeepSeek Statistics and Insights 2026
Here is a look at DeepSeek's key data and statistics, along with insights to help you make sense of the company, its models, and its market impact.

DeepSeek Overview
| Category | Details |
|---|---|
| Founder | Liang Wenfeng |
| Founded On | May 1, 2023 |
| Headquarters | Hangzhou, China |
| Parent Company | High-Flyer (Hedge Fund) |
| Model Architecture | Mixture of Experts (MoE) |
| Total Parameters (V3/R1) | 671 Billion |
| Global Launch | January 20, 2025 |
Funding and Valuation
| Metric | Detail |
|---|---|
| Latest Valuation (Early 2025) | $3.4 billion |
| Total Venture Funding Raised | Over $1.1 billion |
| Series C Funding (Q1 2025) | $520 million |
| Series B Funding (Late 2024) | $310 million |
| Key Investors | Sequoia Capital, Lightspeed, Andreessen Horowitz, Accel, Index Ventures |
| Initial Funder | High-Flyer (Chinese hedge fund) |
| Company Status | Private; not listed on stock exchanges |
Key Insight: DeepSeek secured over $1.1 billion in funding and achieved a $3.4 billion valuation by early 2025, supported by prominent venture capital firms.
Key Events Timeline
| Date | Event |
|---|---|
| May 2023 | DeepSeek AI is founded in Hangzhou, China. |
| Nov 2023 | Releases its first open-source model, DeepSeek Coder. |
| Jan 10, 2025 | Launches its chatbot app on iOS and Android. |
| Jan 27, 2025 | Becomes the #1 most downloaded free app on the U.S. iOS App Store. |
| Jan 27, 2025 | Reports large-scale cyberattacks, limiting new user sign-ups. |
| Aug 2025 | Releases the powerful DeepSeek-V3.1 hybrid model. |
Resource Utilization Efficiency
| Metric | Detail |
|---|---|
| Training Cost (DeepSeek-V3) | $5.5 million |
| Comparative Training Cost | Approximately 1/18th the cost of building OpenAI’s GPT-4 |
| Training Resources Used | 2.788 million H800 GPU hours using around 2,000 Nvidia H800 chips |
| Architecture Efficiency | The Mixture-of-Experts (MoE) model has 671 billion total parameters but only 37 billion are activated for any given task. |
| On-Device Performance | Quantized versions support on-device inference with under 8GB of VRAM. |
Key Insight: DeepSeek trained V3 for roughly $5.5 million, about 1/18th of GPT-4's estimated training cost, demonstrating significantly greater resource efficiency than its competitors.
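As a quick sanity check, the efficiency figures above can be recomputed from the table's own numbers. The implied GPU-hour rate is a back-of-the-envelope estimate derived from the reported cost and GPU hours, not an official DeepSeek figure:

```python
# Efficiency figures from the table above.
TOTAL_PARAMS = 671e9        # total MoE parameters (V3/R1)
ACTIVE_PARAMS = 37e9        # parameters activated per task
TRAINING_COST_USD = 5.5e6   # reported DeepSeek-V3 training cost
GPU_HOURS = 2.788e6         # reported H800 GPU hours

# Fraction of the model that is active for any single task.
activation_ratio = ACTIVE_PARAMS / TOTAL_PARAMS

# Implied average cost per H800 GPU hour (estimate only).
implied_rate = TRAINING_COST_USD / GPU_HOURS

print(f"Activated share of parameters: {activation_ratio:.1%}")  # ~5.5%
print(f"Implied cost per H800 GPU hour: ${implied_rate:.2f}")    # ~$1.97
```

The activation ratio (~5.5%) is the core of the MoE efficiency story: only a small expert subset runs per task, so inference cost scales with 37B parameters rather than 671B.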
Market Impact on US Stock Market
| Company | Stock Drop | Market Cap Loss | Impact Date |
|---|---|---|---|
| Nvidia | -17% | $600 billion | January 27, 2025 |
| Microsoft | -2.14% | – | January 27, 2025 |
| Google (Alphabet) | -4% | – | January 27, 2025 |
| S&P 500 Tech Sector | -5.6% | – | January 27, 2025 |
| Nasdaq Composite | -3.4% | – | January 27, 2025 |
| Total US Market | – | $1 trillion | January 27, 2025 |
Key Insight: DeepSeek’s global launch in January 2025 triggered the largest single-day U.S. tech-sector decline since September 2020, wiping out roughly $1 trillion in U.S. market value.
User Demographics by Age and Platform
| Age Group | iOS Users | Android Users | Difference |
|---|---|---|---|
| 18-24 years | 38.7% | 44.9% | +6.2% Android |
| 25-34 years | 22.1% | 13.2% | +8.9% iOS |
| 35-49 years | 15.3% | 14.9% | +0.4% iOS |
| 50-64 years | 23.3% | 26.1% | +2.8% Android |
| 65+ years | 0.6% | 1.0% | +0.4% Android |
Key Insight: Young adults aged 18-24 represent the largest user segment across both platforms, with Android showing stronger adoption among younger demographics.
Website Traffic Statistics
| Metric | Value |
|---|---|
| Monthly Traffic | 436.2 million |
| Daily Average Traffic | 14.5 million |
| Desktop Traffic Share | 81.63% |
| Mobile Traffic Share | 18.37% |
| Average Session Duration | 4m 58s |
| Bounce Rate | 33.73% |
| Direct Search Traffic | 61.29% |
| Organic Search Traffic | 33.56% |
| Top Social Media Referral | YouTube (59.44%) |
Key Insight: Desktop users dominate DeepSeek’s traffic at 81.63%, suggesting the platform is primarily used for professional and development work rather than casual mobile consumption.
Cost Comparison Across AI Models
| Model | Input Cost (per M tokens) | Output Cost (per M tokens) | Total Cost (I/O combined) |
|---|---|---|---|
| DeepSeek-V3 | $0.14 | $0.28 | $0.42 |
| DeepSeek-R1 | $0.55 | $2.19 | $2.74 |
| OpenAI GPT-3.5 | $0.10 | – | $0.10 |
| OpenAI GPT-4 Mini | $0.07 | $3.00 | $3.07 |
| GPT-4 | $10.00 | $30.00 | $40.00 |
| Claude-3.5-Sonnet | $30.00 | $30.00 | $60.00 |
| Gemini 1.5 Pro | $2.50 | $10.00 | $12.50 |
| Meta Llama-3.5-70B | $2.00 | $2.00 | $4.00 |
Key Insight: DeepSeek-V3 offers the most cost-effective AI processing at $0.14 per million input tokens, representing a 214x cost advantage over Claude-3.5-Sonnet.
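To make the price table concrete, here is a small sketch that estimates total cost for a hypothetical workload. Prices are taken from the table above (USD per million tokens); the 3M-input/1M-output token split is purely an illustrative assumption:

```python
# Per-million-token prices (input, output) from the cost table above.
PRICES = {
    "DeepSeek-V3": (0.14, 0.28),
    "DeepSeek-R1": (0.55, 2.19),
    "GPT-4": (10.00, 30.00),
    "Claude-3.5-Sonnet": (30.00, 30.00),
    "Gemini 1.5 Pro": (2.50, 10.00),
}

def workload_cost(model, input_m_tokens, output_m_tokens):
    """Return USD cost for a workload measured in millions of tokens."""
    inp, out = PRICES[model]
    return input_m_tokens * inp + output_m_tokens * out

# Hypothetical workload: 3M input tokens, 1M output tokens.
for model in PRICES:
    print(f"{model}: ${workload_cost(model, 3, 1):.2f}")
# DeepSeek-V3 comes to $0.70 versus $120.00 for Claude-3.5-Sonnet.
```

On this assumed workload the gap widens or narrows with the input/output ratio, since output tokens are priced higher than input tokens on most models.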
Technical Specifications
| Specification | DeepSeek-V3 | DeepSeek-R1 | Details |
|---|---|---|---|
| Total Parameters | 671 Billion | 671 Billion | Same base architecture |
| Activated Parameters | 37 Billion | 37 Billion | Mixture of Experts |
| Context Length | 128,000 tokens | 128,000 tokens | Extended context window |
| Training Tokens | 14.8 Trillion | 14.8 Trillion | Extensive training data |
| GPU Hours (H800) | 2.788 Million | 2.788 Million | Efficient training |
| Development Cost | $5.5 Million | Under $6 Million | 1/18th of GPT-4 cost |
| Programming Languages | 80+ | 80+ | Multi-language support |
| Max Output Tokens | 8,000 | 8,000 | Single response limit |
Key Insight: DeepSeek achieved GPT-4 level performance using only $5.5 million in development costs, demonstrating remarkable cost efficiency in AI model development.
Countries and Organizations That Banned DeepSeek
| Country/Organization | Ban Type | Reason | Date |
|---|---|---|---|
| Italy | App Store Removal | Privacy Probe | January 2025 |
| Germany | App Store Block Request | Data Protection | June 2025 |
| Taiwan | Government Agencies | Security Concerns | 2025 |
| Australia | Government Agencies | Security Concerns | 2025 |
| South Korea | Key Ministries | Security Concerns | 2025 |
| US Congress | Usage Restriction | Security Concerns | 2025 |
| US Navy | Usage Restriction | Security Concerns | 2025 |
Key Insight: Based on the available data, Western countries and their allies are the only ones that have felt the need to restrict DeepSeek.
Performance Benchmarks vs OpenAI
| Benchmark | DeepSeek-R1 | OpenAI-o1-1217 | Winner | Margin |
|---|---|---|---|---|
| AIME 2024 | 79.8% | 79.2% | DeepSeek | +0.6% |
| MATH-500 | 97.3% | 96.4% | DeepSeek | +0.9% |
| LiveCodeBench | 65.9% | 63.4% | DeepSeek | +2.5% |
| SWE Verified | 49.2% | 48.9% | DeepSeek | +0.3% |
| Codeforces Rating | 2029 | 2061 | OpenAI | -32 |
| Codeforces Percentile | 96.3% | 96.6% | OpenAI | -0.3% |
| Aider-Polyglot | 53.3% | 61.7% | OpenAI | -8.4% |
Key Insight: DeepSeek-R1 outperforms OpenAI in 4 out of 7 benchmarks, particularly excelling in mathematical reasoning tasks while trailing in multilingual coding capabilities.
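The win tally in the insight above can be reproduced directly from the benchmark table. Scores are copied verbatim; higher is better for every metric listed:

```python
# (DeepSeek-R1 score, OpenAI-o1-1217 score) per benchmark, from the table.
benchmarks = {
    "AIME 2024": (79.8, 79.2),
    "MATH-500": (97.3, 96.4),
    "LiveCodeBench": (65.9, 63.4),
    "SWE Verified": (49.2, 48.9),
    "Codeforces Rating": (2029, 2061),
    "Codeforces Percentile": (96.3, 96.6),
    "Aider-Polyglot": (53.3, 61.7),
}

# Count the benchmarks where DeepSeek-R1's score is strictly higher.
deepseek_wins = sum(ds > oa for ds, oa in benchmarks.values())
print(f"DeepSeek-R1 wins {deepseek_wins} of {len(benchmarks)} benchmarks")  # 4 of 7
```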
Usage Metrics by Product (2025)
| Product | Usage Metric |
|---|---|
| DeepSeek LLM API | 5.7 billion API calls per month |
| DeepSeek-Coder | 1.9 billion code-generation queries in H1 2025 |
| DeepSeek-VL (Multimodal) | 980 million multimodal queries per month |
Key Insight: The general-purpose LLM API receives by far the highest traffic volume; its 5.7 billion monthly calls dwarf the combined monthly query volumes of the Coder and VL models.
Application Downloads by Country
| Country | Percentage of Downloads |
|---|---|
| China | 34% |
| India | 8% |
| Russia | 7% |
| United States | 6% |
| Pakistan | 4% |
| Brazil | 4% |
| Indonesia | 4% |
| France | 3% |
| United Kingdom | 3% |
| Other Countries | 27% |
Key Insight: China dominates application downloads at 34%, while Russia (7%) accounts for a larger share than the United States (6%), Pakistan, Brazil, or Indonesia (4% each).
Content Generation Quality Metrics
| Quality Category | Benchmark / Metric | Reported Score / Performance |
|---|---|---|
| General Reasoning & Knowledge | MMLU (Massive Multitask Language Understanding) | 90.8% (DeepSeek-R1) |
| General Reasoning & Knowledge | DROP (Discrete Reasoning Over Paragraphs) | 91.6% (DeepSeek-R1) |
| General Reasoning & Knowledge | ARC Challenge (AI2 Reasoning Challenge) | 80.1% (DeepSeek-Chat) |
| General Reasoning & Knowledge | GLUE Tasks (General Language Understanding Evaluation) | 92.7% Average F1-Score (DeepSeek LLM Models) |
| Mathematical Reasoning | MATH-500 | 97.3% (DeepSeek-R1) |
| Mathematical Reasoning | AIME-2024 | 79.8% (DeepSeek-R1) |
| Mathematical Reasoning | Quantitative Reasoning (NBC News Score) | 97 out of 100 (DeepSeek) |
| Coding | HumanEval | 85.6% (DeepSeek-Coder V2.1) |
| Coding | LiveCodeBench | 65.9% (DeepSeek-R1) |
| Coding | SWE-bench Verified | 49.2% (DeepSeek-R1) |
| Factual Accuracy | TruthfulQA | 64.3% (DeepSeek-Chat) |
| Multimodal (Vision & Language) | VQAv2 (Visual Question Answering v2) | 87.2% (DeepSeek-VL) |
| Multimodal (Vision & Language) | OCR Precision (Optical Character Recognition) | 92.1% Recognition Accuracy (DeepSeek-VL) |
| Information Retrieval | nDCG Score (Dense Retrieval) | 0.925 (DeepSeek-Embed) |
Key Insight: DeepSeek’s models demonstrate elite performance across diverse content types, achieving scores above 90% in general language understanding, mathematical problem-solving, and optical character recognition benchmarks.
Code Generation Performance
| Benchmark / Metric | Model | Reported Score |
|---|---|---|
| HumanEval (Python Code Synthesis) | DeepSeek-Coder V2.1 | 85.6% |
| LiveCodeBench (General Coding) | DeepSeek-R1 | 65.9% |
| SWE-bench Verified (Software Engineering) | DeepSeek-R1 | 49.2% |
| Codeforces (Competitive Programming) | DeepSeek-R1 | 96.3% (Percentile) |
| Developer Productivity (Enterprise Survey) | DeepSeek-Coder | 82% of developers reported higher productivity |
Key Insight: DeepSeek’s coding models excel in both standardized benchmarks, like HumanEval, and in practical application, with a significant majority of enterprise developers reporting increased productivity.
Language Support Matrix
| Capability | Supported Languages |
|---|---|
| Natural Languages (Multimodal) | The DeepSeek-VL model supports 12 languages. |
| Localized Support | The platform has expanded to 37 countries with localized support for languages including Arabic, Swahili, and Vietnamese. |
| Programming Languages (General) | DeepSeek supports over 80 programming languages. |
| Programming Languages (Coder V2.1) | The DeepSeek-Coder V2.1 model specifically supports 32 programming languages, including COBOL and Rust. |
Key Insight: The platform provides broad language support, covering dozens of programming languages for developers.
Enterprise Integration Metrics
| Metric | Figure / Detail |
|---|---|
| Integrated Enterprise Accounts | Over 26,000 |
| Enterprise Suite Deployments | Deployed in over 3,200 organizations |
| Monthly API Calls | 5.7 billion |
| Onboarding Time Reduction | Developer onboarding time was cut by 42% due to improved documentation and tools in Q1 2025. |
Key Insight: DeepSeek has demonstrated successful enterprise integration at scale, evidenced by billions of monthly API calls and deployment across thousands of organizations.
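For a sense of scale, the monthly API-call figure above implies the following average request rate, assuming a 30-day month and perfectly uniform load (a simplification; real traffic is bursty):

```python
# 5.7 billion API calls per month, from the enterprise metrics table.
MONTHLY_CALLS = 5.7e9
SECONDS_PER_MONTH = 30 * 24 * 3600  # 30-day month assumption

calls_per_second = MONTHLY_CALLS / SECONDS_PER_MONTH
print(f"Average API calls per second: {calls_per_second:,.0f}")  # ~2,199
```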
Open-Source Ecosystem Contributions
| Contribution Area | Metric / Detail |
|---|---|
| Community Engagement | GitHub repository exceeded 170,000 stars, becoming the #1 most-starred AI project in 2025 |
| Developer Collaboration | Over 60,000 unique contributors have participated in DeepSeek projects |
| Public Datasets | Released 4 major datasets in 2025, including a 2.1 billion token multilingual corpus |
| Model Accessibility | Open LLM weight archives were downloaded 11.2 million times in the first five months of 2025 |
| Academic Impact | DeepSeek tools or datasets were cited in 38% of all new AI research papers on Arxiv in Q1 2025 |
Key Insight: DeepSeek’s impact extends beyond its products to the broader AI community, where it leads in open-source engagement, data sharing, and academic influence.
References
DataGlobeHub uses the best available data sources to support each publication. We prioritize reputable sources, such as government data, authoritative and expert sources, and well-researched publications. When citing our sources, we provide the report title followed by the publication name; where a report title is not applicable, we provide just the publication name.
- Daily active users of DeepSeek – Statista
- DeepSeek explained – TechTarget
- DeepSeek AI User Statistics and Facts – GrabOn
- DeepSeek usage statistics – BytePlus
- DeepSeek AI Statistics – Sq Magazine
- 50 Latest DeepSeek Statistics – Thunderbit



