| Month | Model Name | Company | URL | Description |
|---|---|---|---|---|
April 2026 | Claude Opus 4.7 | Anthropic | https://www.anthropic.com/news/claude-opus-4-7 | most capable generally available model; scores 87.6% on SWE-bench Verified and 64.3% on SWE-bench Pro; excels at long-horizon agentic tasks, complex multi-step coding, and professional knowledge work; adds high-resolution vision (2,576px); new adaptive thinking and configurable effort levels, including xhigh mode; 1M toke... |
April 2026 | Gemma 4 | Google DeepMind | https://deepmind.google/models/gemma/ | four open-weight variants (E2B through 31B Dense) released April 2 under Apache 2.0; 31B Dense scores 80% on LiveCodeBench v6 and 89.2% on AIME 2026; natively multimodal, with function calling and agentic workflow support; Codeforces Elo jumped from 110 in Gemma 3 to 2,150 |
April 2026 | Meta Muse Spark | Meta | https://ai.meta.com/muse-spark/ | released April 8 from Meta Superintelligence Labs; Meta's first proprietary (non-open-source) model; scores 52 on the Artificial Analysis Intelligence Index (vs. Llama 4 Maverick's 18); leads all models on CharXiv Reasoning at 86.4%; available free on meta.ai in Instant and Thinking modes |
March 2026 | GPT-5.4 | OpenAI | https://openai.com/index/introducing-gpt-5-4/ | most capable and efficient frontier model for professional work; first mainline model to incorporate GPT-5.3-Codex's coding capabilities; adds native computer use (75% on OSWorld, surpassing the 72.4% human expert baseline); 1M token context window; scores 83% on GDPval and ~80% on SWE-bench Verified; rolled out March 5 |
March 2026 | GPT-5.4 mini & nano | OpenAI | https://openai.com/index/introducing-gpt-5-4-mini-and-nano/ | smaller, efficient variants released March 17; mini approaches GPT-5.4-level coding performance at ~6x lower cost; nano targets classification, data extraction, and lightweight coding subagents; both optimized for fast iteration in coding workflows |
March 2026 | NVIDIA Nemotron 3 Super | NVIDIA | https://blogs.nvidia.com/blog/nemotron-3-super-agentic-ai/ | 120B-parameter open-weight hybrid Mamba-Transformer MoE (12B active per token) released at GTC March 11; sets a new open-weight record of 60.47% on SWE-bench Verified; 1M token context window; 5x higher throughput than the prior generation; fully open: weights, datasets, and training recipes released |
March 2026 | Mistral Small 4 | Mistral AI | https://mistral.ai/news/mistral-small-4 | 119B-parameter MoE (6B active per token) released March 16; unifies Magistral (reasoning), Pixtral (multimodal), Devstral (agentic coding), and Mistral Small (instruct) into a single model; configurable reasoning effort per request; 256K context window; Apache 2.0 license |
March 2026 | GLM-5.1 | Z.ai (Zhipu AI) | https://huggingface.co/zai-org/GLM-5.1 | coding-optimized iteration of GLM-5 (744B MoE, 40B active) released March 27; achieves 94.6% of Claude Opus 4.6's coding benchmark performance; MIT license; GLM Coding Plan starts at $3/month |
February 2026 | Gemini 3.1 Pro | Google DeepMind | https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-1-pro/ | upgraded core intelligence for the Gemini 3 series; scores 77.1% on ARC-AGI-2 (more than double Gemini 3 Pro) and 80.6% on SWE-bench Verified; 1M token context window with 65K output; ranked #1 on the Artificial Analysis Intelligence Index at launch |
February 2026 | Claude Sonnet 4.6 | Anthropic | https://www.anthropic.com/news/claude-sonnet-4-6 | leads the GDPval-AA Elo benchmark for real expert-level work with 1,633 points; preferred over the previous Sonnet 70% of the time in Claude Code testing; 1M token context window (beta); default model on Claude.ai free and pro plans; powers GitHub Copilot's coding agent |
February 2026 | GPT-5.3-Codex-Spark | OpenAI | https://openai.com/index/introducing-gpt-5-3-codex-spark/ | research preview of OpenAI's first model designed for real-time, ultra-fast coding; powered by the Cerebras Wafer Scale Engine 3, delivering more than 1,000 tokens per second; 58.4% accuracy on Terminal-Bench 2.0; 128K context window |
February 2026 | Qwen3.5 | Alibaba Cloud | https://qwen.ai/blog?id=qwen3.5 | 397B-parameter native vision-language model with only 17B active per forward pass via hybrid linear attention and sparse MoE; 76.4% on SWE-bench Verified, 52.5% on Terminal-Bench 2; 1M context window; multilingual support expanded from 119 to 201 languages; 8.6x decoding throughput vs. the prior generation |
February 2026 | Zhipu AI GLM-5 | Z.ai (Zhipu AI) | https://docs.z.ai/guides/llm/glm-5 | flagship MoE model with 745B total parameters (44B active) designed for agentic engineering; achieves SOTA performance among open-source models, narrowing the gap with Claude Opus 4.5; 200K token context window; MIT license; trained on Huawei Ascend infrastructure |
February 2026 | MiniMax 2.5 | MiniMax | https://platform.minimax.io/docs/guides/models-intro | peak-performance model optimized for end-to-end developer workflows, including multi-file edits and test-validated repairs; 80.2% on SWE-bench; 37% faster than comparable frontier models; 200K context window; thinking mode for complex logic |
February 2026 | Claude Opus 4.6 | Anthropic | https://www.anthropic.com/news/claude-opus-4-6 | improved coding skills, including better planning, sustained agentic tasks, operation in larger codebases, and enhanced code review and debugging; first Opus-class model with a 1M token context window (beta); SOTA on Terminal-Bench 2.0, Humanity's Last Exam, GDPval-AA, and BrowseComp |
February 2026 | GPT-5.3-Codex | OpenAI | https://openai.com/index/introducing-gpt-5-3-codex/ | most capable agentic coding model, combining GPT-5.2-Codex coding performance with GPT-5.2 reasoning capabilities in a single model that's 25% faster; handles long-running tasks involving research, tool use, and complex execution; first OpenAI model to help create itself |
January 2026 | SERA-32B | Ai2 | https://huggingface.co/allenai/SERA-32B | first model in Ai2's Open Coding Agents series; achieves 49.5% on SWE-bench Verified; trained using Soft Verified Generation (SVG), 26x cheaper than RL and 57x cheaper than previous synthetic-data methods; total training cost approximately $2,000 (40 GPU-days) |
January 2026 | Kimi K2.5 | Moonshot AI | https://huggingface.co/moonshotai/Kimi-K2.5 | open-source visual agentic intelligence; global SOTA on agentic benchmarks: HLE full set (50.2%), BrowseComp (74.9%); open-source SOTA: MMMU Pro (78.5%), VideoMMMU (86.6%), SWE-bench Verified (76.8%); Agent Swarm (beta): up to 100 sub-agents and 1,500 tool calls, 4.5× faster vs. single-agent |
January 2026 | GLM-4.7-Flash | Z.ai | https://huggingface.co/zai-org/GLM-4.7-Flash | local coding and agentic assistant setting a new standard for the 30B class, balancing high performance with efficiency; also recommended for creative writing, translation, long-context tasks, and roleplay |
December 2025 | M2.1 | MiniMax | https://huggingface.co/MiniMaxAI/MiniMax-M2.1 | open-source AI model with 10 billion activated parameters (230 billion total); scores 74.0 on SWE-bench Verified and 91.5 on VIBE-Web; excels in multi-language programming (Rust, Java, Go, C++, TypeScript) and UI development |
December 2025 | GLM-4.7 | Z.ai | https://huggingface.co/zai-org/GLM-4.7 | optimized for AI coding assistance; major improvements over GLM-4.6, including a 5.8% gain on SWE-bench and 12.9% on multilingual coding; improvements in UI/webpage generation, tool usage, and complex reasoning |
December 2025 | GPT-5.2-Codex | OpenAI | https://openai.com/index/introducing-gpt-5-2-codex/ | most advanced agentic coding model for complex, real-world software engineering; optimized version of GPT‑5.2 for agentic coding in Codex; improvements on long-horizon work through context compaction, stronger performance on refactors and migrations; significantly stronger cybersecurity capabilities |
December 2025 | Gemini 3 Flash | Google | https://blog.google/products/gemini/gemini-3-flash/ | delivers high-speed, pro-grade reasoning and outperforms even the Pro model on coding benchmarks; ideal for low-latency agentic workflows and complex multimodal tasks like video analysis and real-time data extraction |
December 2025 | GPT‑5.2 Thinking | OpenAI | https://openai.com/index/introducing-gpt-5-2/ | sets a new state of the art of 55.6% on SWE-bench Pro; can more reliably debug production code, implement feature requests, refactor large codebases, and ship fixes end-to-end with less manual intervention |
December 2025 | Devstral 2 | Mistral AI | https://mistral.ai/news/devstral-2-vibe-cli | next-generation coding model family in two sizes: Devstral 2 (123B) and Devstral Small 2 (24B); sets the open SOTA for code agents; Devstral 2 ships under a modified MIT license, Devstral Small 2 under Apache 2.0 |
December 2025 | rnj-1-instruct | Essential AI | https://huggingface.co/EssentialAI/rnj-1-instruct | trained from scratch and optimized for code and STEM, with capabilities on par with SOTA open-weight models; strong agentic capabilities (e.g., inside agentic frameworks like mini-SWE-agent); excels at tool calling |
November 2025 | Claude Opus 4.5 | Anthropic | https://www.anthropic.com/news/claude-opus-4-5 | intelligent, efficient, and the best model in the world for coding agents and computer use; meaningfully better at everyday tasks like deep research and working with slides and spreadsheets |
November 2025 | GPT-5.1-Codex-Max | OpenAI | https://openai.com/index/gpt-5-1-codex-max/ | update to the foundational reasoning model, trained on agentic tasks across software engineering, math, research, and more; faster, more intelligent, and more token-efficient |
November 2025 | Gemini 3 | Google | https://blog.google/technology/developers/gemini-3-developers/ | most intelligent model, delivering unparalleled results across every major AI benchmark; surpasses 2.5 Pro at coding, mastering both agentic workflows and complex zero-shot tasks |
November 2025 | Doubao-Seed-Code | ByteDance Volcengine | https://news.aibase.com/news/22712 | achieves breakthroughs in performance, price, and migration cost; deeply integrated with the TRAE development environment |
November 2025 | GPT-5-Codex-Mini | OpenAI | https://x.com/OpenAIDevs/status/1986861736041853368?s=20 | allows roughly 4x more usage than GPT-5-Codex at a slight capability tradeoff due to the more compact model |
November 2025 | Mercury Coder | Inception Labs | https://docs.inceptionlabs.ai/get-started/models | dLLM optimized to accelerate coding workflows; streaming, tool use, and structured output with a 128K context window |
October 2025 | Composer | Cursor | https://cursor.com/blog/2-0 | 4x faster than similarly intelligent models and built for low-latency agentic coding |
October 2025 | SWE-1.5 | Windsurf Cognition | https://cognition.ai/blog/swe-1-5 | fast-agent, frontier-size model with hundreds of billions of parameters that achieves near-SOTA coding performance; 6x faster than Haiku 4.5 and 13x faster than Sonnet 4.5 |
October 2025 | CoDA-1.7B | Salesforce AI Research | https://huggingface.co/Salesforce/CoDA-v0-Base | diffusion-based language model designed for powerful code generation and bidirectional context understanding |
October 2025 | KAT-Dev-72B-Exp | Kawaipilot | https://huggingface.co/Kwaipilot/KAT-Dev-72B-Exp | an open-source 72B-parameter model for software engineering tasks; achieves 74.6% accuracy on SWE-Bench Verified when evaluated strictly with the SWE-agent scaffold |
September 2025 | Code World Model (CWM) | AI at Meta | https://huggingface.co/facebook/cwm | LLM for code generation and reasoning trained to better represent and reason how code and commands affect the state of a program or system |
September 2025 | DeepSeek-V3.2-Exp | DeepSeek | https://huggingface.co/deepseek-ai/DeepSeek-V3.2-Exp | experimental sparse-attention upgrade that halves inference cost while retaining strong code-generation and long-context reasoning |
September 2025 | GLM-4.6 | Z.ai | https://huggingface.co/zai-org/GLM-4.6 | features a longer context window, superior coding performance, advanced reasoning, more capable agents, and refined writing versus GLM-4.5 |
September 2025 | Claude Sonnet 4.5 | Anthropic | https://www.anthropic.com/news/claude-sonnet-4-5 | the strongest model for building complex agents and the best model at using computers; shows substantial gains on tests of reasoning and math |
September 2025 | Qwen3-Max-Instruct | Alibaba Cloud | https://qwen.ai/blog?id=241398b9cd6353de490b0f82806c7848c5d2777d&from=research.latest-advancements-list | the official release further elevates its capabilities — particularly in coding and agent performance |
September 2025 | GPT‑5-Codex | OpenAI | https://openai.com/index/introducing-upgrades-to-codex/ | a version of GPT‑5 further optimized for agentic coding in Codex and trained with a focus on real-world software engineering work |
September 2025 | Kimi K2-Instruct-0905 | Moonshot AI | https://huggingface.co/moonshotai/Kimi-K2-Instruct-0905 | updated SOTA model with improved agentic and frontend capabilities and increased context length |
August 2025 | GPT-5 | OpenAI | https://platform.openai.com/docs/models | flagship model |
August 2025 | GPT-5-mini | OpenAI | https://platform.openai.com/docs/models | fast/cost-efficient |
August 2025 | GPT-5-nano | OpenAI | https://platform.openai.com/docs/models | faster/cost-efficient |
August 2025 | Claude Opus 4.1 | Anthropic | https://www.anthropic.com/claude/opus | a drop-in replacement for Opus 4 |
August 2025 | Mistral Medium 3.1 | Mistral AI | https://x.com/MistralAI/status/1955316715417382979 | aka Mistral-Medium-2508 - enterprise-grade model excels in coding tasks |
August 2025 | Grok Code Fast 1 | xAI | https://x.ai/news/grok-code-fast-1 | speedy and economical reasoning model that excels at agentic coding, efficient code generation, and execution |
July 2025 | Qwen3-Coder | Alibaba Cloud | https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct | agentic code model |
July 2025 | Qwen3-Coder-Flash | Alibaba Cloud | https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct | streamlined non-thinking agentic code model |
July 2025 | Kimi K2 | Moonshot AI | https://moonshotai.github.io/Kimi-K2/ | 1T-param MoE |
July 2025 | GLM-4.5 | Z.ai | https://z.ai/blog/glm-4.5 | An open-source LLM designed for intelligent agents |
July 2025 | Codestral 25.08 | Mistral AI | https://mistral.ai/news/codestral | code model for high-precision fill-in-the-middle (FIM) completion |
July 2025 | Devstral Medium 2507 | Mistral × All Hands AI | https://mistral.ai/news/devstral-2507 | high-quality and cost-effective model |
July 2025 | Devstral Small 1.1 2507 | Mistral × All Hands AI | https://mistral.ai/news/devstral-2507 | agentic model |
July 2025 | Grok 4 | xAI | https://x.ai/grok | trained with reinforcement learning for native tool use, including code interpreters, making it highly capable for coding and advanced reasoning tasks |
June 2025 | Gemini 2.5 Pro | Google DeepMind | https://deepmind.google/models/gemini/pro/ | flagship model |
June 2025 | Gemini 2.5 Flash | Google DeepMind | https://deepmind.google/models/gemini/flash/ | fast/cost-efficient with thinking capabilities |
May 2025 | Claude Opus 4 | Anthropic | https://www.anthropic.com/claude/opus | pushes the frontier in coding, agentic search, and creative writing |
May 2025 | Claude Sonnet 4 | Anthropic | https://www.anthropic.com/claude/sonnet | improves on Claude Sonnet 3.7 across a variety of areas, especially coding |
May 2025 | DeepSeek-R1-0528 | DeepSeek | https://huggingface.co/deepseek-ai/DeepSeek-R1-0528 | OSS reasoning model |
April 2025 | o3 | OpenAI | https://platform.openai.com/docs/models | preview reasoning model |
April 2025 | o4-mini | OpenAI | https://platform.openai.com/docs/models | compact model |
April 2025 | GPT-4.1 | OpenAI | https://platform.openai.com/docs/models | flagship model with 1M token context window |
April 2025 | Llama 4 Maverick | Meta | https://llama.meta.com/get-started/ | code-tuned model |
April 2025 | Llama 4 Scout | Meta | https://llama.meta.com/get-started/ | open-weight model |
April 2025 | Mellum | JetBrains | https://huggingface.co/JetBrains/Mellum-4b-base | 4B-param OSS model |
March 2025 | DeepSeek-V3-0324 | DeepSeek | https://huggingface.co/deepseek-ai/DeepSeek-V3-0324 | improved V3 version |
February 2025 | Gemini 2.0 Flash | Google DeepMind | https://blog.google/technology/google-deepmind/gemini-model-updates-february-2025/ | multimodal for high-volume high-frequency tasks |
February 2025 | Claude 3.7 Sonnet | Anthropic | https://www.anthropic.com/news/claude-3-7-sonnet | first hybrid reasoning model and state-of-the-art for coding |
February 2025 | Grok 3 | xAI | https://x.ai/grok | coding capable model |
SOURCE | joylarkin/AI-Coding-Landscape | https://github.com/joylarkin/AI-Coding-Landscape | The 2026 AI Coding Landscape - Coding agents, CLIs, IDEs, AI app builders, devtools, and more | |
# Dataset Card for 2026 AI Coding Models
- Last Updated: 20 April 2026
- Curated By: Joy Larkin
- Language(s) (NLP): English
- License: MIT
- Repository: https://github.com/joylarkin/AI-Coding-Landscape
- Blog: https://cleverhack.com/ai-coding-landscape
## Dataset Description
A CSV file of AI coding models released in 2025 and 2026.
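A minimal sketch of reading the dataset with Python's standard library, assuming the five-column schema shown in the table above (Month, Model Name, Company, URL, Description). The inline sample row and the local filename mentioned in the comment are illustrative, not part of the dataset:

```python
import csv
import io

# One sample row in the dataset's five-column schema.
sample = """Month,Model Name,Company,URL,Description
March 2026,GPT-5.4,OpenAI,https://openai.com/index/introducing-gpt-5-4/,most capable and efficient frontier model for professional work
"""

# Parse rows into dicts keyed by column name. For the real file, replace
# io.StringIO(sample) with open("models.csv") (hypothetical filename).
rows = list(csv.DictReader(io.StringIO(sample)))
print(rows[0]["Model Name"])  # -> GPT-5.4
```

`csv.DictReader` uses the first line as the header, so each row is addressable by column name rather than positional index.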
- Downloads last month: 123