A Benchmark Built for the Real World: Firebolt Fires Up the Numbers with FireScale
Benchmarks. We’ve all seen them. They’re splashy, often one-sided, and usually prompt one big question: “Should I believe these results?”
That’s exactly what I asked Cole Bowden (Developer Advocate at Firebolt) and Igor Stanko (Chief Product Officer at Firebolt) when they joined me on The Ravit Show this week. Their answer? "Don't take our word for it, run it yourself." And with FireScale, you actually can.
So… why FireScale?
Igor and Cole walked me through the thinking behind FireScale, and I’ll be honest, I was impressed by the rigor. Unlike many industry-standard benchmarks that run simplified queries in isolation, FireScale is modeled on real production workloads seen across Firebolt customers.
It uses an extended AMPLab dataset to simulate multidimensional queries with joins, CTEs, window functions, and subqueries. And it tests both single-query latency and high-concurrency throughput, just like real customers experience. Oh, and it's 100% reproducible via GitHub.
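To make that concrete, here is a minimal sketch of the query shape FireScale exercises: a CTE feeding a join, a window function, and a subquery in a single statement. The tables and data are made up, and it runs against Python's built-in sqlite3 purely so the example is self-contained; the actual suite targets Firebolt and the other warehouses over the extended AMPLab dataset.

```python
# Illustrative only: a hypothetical query combining a CTE, a join, a window
# function, and a subquery -- the structural features FireScale exercises.
# Runs on in-memory SQLite so the sketch is executable end to end.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE page_views (user_id INT, url TEXT, duration_ms INT, day TEXT);
    CREATE TABLE users (user_id INT, country TEXT);
    INSERT INTO page_views VALUES (1, '/home', 120, '2024-01-01'),
                                  (1, '/docs', 340, '2024-01-01'),
                                  (2, '/home', 90,  '2024-01-02');
    INSERT INTO users VALUES (1, 'US'), (2, 'DE');
""")

query = """
WITH daily AS (                                        -- CTE
    SELECT user_id, day, SUM(duration_ms) AS total_ms
    FROM page_views
    GROUP BY user_id, day
)
SELECT u.country,
       d.day,
       d.total_ms,
       RANK() OVER (PARTITION BY d.day
                    ORDER BY d.total_ms DESC) AS rank_in_day   -- window function
FROM daily d
JOIN users u ON u.user_id = d.user_id                  -- join
WHERE d.total_ms > (SELECT AVG(duration_ms) FROM page_views)   -- subquery
ORDER BY d.day, rank_in_day;
"""

for row in conn.execute(query):
    print(row)
```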
The most interesting part? FireScale doesn’t just focus on synthetic performance. It also measures structural and operational complexity: concurrency, data volume, and cost-efficiency under pressure. That’s what makes it both realistic and different.
Let’s Talk Numbers
Once we dug into the results, one thing became clear: Firebolt isn’t just claiming better performance, they’re showing it, and the numbers speak for themselves.
Time vs. Cost

All Firebolt engines delivered the fastest completion times at the lowest cost, leaving Snowflake, Redshift, and BigQuery far behind. For companies building data-intensive AI applications, this means faster model training, quicker feedback loops, and the ability to serve users with fresher, more relevant insights without burning through compute budgets.
Workload Completion Times

Even Firebolt's smallest engine beat all Snowflake, Redshift, and BigQuery configurations on raw query speed. Firebolt was 3.7x faster than Snowflake, 6x-16x faster than Redshift, and 6.5x faster than BigQuery. This level of performance at smaller footprints gives teams flexibility, whether you’re a fast-scaling startup or an enterprise trying to optimize costs while moving quickly.
Lowest Cost per Workload

And when it comes to cost-efficiency? Firebolt showed 8x better price-performance than Snowflake, 18x over Redshift, and 90x over BigQuery. That’s not just impressive, it’s transformative for companies that want to scale their AI workloads without infrastructure spend growing at the same rate.
Snowflake 37.5x More Expensive

To match Firebolt's performance, Snowflake turns out to be 37.5x more expensive. Yep, you read that right. For teams managing large data volumes or AI inference at scale, this cost difference could be the line between sustainable growth and ballooning budgets.
Concurrency That Scales
When I asked about multi-user performance, Igor showed me the Concurrency Run results, and they were just as compelling.
Higher Concurrency Throughput

For equivalently priced configurations, Firebolt achieved ~1,700 QPS, 5.5x higher than Snowflake and 10x higher than Redshift. That means companies can support more simultaneous AI agents, end users, or analytical queries without degrading performance.
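For readers curious what a throughput measurement like that involves, here is a minimal sketch of measuring QPS under concurrency in Python. It is not FireScale's actual harness; run_query is a hypothetical placeholder you would swap for a real client call against your warehouse.

```python
# A minimal sketch of measuring queries-per-second (QPS) under concurrency.
# Not FireScale's harness: `run_query` is a hypothetical placeholder.
import time
from concurrent.futures import ThreadPoolExecutor

def run_query() -> None:
    # Placeholder: replace with a real client call, e.g. cursor.execute(sql).
    time.sleep(0.01)  # pretend the query takes ~10 ms

def measure_qps(concurrency: int, total_queries: int) -> float:
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(run_query) for _ in range(total_queries)]
        for f in futures:
            f.result()  # propagate any query errors
    elapsed = time.perf_counter() - start
    return total_queries / elapsed

if __name__ == "__main__":
    for workers in (8, 32, 128):
        print(f"{workers:>4} workers -> {measure_qps(workers, 2000):7.0f} QPS")
```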
Firebolt Concurrency Scaling

Firebolt also showed near-linear scaling when adding clusters, delivering 2,500 QPS at 120ms latency. While Snowflake also scaled near-linearly, it topped out at 640 QPS, just a quarter of Firebolt’s throughput, whereas Redshift failed to demonstrate any significant gains in concurrency. Firebolt makes it much easier to plan for growth: need to double your QPS? Just scale up or scale out your Firebolt engine while keeping latency low and predictable.
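If you want to turn that near-linear scaling into a quick planning estimate, back-of-the-envelope arithmetic is enough. The per-cluster QPS figure below is a hypothetical example, not a benchmark result, and near-linear scaling means the estimate only holds approximately.

```python
# Rough capacity planning under an assumed near-linear scaling model:
# clusters needed ~= target QPS / QPS of a single cluster.
import math

def clusters_needed(target_qps: float, qps_per_cluster: float) -> int:
    return math.ceil(target_qps / qps_per_cluster)

# Hypothetical numbers for illustration only.
print(clusters_needed(target_qps=2500, qps_per_cluster=850))  # -> 3
```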
In today’s world, where AI agents and user-facing data apps are creating spikes in query demand, this kind of concurrency handling isn’t a nice-to-have, it’s critical.
A Real Demo, Not Just a Slide
Cole didn’t just talk about the benchmark, he ran it live. The FireScale demo made it clear how accessible and open this benchmark really is. Anyone can run it themselves using the public GitHub repo and see how their own data and analytics workloads fare.
From query execution to concurrency testing, FireScale gives you a transparent, flexible framework for evaluating performance under real-world conditions. That’s not something you see every day in benchmark land.
My Takeaway
If you're building AI-driven, high-concurrency data apps, benchmarks like FireScale matter. They go beyond theoretical testing and into practical performance, showing exactly what modern data teams need to evaluate real solutions.
The Firebolt team didn’t just build a better benchmark, they’re inviting the entire community to test it, challenge it, and make it better.
If you missed the episode, check it out here. You’ll see why Firebolt is making waves with FireScale.
🚀 Try Firebolt and Explore FireScale
Want to experience these results for yourself? Sign up for Firebolt's free trial and see how it performs on your own workloads. And here are some more resources to dig deeper into FireScale, happy benchmark testing!
Read the launch blog: Introducing FireScale - A Benchmark for High Performance and High Concurrency Analytics Workloads
Dive into the technical breakdown: The Process of Running FireScale Benchmarks
Explore the GitHub repo
🔍 Stay Ahead in AI & Data! Join 137K+ Data & AI professionals who stay updated with the latest trends, insights, and innovations.
📢 Want to sponsor or support this newsletter? Reach out and let's collaborate! 🚀
Best,
Ravit Jain
Founder & Host of The Ravit Show