You're three weeks into a new service. Mobile wants a single endpoint that returns the user, their last five orders, and the shipping status for each. Your existing REST API would need three round trips. A teammate suggests GraphQL. Another mentions gRPC because there's a new internal service in Go. You've heard all three names a thousand times, but when you actually have to pick one, the differences matter — and they're not what most "vs" articles tell you.
Here's the honest version: REST, GraphQL, and gRPC weren't designed to compete. They were designed to solve different problems, and they each do their job well when matched to the right shape. Picking the wrong one isn't a disaster, but you'll feel friction every time you ship a feature.
What Each One Actually Is
REST isn't a protocol. It's an architectural style layered on HTTP — resources at URLs, verbs (GET, POST, PUT, DELETE) for actions, status codes for outcomes. There's no schema you have to define; you just expose endpoints and document them. Most "REST" APIs in the wild are really HTTP+JSON APIs that follow REST loosely. See MDN's HTTP overview for the foundation.
GraphQL is a query language with a strongly typed schema. The client sends a query describing exactly the fields it wants, and the server returns precisely that shape — no more, no less. The whole thing usually lives behind a single POST /graphql endpoint. The GraphQL specification governs the language; everything else (subscriptions, federation) is built on top.
gRPC is Google's RPC framework: you define services and messages in Protocol Buffers (.proto files), generate client and server stubs in your language of choice, and the framework handles transport (HTTP/2 by default), serialization (binary protobuf), and streaming. The official gRPC introduction is a fast read. Unlike REST and GraphQL, the wire format isn't human-readable.
A Sample Query, Three Ways
The clearest way to feel the difference is to look at the same operation in each style. Pretend we want a user with their two most recent orders.
REST, with the round-trip overhead:
GET /users/42 HTTP/1.1
Accept: application/json

# response
{
  "id": 42,
  "name": "Alex",
  "email": "alex@example.com"
}

GET /users/42/orders?limit=2 HTTP/1.1
# returns array of order summaries

GET /orders/981/items HTTP/1.1
# returns items for one specific order
Three calls, three responses, lots of JSON the client doesn't need. The client sees a clean resource model but pays in latency.
GraphQL collapses it into one request:
query UserWithOrders($id: ID!) {
  user(id: $id) {
    name
    email
    orders(limit: 2) {
      id
      total
      items {
        sku
        quantity
      }
    }
  }
}
The response mirrors the query — same shape, only the requested fields. One round trip, no overfetching. The cost shows up on the server: someone has to write resolvers and avoid N+1 queries (DataLoader is the usual answer).
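The batching idea behind DataLoader can be sketched in a few lines. This is a simplified, synchronous illustration of the pattern — collect every ID requested during one resolver pass, then issue a single batched lookup — not the real DataLoader API, and all names here are made up:

```python
# Minimal sketch of the DataLoader batching pattern: queue IDs as resolvers
# ask for them, then resolve them all with one batched fetch.
class AddressLoader:
    def __init__(self, batch_fetch):
        self.batch_fetch = batch_fetch  # one query for many IDs
        self.pending = []

    def load(self, user_id):
        self.pending.append(user_id)

    def dispatch(self):
        # One round trip instead of N: fetch every pending ID at once.
        ids = list(dict.fromkeys(self.pending))  # dedupe, preserve order
        results = self.batch_fetch(ids)
        self.pending.clear()
        return results

# Stand-in for a database call: one query regardless of how many IDs.
def fetch_addresses(ids):
    db = {1: "12 Elm St", 2: "9 Oak Ave", 3: "4 Pine Rd"}
    return {i: db[i] for i in ids}

loader = AddressLoader(fetch_addresses)
for uid in [1, 2, 2, 3]:       # each resolver asks individually...
    loader.load(uid)
addresses = loader.dispatch()  # ...but only one "query" runs
```

Real implementations do this per request tick (so a single GraphQL operation batches automatically), but the shape of the win is the same: N loads collapse into one fetch.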
gRPC takes a different posture. You declare the shape ahead of time:
syntax = "proto3";

service UserService {
  rpc GetUserWithOrders (UserRequest) returns (UserWithOrders);
}

message UserRequest {
  int32 id = 1;
  int32 order_limit = 2;
}

message UserWithOrders {
  int32 id = 1;
  string name = 2;
  string email = 3;
  repeated Order orders = 4;
}

message Order {
  int32 id = 1;
  double total = 2;
  repeated Item items = 3;
}
A client call looks like a normal function in your language — client.GetUserWithOrders({id: 42, order_limit: 2}) — because the codegen made it one. The wire format is binary, typically 30–50% smaller than equivalent JSON, and HTTP/2 multiplexing means many calls share one connection.
Network Footprint and Latency
This is where intuition fails most often. People assume "GraphQL = fast" or "gRPC = always smaller." The reality is more nuanced.
REST is verbose on the wire (JSON, often pretty-printed in dev, headers on every call) but every layer of the internet is optimized for it. Caching at the CDN, browser, and proxy level just works because URLs are cache keys and GET is idempotent. For public APIs serving cacheable reads, REST routinely beats both alternatives in real-world latency.
GraphQL eliminates round trips and overfetching, but its single-endpoint, POST-based model breaks HTTP caching by default. You either give caching up, adopt persisted queries (hash-keyed GET requests), or stand up a GraphQL-aware cache like Apollo's. The wire is still JSON, so payload sizes are similar to REST per byte — the win is fewer requests, not smaller ones.
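The persisted-query trick is simple enough to sketch: the client registers a query once, then sends only its hash, and because the resulting URL is stable a CDN can cache it like any GET. The SHA-256-of-the-query-text scheme mirrors Apollo's approach; the exact URL shape below is an assumption for illustration:

```python
# Sketch of a persisted query: hash the query text once, then send only
# the hash as a GET request with a stable, cacheable URL.
import hashlib
import urllib.parse

query = """query UserWithOrders($id: ID!) {
  user(id: $id) { name email }
}"""

# The client and server agree on this hash out of band (build time or
# a one-time registration call).
query_hash = hashlib.sha256(query.encode()).hexdigest()

# GET with deterministic params -> the URL itself becomes the cache key.
params = urllib.parse.urlencode({
    "extensions": f'{{"persistedQuery":{{"sha256Hash":"{query_hash}"}}}}',
    "variables": '{"id": 42}',
})
url = f"/graphql?{params}"
```

Same query text always yields the same hash and the same URL, which is exactly what a CDN needs to treat the request like a cacheable REST read.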
gRPC's binary protobuf is the most compact of the three, and HTTP/2 lets dozens of calls share a single TCP connection. For internal east-west traffic between services, the overhead reduction is real and measurable. But: gRPC doesn't run natively in browsers. You need gRPC-Web with a proxy, and you lose streaming features and some of the size win. Cloudflare's primer on QUIC and HTTP/3 is a great read for understanding why HTTP/2 multiplexing matters in the first place.
Schemas, Types, and Tooling
REST has no built-in schema. You can bolt on OpenAPI (formerly Swagger) and get most of what GraphQL and gRPC give you for free — generated docs, typed clients, contract tests — but it's optional and inconsistent across teams. Run a JSON Schema Validator on a real-world REST response and you'll find the documented schema and the actual response disagree more often than you'd hope.
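The drift between documented and actual responses is easy to catch mechanically. Here is a deliberately minimal, hand-rolled check — not OpenAPI or a real JSON Schema validator, just the stdlib and an illustrative schema — that flags the two most common drifts, wrong types and missing fields:

```python
# Minimal response-vs-contract check: does the JSON carry the documented
# fields with the documented types? Schema shape here is illustrative.
import json

schema = {"id": int, "name": str, "email": str}

def check_response(raw, schema):
    doc = json.loads(raw)
    problems = []
    for field, expected in schema.items():
        if field not in doc:
            problems.append(f"missing field: {field}")
        elif not isinstance(doc[field], expected):
            problems.append(f"{field}: expected {expected.__name__}")
    return problems

# A drifted response: "id" came back as a string, "email" vanished.
raw = '{"id": "42", "name": "Alex"}'
problems = check_response(raw, schema)
# problems -> ['id: expected int', 'missing field: email']
```

Twenty lines like this in a contract test catches most of the drift that makes REST documentation untrustworthy.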
GraphQL ships with a schema you can't avoid. Introspection lets clients (and tooling like GraphiQL) discover every type and field at runtime. The IDE experience is genuinely excellent — autocomplete, type checking, and inline docs in your editor with almost no setup. When you change a field, breaking changes are visible immediately. Use a GraphQL Formatter when you're spelunking through unfamiliar queries.
gRPC's schema is the .proto file, and the tooling is impressive: code generation for ~12 languages, backward compatibility rules baked into the protocol (add fields, never reuse tag numbers), and a binary format you can decode if you have the schema. The downside is operational: you need a build step in every client and server to regenerate stubs when the schema changes. For teams that don't already have a polyglot build pipeline, this is friction.
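Those compatibility rules can be written into the schema itself. A sketch of evolving the Order message from the earlier example without breaking deployed clients — the new field names here are illustrative:

```proto
message Order {
  // Tag 2 once held "double total"; it was removed, so both the tag
  // number and the name are reserved to prevent incompatible reuse.
  reserved 2;
  reserved "total";

  int32 id = 1;
  repeated Item items = 3;

  // New fields get fresh tag numbers; old clients simply ignore them.
  string currency = 4;
  int64 total_cents = 5;
}
```

Old binaries keep parsing new messages (unknown tags are skipped), and the reserved lines make protoc reject anyone who tries to recycle tag 2 later.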
When Each One Is the Right Default
A practical decision tree, based on what you'll regret in six months:
- Public-facing API for third-party developers: REST. The world knows how to consume it. Curl works. Postman works. Caching works. You can publish OpenAPI for the people who want types. GraphQL public APIs exist (GitHub, Shopify) but they're more work to support.
- Internal microservices, especially polyglot: gRPC. Strict contracts, fast wire format, HTTP/2 multiplexing, generated clients. The build-step cost is amortized across many services.
- Single-page app or mobile client with complex, evolving data needs: GraphQL. One endpoint, the client picks fields, and product velocity stays high as the UI changes. Pair it with persisted queries to get caching back.
- Bandwidth-constrained mobile or IoT: gRPC if you can run it; otherwise REST with aggressive payload trimming. JSON is just heavy.
- Streaming or bidirectional traffic: gRPC. It supports server, client, and bidirectional streaming natively. REST has SSE and WebSockets, which work but are awkward to schema. GraphQL has subscriptions, but they're typically WebSocket-based and have their own quirks.
You can mix them. Plenty of mature systems run gRPC between services, expose a GraphQL gateway for the frontend, and offer a REST surface for partners. That isn't architectural indecision — it's matching each protocol to its strength.
Common Failure Modes
Each style has a well-worn set of mistakes that show up around month three.
REST: chatty endpoints. The client makes 8 calls to render one screen because each resource is too granular. The fix is usually compound endpoints (/users/42/dashboard) or a backend-for-frontend layer — at which point you're rebuilding GraphQL by hand. If your REST API is sprawling, the fundamentals in REST API Design Best Practices are the place to start.
GraphQL: N+1 queries on the resolver side. A query asking for 50 users with their addresses runs 1 user query plus 50 address queries unless you batch with DataLoader. Also, query complexity attacks — a malicious client can craft a deeply nested query that runs for minutes. You need cost analysis and depth limits in production.
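A depth limit is cheap to sketch. Production servers walk the parsed query AST (libraries like graphql-depth-limit do this); counting brace nesting, as below, is a simplification that ignores strings and comments, but it shows the shape of the guard:

```python
# Sketch of a query-depth guard: reject queries nested deeper than a
# fixed limit before executing any resolver. Real servers inspect the
# parsed AST rather than counting braces.
def max_depth(query: str) -> int:
    depth = deepest = 0
    for ch in query:
        if ch == "{":
            depth += 1
            deepest = max(deepest, depth)
        elif ch == "}":
            depth -= 1
    return deepest

MAX_ALLOWED = 6

hostile = "{ user { friends { friends { friends { friends { friends { name } } } } } } }"
verdict = "rejected" if max_depth(hostile) > MAX_ALLOWED else "accepted"
```

Depth limits catch the nesting attack; cost analysis (assigning weights per field and bounding the total) catches the wide-but-shallow variant.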
gRPC: leaking it directly to the browser. Without gRPC-Web and a proxy, browser clients can't talk to gRPC services at all. Teams that don't realize this end up writing a REST shim on top of gRPC, which defeats most of the benefit. The other trap is treating .proto files like throwaway code; once a service ships, the schema is a long-term commitment.
A Practical Checklist Before You Pick
Before you commit to a style for a new service, walk through this:
- Who is the client? Browser, mobile app, third-party developer, internal Go service? Browsers narrow you to REST or GraphQL. Internal services open up gRPC.
- How chatty is the read pattern? If a single screen needs three resources joined, GraphQL pays for itself fast.
- How much do you care about CDN caching? REST is the only one that gets it for free.
- Do you have a build pipeline? gRPC needs codegen to be pleasant. If your dev loop is "edit and refresh," REST or GraphQL fits better.
- What's the team's familiarity? A team that has shipped REST for years and never touched protobuf will move faster on REST even if gRPC is theoretically better. Friction compounds.
- Is the data graph-shaped? Heavy relationships with many ways to traverse them favor GraphQL. Flat resource collections favor REST.
When you're debugging any of these — formatting REST responses, reading GraphQL errors, decoding protobuf payloads — the basics still apply. Pretty-print payloads with a JSON Formatter and reach for an HTTP Request Builder when you need to experiment with headers and bodies before wiring code.
The Honest Takeaway
There's no winner. There's "right tool for this client, this team, this read pattern." The teams that ship cleanest are the ones who match the protocol to the problem instead of picking based on what's trending. REST is still the safe default for most public APIs. GraphQL is the right answer when one client renders many shapes of data. gRPC quietly powers a lot of the internal traffic at companies you've heard of, and for good reason.
Pick the one whose strengths align with the friction you're trying to remove — and don't be afraid to use more than one inside the same system.
FAQ
Is REST really still the default in 2026, or has GraphQL taken over?
REST is still the dominant style for public-facing APIs in 2026, and the gap is wider than the discourse suggests. GitHub, Stripe, and AWS still ship REST as their primary surface, with GraphQL added alongside for specific high-traffic flows. The reason is unchanged: every CDN, browser, proxy, and HTTP library on Earth speaks REST natively, and the operational cost of "just an HTTP endpoint" is hard to beat.
Why does my GraphQL endpoint have worse latency than the REST equivalent?
Almost always one of three reasons: N+1 resolver queries (50 user lookups instead of 1 batched query), no CDN caching because every query is a POST, or query parsing overhead on a cold path. DataLoader fixes the first, persisted queries with hashed GET URLs fix the second, and pre-validation of operation names fixes the third. If none of those apply, profile the resolver tree — usually one field is doing 80% of the work.
Can I run gRPC from a browser without a Node proxy?
Not directly — browsers don't expose HTTP/2 trailers to JavaScript, and raw gRPC depends on trailers for status codes. You need gRPC-Web with an Envoy or grpc-web-proxy translator in front of your gRPC server. That gets you unary calls and server streaming, but not client streaming or bidirectional streaming. If you need full streaming in the browser, you are better off with WebSockets or Server-Sent Events.
Which is fastest on the wire — JSON, GraphQL, or protobuf?
For equivalent payloads, protobuf is typically 30–50% smaller than JSON on the wire because field names are encoded as integer tags and numeric values use varint encoding. JSON-over-HTTP/2 with gzip compression closes most of that gap for text-heavy payloads — sometimes within 10%. The bigger latency win for gRPC isn't the bytes, it's HTTP/2 multiplexing letting dozens of calls share one TCP connection without head-of-line blocking.
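The varint claim is easy to verify by hand. The encoder below follows the protobuf wire format (7 payload bits per byte, high bit as continuation flag); the field name and sizes are just a worked example:

```python
# Back-of-the-envelope size comparison: a protobuf varint-encoded integer
# field vs. the same value written as JSON text.
import json

def varint(n: int) -> bytes:
    # Protobuf varint: 7 bits per byte, MSB set while more bytes follow.
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

value = 1_000_000
# Protobuf: 1 tag byte (field 1, wire type 0) + varint payload.
proto_size = 1 + len(varint(value))
# JSON pays for the field name and digits on every message.
json_size = len(json.dumps({"total_cents": value}))

# proto_size -> 4 bytes; json_size -> 24 bytes
```

Four bytes versus twenty-four for one integer field — and the JSON gap only grows with longer field names, which is why protobuf's integer tags matter as much as varint encoding does.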
What's the difference between GraphQL subscriptions and gRPC streaming?
GraphQL subscriptions run over WebSockets (or Server-Sent Events) and use the same schema as queries — you subscribe to a field that returns a stream of typed events. gRPC builds streaming into the protocol itself, with four call types: unary, server-streaming, client-streaming, and bidirectional. gRPC streams are more efficient (binary protobuf, HTTP/2 frames) but require a non-browser client; GraphQL subscriptions work in any browser but carry WebSocket framing overhead.
Should I use GraphQL Federation or a BFF (Backend-for-Frontend)?
Federation is the right answer when you have many independent teams owning subgraphs and need a single unified schema. The cost is real: you need a gateway, schema composition tooling, and a discipline around entity ownership. A BFF is simpler — one team owns one tailored API per client — and works well for a small org. Most companies should start with a BFF and only adopt federation when team count and schema sprawl actually demand it.
How do I version a GraphQL or gRPC API without breaking clients?
GraphQL discourages versioning entirely — instead, you add fields freely and deprecate old ones with @deprecated directives. Clients only request what they need, so adding fields is non-breaking. gRPC uses tag numbers in .proto files: never reuse a tag, never change a field type, and treat removed fields as reserved. Both approaches beat REST's "v1 / v2 in the URL" model in practice, but only if you actually follow the rules.
Is GraphQL still worth the complexity for a small team?
Probably not, unless your data is genuinely graph-shaped or you have a single complex client (a SPA or mobile app) that needs to compose many shapes from one endpoint. For a team of 3–5 shipping a CRUD app, REST plus OpenAPI gives you typed clients, generated docs, and CDN caching for free. The GraphQL tax — resolvers, dataloaders, depth limits, persisted queries — pays back at scale, not at the start.