Client-Server Model

Almost every web system you will ever build is a client-server architecture. Understanding the roles, the boundary, and the cost of each round trip is the foundation everything else sits on.

The two halves of every system

The client-server model splits a system into two roles. The client makes requests. The server answers them. This sounds obvious, but the implications run deep. Almost every architectural decision you will make traces back to this split.

The client is whatever the user touches. A browser, a mobile app, a desktop app, even another server acting as a client to a third service. Its job is to render an interface, capture user intent, and send it across the network. Anything it checks or computes along the way is a convenience, never a guarantee.

The server lives somewhere else. It holds the source of truth: the database, the business logic, the authoritative state. It does not trust the client. Every request is a stranger that must prove its identity and intent.

[Diagram: CLIENT (browser / mobile — UI, render, validate, cache, local state) exchanges request/response with SERVER (backend logic, auth, storage — source of truth) across a network that adds latency, cost, and failure.]
The client-server boundary. Every request crosses an unreliable network and pays in latency.

The boundary is everything

The line between client and server is the most expensive line in your system. Crossing it costs latency (often tens of milliseconds), can fail at any time, and exposes you to security threats. Senior engineers think hard about what crosses this line and how often.

Three rules I drill into every junior engineer:

  1. Minimize round trips. A page that needs 50 sequential API calls will feel slow no matter how fast each call is. Batch, prefetch, or restructure.
  2. Never trust the client. Client-side validation is for UX. Server-side validation is for correctness and security. Both, always.
  3. Push computation to where the data lives. If you need to filter 10,000 records to 10, do it on the server, not by sending all 10,000 to the client.
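Rule 2 in practice: the server re-checks everything, even fields the client already validated. A minimal sketch, assuming a hypothetical signup endpoint; the function and field names (`validate_signup`, `name`, `age`) are illustrative, not from any framework.

```python
# Server-side validation sketch: assume nothing about what the client sent.
# Client-side checks may have run, but a request can come from anywhere.
MAX_NAME_LEN = 100

def validate_signup(payload: dict) -> list[str]:
    """Return a list of validation errors; empty list means the request is OK."""
    errors = []

    name = payload.get("name")
    # Type check matters: a malicious client can send any JSON shape.
    if not isinstance(name, str) or not name.strip():
        errors.append("name is required")
    elif len(name) > MAX_NAME_LEN:
        errors.append("name too long")

    age = payload.get("age")
    if not isinstance(age, int) or not (0 < age < 150):
        errors.append("age must be a positive integer")

    return errors
```

The client runs the same checks for instant feedback in the UI; the server runs them again because only its copy is trustworthy.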

Stateful vs stateless servers

A stateful server remembers things between requests. Session state, in-memory caches, user-specific data. A stateless server does not. Each request brings everything the server needs to serve it.

Stateless wins in the vast majority of modern web architectures because it scales horizontally for free. Any server can serve any request, so a load balancer can route freely. State, when needed, lives in shared infrastructure: databases, caches, session stores. The "server" is stateless; the system as a whole has state, but in dedicated places.
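The stateless pattern can be sketched in a few lines. Here a plain dict stands in for shared infrastructure such as Redis or a session store; the handler and store names are illustrative assumptions, not a real framework API.

```python
# A stateless request handler: no per-user state lives in process memory.
# Everything the handler needs arrives with the request or is looked up
# from shared infrastructure (here, a dict standing in for Redis).
SESSION_STORE = {}  # shared across all server instances in a real deployment

def handle_request(session_id: str, action: str) -> str:
    """Any server instance can run this for any request, because the
    session is fetched fresh from the shared store every time."""
    session = SESSION_STORE.get(session_id)
    if session is None:
        return "401 Unauthorized"
    if action == "whoami":
        return f"200 {session['user']}"
    return "404 Not Found"

# Usage: a load balancer could send these requests to two different
# server processes and get identical answers.
SESSION_STORE["abc"] = {"user": "ada"}
```

Because `handle_request` keeps nothing between calls, adding capacity is just adding more copies of it behind the load balancer.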

Variations on the theme

Peer-to-peer. No fixed server, every node is both client and server. BitTorrent, blockchain. Resilient but harder to coordinate.

Push (server-initiated). Normally clients pull. WebSockets and Server-Sent Events let the server push, useful for real-time updates. We cover this in detail later.

Backend for Frontend (BFF). A server that exists specifically to shape data for one type of client. The mobile BFF returns lighter payloads than the web BFF, even though both talk to the same downstream services.
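The BFF idea reduces to shaping one downstream record differently per client. A sketch under assumed data: the product fields and the `mobile_bff` / `web_bff` names are hypothetical, chosen only to show the payload difference.

```python
# Stand-in for a response from a shared downstream service.
PRODUCT = {
    "id": 42,
    "title": "Mechanical Keyboard",
    "description_html": "<p>Long marketing copy</p>",
    "price_cents": 9900,
    "images": ["hero.jpg", "side.jpg", "top.jpg"],
    "reviews": [{"stars": 5, "text": "Great"}] * 50,
}

def mobile_bff(product: dict) -> dict:
    """Lighter payload: only what a mobile list cell renders."""
    return {
        "id": product["id"],
        "title": product["title"],
        "price_cents": product["price_cents"],
        "thumbnail": product["images"][0],
    }

def web_bff(product: dict) -> dict:
    """Richer payload for the desktop product page."""
    keys = ("id", "title", "description_html", "price_cents", "images", "reviews")
    return {k: product[k] for k in keys}
```

Both functions read the same source record; the mobile one drops heavy fields (HTML, full image list, reviews) so the phone never downloads bytes it will not display.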

Where new engineers go wrong

Treating the network as if it's local memory. It is not. A network call is roughly a million times slower than a local function call and can fail. Code that loops over a list calling a remote API for each element is a textbook anti-pattern called the N+1 problem. Always batch.
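The N+1 problem and its fix can be made concrete with a fake remote API where each function call counts as one round trip; the `fetch_user` / `fetch_users` names and the counter are illustrative assumptions.

```python
# Counting round trips: N+1 anti-pattern vs batching.
USERS = {1: "ada", 2: "grace", 3: "linus"}
round_trips = 0

def fetch_user(user_id: int) -> str:
    """Fake remote call: one round trip per user."""
    global round_trips
    round_trips += 1
    return USERS[user_id]

def fetch_users(user_ids: list) -> dict:
    """Fake remote call: one round trip for the whole batch."""
    global round_trips
    round_trips += 1
    return {uid: USERS[uid] for uid in user_ids}

ids = [1, 2, 3]

# Anti-pattern: N round trips for N items.
names_slow = [fetch_user(uid) for uid in ids]

# Batched: one round trip regardless of list size.
names_fast = list(fetch_users(ids).values())
```

With real network latency of tens of milliseconds per trip, the looped version's cost grows linearly with the list while the batched version stays flat.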

Once you internalize the client-server boundary as the most expensive thing in your design, you start making different decisions automatically. You batch. You cache. You design APIs to return more data per call. You think before each round trip. That is the mental model that scales with you.