gRPC in Go: real-time streaming for microservices

Three internal Go services need real-time crypto prices. The first implementation: REST polling against a homemade aggregator, every second. Result: 3 services × 60 req/min = 180 requests per minute for data that changes every 200 ms. Latency anywhere from 500 ms to 1 s depending on where you land in the polling cycle. And that's before counting JSON serialization overhead on every round-trip.

SSE would have worked — I have a dedicated article on it. But SSE is designed to push events to a browser. Service-to-service, the unidirectional HTTP/1.1 protocol brings more constraints than benefits. WebSockets are bidirectional, but stateful, complex to manage at scale, and there's no data contract — anyone sends anything.

gRPC server streaming solves exactly this. One persistent HTTP/2 connection. The server pushes updates as they arrive. The contract is defined in a .proto file — versioned, typed, with generated code on both sides. That's the right tool.

REST vs gRPC: the real comparison

Before diving into code, an honest comparison. Not the marketing bullet points, but the real questions you'll ask yourself when choosing between the two.

                 REST / JSON                       gRPC / Protobuf
Format           JSON (text)                       Protobuf (binary)
Contract         OpenAPI (optional)                .proto (mandatory)
Streaming        Not native (SSE / WS as add-on)   Native (4 modes)
Payload size     ~1x (baseline)                    ~3–10x smaller
Browser          ✅ native                          ⚠️ gRPC-web only
Code generation  Optional                          Mandatory
Debugging        curl, Postman                     grpcurl, Evans

Honest take: REST is the obvious answer for public APIs and browser clients. gRPC becomes the right choice when both ends are internal Go services, you need streaming, and you want strict contracts between teams without maintaining an OpenAPI spec by hand.

Defining the contract in Protobuf

Everything starts with the .proto file. It's the shared source of truth between the server and its clients. You define the messages first — the typed equivalent of JSON bodies — then the service and its methods.

syntax = "proto3";
package pricefeed;
option go_package = "./pb";

message PriceUpdate {
    string pair      = 1;  // "BTC/USDT"
    string exchange  = 2;  // "binance"
    double bid       = 3;
    double ask       = 4;
    int64  timestamp = 5;  // unix millis
}

message SubscribeRequest {
    repeated string pairs = 1;  // ["BTC/USDT", "ETH/USDT"]
}

service PriceFeed {
    // Server streaming: client subscribes, server pushes updates
    rpc Subscribe(SubscribeRequest) returns (stream PriceUpdate);

    // Unary: get last known price for a pair
    rpc GetLatest(SubscribeRequest) returns (PriceUpdate);
}

The stream keyword before the return type signals a server streaming RPC. The client sends a single request (SubscribeRequest) and receives an indefinite stream of PriceUpdate messages until the connection is closed — by the server, by the client, or by a context timeout.

Once the file is written, protoc generates the corresponding Go code:

protoc --go_out=. --go_opt=paths=source_relative \
       --go-grpc_out=. --go-grpc_opt=paths=source_relative \
       pricefeed.proto

Result: two Go files in ./pb/ — message structs and server/client interfaces. Never edit these files by hand: they'll be overwritten on the next generation.

Implementing the Go server

The server implements the interface generated by protoc. Embedding pb.UnimplementedPriceFeedServer ensures forward compatibility: if the .proto later gains methods you haven't implemented yet, the server still compiles, and calls to those methods return codes.Unimplemented at runtime instead of the build breaking.

type PriceFeedServer struct {
    pb.UnimplementedPriceFeedServer
    updates chan *pb.PriceUpdate
}

func (s *PriceFeedServer) Subscribe(req *pb.SubscribeRequest, stream pb.PriceFeed_SubscribeServer) error {
    pairs := make(map[string]bool)
    for _, p := range req.Pairs {
        pairs[p] = true
    }

    for {
        select {
        case update := <-s.updates:
            if !pairs[update.Pair] {
                continue
            }
            if err := stream.Send(update); err != nil {
                // Client disconnected — not a server error
                return nil
            }
        case <-stream.Context().Done():
            return nil  // Client cancelled the subscription
        }
    }
}

A few important points in this implementation:

  • The updates channel is fed by the goroutine collecting prices from exchanges (Binance, Kraken, etc.). The gRPC server only distributes. One caveat: a Go channel delivers each value to a single receiver, so with several concurrent Subscribe calls the updates are split between clients rather than broadcast. A real deployment needs a fan-out with one channel per subscriber.
  • stream.Send() returns an error if the client has disconnected. Returning nil here is intentional: it's not a server error, it's a normal disconnection.
  • stream.Context().Done() catches explicit client cancellations and context timeouts. Without this, the goroutine would keep running after the client disconnects.

To start the server:

func main() {
    updates := make(chan *pb.PriceUpdate, 100)

    srv := grpc.NewServer()
    pb.RegisterPriceFeedServer(srv, &PriceFeedServer{updates: updates})

    lis, err := net.Listen("tcp", ":50051")
    if err != nil {
        slog.Error("failed to listen", "error", err)
        os.Exit(1)
    }

    slog.Info("gRPC server listening", "addr", ":50051")
    if err := srv.Serve(lis); err != nil {
        slog.Error("serve error", "error", err)
        os.Exit(1)
    }
}

The Go client

The client is equally straightforward. The gRPC connection is reusable — create it once and inject it into the services that need it.

func connectPriceFeed(ctx context.Context, addr string) error {
    conn, err := grpc.NewClient(addr, grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        return fmt.Errorf("dial: %w", err)
    }
    defer conn.Close()

    client := pb.NewPriceFeedClient(conn)
    stream, err := client.Subscribe(ctx, &pb.SubscribeRequest{
        Pairs: []string{"BTC/USDT", "ETH/USDT"},
    })
    if err != nil {
        return fmt.Errorf("subscribe: %w", err)
    }

    for {
        update, err := stream.Recv()
        if err == io.EOF {
            return nil  // Server closed the stream cleanly
        }
        if err != nil {
            return fmt.Errorf("recv: %w", err)
        }
        slog.Info("price update",
            "pair", update.Pair,
            "bid", update.Bid,
            "ask", update.Ask,
            "exchange", update.Exchange,
        )
    }
}

Note on insecure.NewCredentials(): this is for local development and intra-cluster communication (mTLS managed at the network level by the service mesh). In production over a public network, use credentials.NewClientTLSFromFile() or credentials.NewTLS().

The stream.Recv() loop is blocking. If the server isn't pushing updates, the client waits — without burning CPU. HTTP/2 handles that, no active polling. When the context is cancelled (client service shutdown, timeout, etc.), Recv() returns an error and the loop stops cleanly.

The 4 gRPC streaming modes

gRPC supports four communication patterns. We used server streaming above, but it's worth knowing the other three to pick the right tool for each case.

Unary — request / response

Classic REST behaviour. One call, one response. Ideal for point-in-time reads (GetLatest in our service).

rpc GetLatest(SubscribeRequest) returns (PriceUpdate);

Server streaming — subscribing to a feed

What we just implemented. The client sends one request, the server pushes as many responses as it wants. Perfect for feeds: prices, metrics, logs.

rpc Subscribe(SubscribeRequest) returns (stream PriceUpdate);

Client streaming — batch data upload

The client sends a stream of requests, the server replies once at the end. Use cases: bulk data ingestion, segmented file upload, order flow to aggregate before processing.

rpc BatchOrders(stream Order) returns (BatchResult);

Bidirectional streaming — streams in both directions

Client and server exchange streams simultaneously. Legitimate use cases: chat, collaborative editing, negotiation protocols. But be warned — this is the most complex mode to implement and debug. For most microservice needs, server streaming or unary is enough. Bidirectional is often over-engineered.

rpc MarketDataFeed(stream MarketQuery) returns (stream MarketEvent);

Conclusion

gRPC isn't here to replace REST. They're two tools with different use cases, and conflating them leads either to unnecessary complexity (gRPC-ifying a public API) or to broken polling (REST-ifying a real-time feed).

The right tool here is gRPC, when three conditions are met: internal Go services, need for data streaming, strict contracts between teams. On this concrete case, switching from REST polling to gRPC server streaming cut latency from ~800 ms to <50 ms, eliminated 180 req/min of useless network overhead, and delivered a versioned contract between the aggregator service and its consumers.

For APIs exposed to browsers or third-party clients: REST is still the answer. For service-to-service with real-time constraints: that's gRPC.
