Supabase Realtime in Practice: WebSocket Connection Management and Reconnection Strategies
At 3 AM, my phone buzzed.
A message from a client: “Users are complaining that your chat app has delayed messages. Sometimes they have to refresh the page to see new ones.”
I stared at the screen, my stomach sinking. I knew this problem all too well—the WebSocket had dropped, but the frontend had no idea. Users kept typing, sending messages, thinking everything went through, when in reality, everything was lost in transit.
Honestly, when I first used Supabase Realtime, I fell into the same trap. I was building a collaborative whiteboard project and thought subscribing to database changes was just a few lines of code:
```javascript
supabase.channel('board').on('postgres_changes', ...).subscribe()
```
Two days after launch, a colleague reported: “Our sync keeps freezing. Half-drawn lines suddenly disappear.”
Investigation revealed the WebSocket connection had silently dropped. No errors, no warnings—it just “died.” That’s when I realized: real-time subscriptions aren’t just about writing subscription code; connection management is the real challenge.
This article compiles all the pitfalls I’ve encountered and the solutions I’ve discovered. I’ll focus on WebSocket connection lifecycle management—the part I’ve found most tutorials gloss over. First, I’ll cover feature selection among the three core functions, then walk through implementing Postgres Changes subscriptions, and finally discuss production reconnection strategies and configuration optimization.
1. Supabase Realtime Features: Which One Should You Use?
When I first encountered Supabase Realtime, I was confused by three terms: Broadcast, Presence, and Postgres Changes. The docs said they’re three different real-time features, but which one should I use?
Here’s the key distinction: where the data lives.
| Feature | Data Storage | Typical Use Case | Latency |
|---|---|---|---|
| Broadcast | Memory only, not persisted | Client-to-client messaging, cursor sync | Lowest |
| Presence | In-memory key-value store (CRDT) | Online user list, collaborative state sync | Low |
| Postgres Changes | PostgreSQL database | Chat messages, order status updates | Medium |
The table might still feel abstract. Let me put it differently:
Broadcast is like a “megaphone.” You say something, everyone listening can hear it, but it’s gone afterward—no record. Perfect for “fleeting” data like cursor positions in collaborative editing. You move your mouse, others see your cursor move, but nobody cares where your cursor was 5 seconds ago.
Presence is like a “sign-in sheet.” Everyone signs in with their status (online, offline, editing…), and everyone can see the list. The key point: state syncs automatically, and it’s based on CRDT (Conflict-free Replicated Data Types), so you don’t worry about conflicts when two people modify the same data.
Postgres Changes is a “database listener.” When data in the database changes, you get notified. This is the “heaviest” option but also the most reliable—because data lives in PostgreSQL, even if you disconnect and reconnect, messages won’t be lost.
How to Choose? A Simple Decision Framework
Ask yourself two questions:
1. Does the data need to be persisted?
   - Yes → Postgres Changes
   - No → ask the second question
2. Is the data an "event" or a "state"?
   - Event (something happened) → Broadcast
   - State (someone is doing something) → Presence
For example, in a chat app: “sending a message” is an event, use Broadcast or Postgres Changes; “typing indicator” is a state, use Presence; “new message notification” needs persistence, use Postgres Changes.
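As a sketch, the two questions above can be folded into a tiny helper. The function name and return values here are illustrative, not part of the Supabase API:

```javascript
// Illustrative helper encoding the two-question decision framework.
// needsPersistence answers question 1; kind ('event' or 'state') answers question 2.
function pickRealtimeFeature({ needsPersistence, kind }) {
  if (needsPersistence) return 'postgres_changes'
  return kind === 'event' ? 'broadcast' : 'presence'
}

console.log(pickRealtimeFeature({ needsPersistence: true, kind: 'event' }))  // 'postgres_changes'
console.log(pickRealtimeFeature({ needsPersistence: false, kind: 'event' })) // 'broadcast'
console.log(pickRealtimeFeature({ needsPersistence: false, kind: 'state' })) // 'presence'
```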
In my collaborative whiteboard project, I allocated them this way:
- Brush stroke sync → Broadcast (fast, no persistence needed)
- Who’s online, who’s drawing where → Presence (state sync)
- Whiteboard content saving → Postgres Changes (persisted to database)
2. Postgres Changes in Action
Once you’ve decided on Postgres Changes, the first step is enabling publication.
Supabase doesn’t broadcast all table changes by default—that would be too resource-intensive. You need to explicitly tell it: “I want to listen to this table.”
```sql
-- Run in Supabase SQL Editor
ALTER PUBLICATION supabase_realtime ADD TABLE messages;
```
After running this command, INSERT, UPDATE, and DELETE operations on the messages table will be broadcast.
How to Write Subscription Code?
Here’s a complete example—real-time push for new chat room messages:
```javascript
import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  'https://your-project.supabase.co',
  'your-anon-key'
)

// Create channel and subscribe
const channel = supabase
  .channel('messages-channel') // Custom channel name
  .on(
    'postgres_changes',
    {
      event: 'INSERT', // Only listen for inserts
      schema: 'public',
      table: 'messages'
    },
    (payload) => {
      console.log('New message received:', payload.new)
      // payload.new is the newly inserted row data
      appendMessage(payload.new)
    }
  )
  .subscribe((status) => {
    console.log('Subscription status:', status)
  })

// Don't forget to clean up when the component unmounts
// channel.unsubscribe()
```
This code looks simple, but there are several pitfalls:
Pitfall 1: event parameter options
`event` can be `'INSERT'`, `'UPDATE'`, `'DELETE'`, or `'*'` to listen for all events. If you only care about new messages, don't subscribe with `'*'`—you'll pay network traffic for events you ignore.
Pitfall 2: payload structure
payload isn’t the entire record—it’s an object:
- `payload.new`: the new row data (present for INSERT/UPDATE)
- `payload.old`: the old row data (present for UPDATE/DELETE, requires enabling replica identity)
- `payload.eventType`: the event type (`'INSERT'`, `'UPDATE'`, or `'DELETE'`)
- `payload.schema`, `payload.table`: source information
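When you do subscribe with `'*'`, you have to branch on `payload.eventType` yourself. A minimal sketch—the `handleChange` function and its `{ action, row }` return shape are made up for illustration, not a Supabase API:

```javascript
// Sketch: dispatching on payload.eventType when listening with event: '*'.
function handleChange(payload) {
  switch (payload.eventType) {
    case 'INSERT': return { action: 'append', row: payload.new }
    case 'UPDATE': return { action: 'replace', row: payload.new }
    case 'DELETE': return { action: 'remove', row: payload.old } // needs replica identity
    default: return { action: 'ignore', row: null }
  }
}

console.log(handleChange({ eventType: 'INSERT', new: { id: 1 } }))
// { action: 'append', row: { id: 1 } }
```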
Pitfall 3: Row Level Security applies
This is something many people overlook: Realtime subscriptions also follow RLS rules.
If you’ve configured RLS, users only receive changes they “have permission to see.” For example, if the messages table restricts users to only see messages they’re involved in, Realtime will only push those messages—not all messages filtered on the frontend.
This is actually a major advantage of Supabase Realtime: security logic doesn’t need to be written twice.
Enabling Old Data Access (Replica Identity)
By default, payload.old for UPDATE and DELETE events is empty. If you need old data (like recording “who changed what to what”), enable replica identity:
```sql
ALTER TABLE messages REPLICA IDENTITY FULL;
```
However, this increases write overhead and WAL log size. Evaluate carefully in production whether you really need it.
3. WebSocket Connection Management Pitfalls
Back to the opening problem: WebSocket drops, frontend doesn’t know.
Supabase Realtime uses Phoenix Channels under the hood, and connection state changes trigger callbacks. But you have to actively listen, otherwise you won’t receive any messages.
Connection State Overview
The status parameter in the subscription callback has several values:
| Status | Meaning | What You Should Do |
|---|---|---|
| SUBSCRIBED | Successfully subscribed | Working normally, receiving messages |
| CHANNEL_ERROR | Connection error | Log the error, attempt reconnection |
| TIMED_OUT | Timeout (no response) | Possible network fluctuation, trigger reconnection |
| CLOSED | Connection closed | User disconnected or server closed the connection |
This looks straightforward, but there’s a catch: state transitions can be too fast to handle.
For example, during network jitter, you might instantly experience CHANNEL_ERROR → CLOSED → SUBSCRIBED (automatic reconnection succeeds), and you might not even notice there was a problem.
I later added a global state monitor that logs every state change:
```javascript
const channel = supabase
  .channel('messages-channel')
  .on('postgres_changes', { ... }, handler)
  .subscribe((status, err) => {
    logConnectionStatus(status, err) // Log status and timestamp
    if (status === 'CHANNEL_ERROR' || status === 'TIMED_OUT') {
      showReconnectingToast() // Show a notification to the user
    }
    if (status === 'SUBSCRIBED') {
      hideReconnectingToast()
      syncMissedMessages() // Sync messages missed during disconnection
    }
  })
```
Heartbeat Detection: How Does It Know the Connection Is Alive?
Supabase Realtime has an internal heartbeat mechanism (source in keep_alive.ex). The server periodically sends a heartbeat packet, and the client responds with an acknowledgment.
If the client fails to respond several times in a row, the server considers the connection dead and actively disconnects. Conversely, if the client doesn’t receive a heartbeat for a while, it also triggers a timeout reconnection.
But you don’t need to handle heartbeats manually—the Supabase SDK does it automatically. What you really need to care about is the reconnection strategy after timeout.
Disconnection Reconnection: Exponential Backoff vs. Immediate Retry
Supabase’s default automatic reconnection uses exponential backoff: first retry waits 1 second, second waits 2 seconds, third waits 4 seconds… up to about 30 seconds.
The benefit: if the server is temporarily overloaded, it won’t be overwhelmed by massive reconnection requests. The downside: users might wait a long time to recover.
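The schedule described above (doubling delay with a cap) can be sketched in a few lines. The numbers follow the 1s/2s/4s…30s description, not the SDK's exact internals:

```javascript
// Sketch of capped exponential backoff: 1s, 2s, 4s, ... up to 30s.
function backoffDelay(attempt, baseMs = 1000, capMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, capMs)
}

console.log(backoffDelay(0))  // 1000
console.log(backoffDelay(3))  // 8000
console.log(backoffDelay(10)) // 30000 (capped)
```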
For collaborative applications (whiteboards, document editing), I use a more aggressive reconnection strategy:
```javascript
// Manual reconnection, not relying on the default exponential backoff
let reconnectAttempts = 0
const MAX_RECONNECT = 10

function handleDisconnect() {
  if (reconnectAttempts >= MAX_RECONNECT) {
    showFatalError('Unable to restore connection, please refresh the page')
    return
  }
  // Quick retries for the first few attempts, then gradually slow down
  const delay = reconnectAttempts < 3 ? 1000 : 3000
  setTimeout(() => {
    reconnectAttempts++
    channel.subscribe() // Try subscribing again
  }, delay)
}
```
After Reconnection, What About Messages During Disconnection?
This is the most headache-inducing problem: disconnected for 30 seconds, 10 messages came through during that time—how do you recover them?
Option 1: Frontend requests API to catch up
After successful reconnection, immediately call an API to fetch all messages “after the last successful message ID”:
```javascript
// Remember the last received message ID
let lastMessageId = null

function syncMissedMessages() {
  if (lastMessageId === null) return // nothing received yet, nothing to catch up on

  supabase
    .from('messages')
    .select('*')
    .gt('id', lastMessageId)
    .order('created_at', { ascending: true })
    .then(({ data }) => {
      if (!data || data.length === 0) return
      // Append missed messages to the list
      appendMessages(data)
      lastMessageId = data[data.length - 1].id
    })
}
```
Option 2: Server pushes “changes during disconnection”
This requires backend cooperation—storing “unpushed changes” in the database, then batch pushing when the client reconnects. More complex, but more reliable.
For small projects, Option 1 is sufficient. The key is: sync immediately after successful reconnection, don’t wait for the user to manually refresh.
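One subtlety with Option 1: messages delivered live right after reconnection can overlap with the rows you fetch, so dedupe by id before appending. A sketch—`mergeMessages` is a hypothetical helper, not part of the SDK:

```javascript
// Merge caught-up rows into the existing list, skipping ids already shown.
function mergeMessages(existing, fetched) {
  const seen = new Set(existing.map((m) => m.id))
  return existing.concat(fetched.filter((m) => !seen.has(m.id)))
}

const merged = mergeMessages(
  [{ id: 1 }, { id: 2 }],
  [{ id: 2 }, { id: 3 }] // id 2 arrived both via realtime and via catch-up
)
console.log(merged.map((m) => m.id)) // [ 1, 2, 3 ]
```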
4. Broadcast and Presence: Beyond Chat Rooms
The previous chapters focused on Postgres Changes. This chapter covers the other two features—Broadcast and Presence.
Broadcast: Collaborative Editor Cursor Sync
When multiple people collaboratively edit a document, seeing where others’ cursors are improves the experience significantly. Broadcast is perfect for this:
```javascript
// Send your cursor position
const broadcastChannel = supabase.channel('editor-cursors')

// Listen for others' cursors
broadcastChannel
  .on('broadcast', { event: 'cursor-move' }, ({ payload }) => {
    // The callback receives { type, event, payload };
    // the data you sent lives under the payload property
    updateRemoteCursor(payload.userId, payload.x, payload.y)
  })
  .subscribe()

// Broadcast when you move
document.addEventListener('mousemove', (e) => {
  broadcastChannel.send({
    type: 'broadcast',
    event: 'cursor-move',
    payload: {
      userId: currentUser.id,
      x: e.clientX,
      y: e.clientY
    }
  })
})
```
Key points:
- broadcastChannel.send() is active sending, not a post-subscription callback
- Channel names are customizable; different editors can use different channels for isolation
- Cursor positions don't need persistence, so Broadcast's "fire-and-forget" nature is a perfect fit
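One caveat with the mousemove example: the event fires dozens of times per second, and each one becomes a WebSocket message. In practice I'd throttle the broadcast; a minimal hand-rolled sketch (not from any library):

```javascript
// Let at most one call through per intervalMs; calls inside the window are dropped.
function throttle(fn, intervalMs) {
  let last = 0
  return (...args) => {
    const now = Date.now()
    if (now - last >= intervalMs) {
      last = now
      fn(...args)
    }
  }
}

let sent = 0
const sendCursor = throttle(() => { sent++ }, 50)
sendCursor(); sendCursor(); sendCursor() // three calls in the same tick
console.log(sent) // 1
```

Wrapping the broadcast in `throttle(..., 50)` caps cursor traffic at roughly 20 messages per second, which is plenty for smooth remote cursors.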
Presence: Who’s Online at a Glance
Presence is great for displaying “state-type” information. Like an online users list:
```javascript
const presenceChannel = supabase.channel('online-users', {
  config: {
    presence: {
      key: currentUser.id // Use the user's id so each user has a unique presence key
    }
  }
})

presenceChannel
  .on('presence', { event: 'sync' }, () => {
    const state = presenceChannel.presenceState()
    // state is an object: key is the user id, value is an array of presence states
    renderOnlineUsers(Object.keys(state))
  })
  .on('presence', { event: 'join' }, ({ newPresences }) => {
    // New user joined
    showToast(`${newPresences[0].user_name} joined`)
  })
  .on('presence', { event: 'leave' }, ({ leftPresences }) => {
    // User left
    showToast(`${leftPresences[0].user_name} left`)
  })
  .subscribe()

// Register your status when online
presenceChannel.track({
  user_id: currentUser.id,
  user_name: currentUser.name,
  online_at: new Date().toISOString()
})
```
The track() method tells the channel “I’m here.” State automatically syncs to all subscribers, and it’s CRDT-based, so no worries about conflicts.
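presenceState() returns an object keyed by the presence key, where each value is an array of presence metadata (one entry per tab or device). A small sketch for flattening that shape into a render-ready list—`toUserList` is a hypothetical helper:

```javascript
// Flatten a presenceState()-style object into one entry per key,
// taking the first presence for each user.
function toUserList(state) {
  return Object.entries(state).map(([key, presences]) => ({
    key,
    ...presences[0],
  }))
}

const state = {
  'user-1': [{ user_name: 'Ada', online_at: '2026-04-26T03:00:00Z' }],
  'user-2': [{ user_name: 'Lin', online_at: '2026-04-26T03:01:00Z' }],
}
console.log(toUserList(state).map((u) => u.user_name)) // [ 'Ada', 'Lin' ]
```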
Private Channels: Limiting Who Can Subscribe
By default, anyone with an anon key can subscribe to public channels. But some scenarios require access restriction—like a team’s private collaborative space.
Supabase supports controlling channel access through RLS Policy:
```sql
-- Realtime Authorization: access is controlled by RLS policies on the
-- realtime.messages table; realtime.topic() returns the current channel name
CREATE POLICY "Only team members can join private channel"
ON realtime.messages
FOR SELECT
TO authenticated
USING (
  -- Check if the user belongs to the team the channel is named after
  EXISTS (
    SELECT 1 FROM team_members
    WHERE 'private-team-' || team_members.team_id::text = realtime.topic()
      AND team_members.user_id = auth.uid()
  )
);
```
This way, only team members can subscribe to private-team-xxx channels (created on the client with config: { private: true }); everyone else is rejected.
5. Production Environment: Configuration Parameters You Must Know
Everything works fine locally, but problems pile up after deployment. The reason is often configuration.
Key Realtime Server Parameters
Supabase Realtime’s default configuration works for most projects, but high-concurrency scenarios need tuning:
| Parameter | Default | Recommendation | Purpose |
|---|---|---|---|
| DB_POOL_SIZE | 10 | Adjust based on concurrent connections | PostgreSQL connection pool size |
| DB_QUEUE_TARGET | 100ms | Lower to reduce latency, but increases CPU | Wait time for batch message pushing |
| SUBSCRIBER_LIMIT | 200 | Adjust based on user count | Max subscribers per channel |
If you notice message latency increasing significantly, you can lower DB_QUEUE_TARGET (e.g., 50ms). The tradeoff is the server checks for changes more frequently, increasing CPU usage.
Connection Limits in Multi-tenant Architecture
A common pitfall: in multi-tenant systems, one channel per tenant quickly leads to an explosion in total channel count.
Supabase Realtime has limits on total subscriptions per project (Pro plan is 5000 concurrent subscriptions). If your system has 1000 tenants with an average of 5 users online per tenant, you’re right at the boundary.
Solutions:
- Merge channels: don't create a separate channel per tenant; use the filter option to separate tenants within one channel
- Selective subscription: have users subscribe only to their current tenant's channel, not all of them
```javascript
// Use filter to receive only messages belonging to the current tenant
supabase
  .channel('tenant-messages')
  .on(
    'postgres_changes',
    {
      event: 'INSERT',
      schema: 'public',
      table: 'messages',
      filter: 'tenant_id=eq.123' // Only receive tenant 123's messages
    },
    handler
  )
  .subscribe()
```
Comparison with Alternatives: Supabase vs Pusher vs Firebase
Finally, a quick comparison of mainstream real-time solutions:
| Solution | Cost | Feature Richness | Learning Curve |
|---|---|---|---|
| Supabase Realtime | Free (Pro $25/month) | High (three-in-one + database binding) | Medium |
| Pusher | From $29 | Medium (pure WebSocket) | Low |
| Firebase Realtime DB | Pay-per-use | Medium (Firebase ecosystem binding) | Low |
Supabase’s advantages: Postgres Changes directly listens to database changes without extra push logic; RLS applies automatically, unified security logic. Disadvantages: requires understanding PostgreSQL mechanisms, slightly steeper learning curve.
If you’re already using Supabase for Auth and Storage, adding Realtime is seamless. If you just need simple WebSocket, Pusher might be faster to get started with.
Summary
After all this, there are three key takeaways:
Choose the right feature: Broadcast for events, Presence for state sync, Postgres Changes for data persistence. Ask yourself two questions—does the data need persistence, is it an event or state—and the answer becomes clear.
Manage connections well: Successful subscription doesn’t mean you’ll always receive messages. Actively monitor state changes, show users a “reconnecting” notification, sync missed data immediately after reconnection. Do these well, and the real-time experience becomes stable.
Configure properly: Production isn’t a scaled-up version of local development. Parameters like DB_POOL_SIZE and QUEUE_TARGET directly affect latency and throughput. At least check the defaults before going live.
That pitfall I mentioned at the beginning—WebSocket dropping without knowing—I later solved it with state monitoring + reconnection notifications. User experience improved immediately: when disconnected, they see “restoring connection” instead of waiting blindly; after reconnection, messages sync automatically without manual refresh.
If you haven’t used Supabase Realtime yet, I recommend starting with Postgres Changes—simplest and most common use case. Combined with the Auth series I wrote earlier (email verification, OAuth configuration), you can build a complete real-time backend.
Feel free to leave questions in the comments, or check the official Supabase docs. The architecture documentation is well-written; for those wanting to dive deeper into Phoenix Channels and the PG2 adapter, the source code is worth reading.
FAQ
What are the differences between the three Supabase Realtime features?
Broadcast sends ephemeral client-to-client messages (nothing is stored), Presence syncs shared state like online-user lists via CRDTs, and Postgres Changes pushes database changes, so messages survive reconnects.
How to recover after WebSocket disconnection?
- Quick retries for first few attempts (1 second)
- Gradually slow down afterward (3 seconds)
- Sync missed messages immediately after successful reconnection
Do Realtime subscriptions follow RLS rules?
Yes. Users only receive changes they have permission to see, so security logic doesn't need to be duplicated on the frontend.
What configuration parameters should I focus on in production?
- DB_POOL_SIZE: PostgreSQL connection pool size, default 10
- DB_QUEUE_TARGET: Batch push wait time, default 100ms
- SUBSCRIBER_LIMIT: Max subscribers per channel, default 200
How to avoid channel explosion in multi-tenant systems?
Merge channels and use the filter option to separate tenants within one channel, and have users subscribe only to their current tenant's channel.
How does Supabase Realtime compare to Pusher/Firebase?
Supabase binds real-time directly to PostgreSQL and reuses RLS for security; Pusher is a simpler pure-WebSocket service; Firebase Realtime DB is pay-per-use and tied to the Firebase ecosystem.
11 min read · Published on: Apr 26, 2026 · Modified on: Apr 29, 2026
Supabase in Practice · Part 8 of 9