When building a web service with PostgreSQL, a common decision is whether to use a client (a single, dedicated connection) or a pool (a manager of reusable connections). Choosing correctly impacts performance, scalability, and reliability. This post clarifies the difference, then shows focused snippets for two endpoints: a quick GET /users and a bulk POST /import-users using COPY.
What Is a PostgreSQL Client?
A client is one explicit database connection you open, use, and close. It shines when you need:
- Transactions around multi-step operations.
- Advanced features like `COPY FROM STDIN` for bulk imports.
- One-off scripts (migrations, data loaders, CLI tools).
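As an illustration, a one-off migration script follows the connect → work → end pattern. The sketch below is hypothetical: `DbClient` is a minimal stand-in for pg's `Client` interface (in a real script you'd pass `new Client(...)` from `pg`), and the `email` column is invented for the example:

```typescript
// Minimal stand-in for pg's Client (connect/query/end) so the
// script logic can be shown self-contained.
interface DbClient {
  connect(): Promise<void>;
  query(sql: string): Promise<unknown>;
  end(): Promise<void>;
}

// One-off script: open a single dedicated connection, do the work, close it.
async function runMigration(client: DbClient): Promise<void> {
  await client.connect();
  try {
    // Hypothetical migration step for illustration.
    await client.query('ALTER TABLE users ADD COLUMN IF NOT EXISTS email TEXT');
  } finally {
    await client.end(); // always close the single connection
  }
}
```

A dedicated client fits here because the script runs once, needs exactly one connection, and should release it deterministically when done.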
What Is a PostgreSQL Pool?
A pool maintains a small set of live connections, ready to be borrowed and released per request. It’s ideal for:
- Web endpoints that run fast, frequent queries.
- Handling concurrency efficiently by reusing existing connections.
- Reducing overhead from repeated TCP/TLS handshakes and authentication.
Key Differences and Use Cases
- Concurrency: A single client handles one operation at a time; a pool enables concurrent operations by maintaining multiple reusable connections.
- Performance: Pools reduce connection overhead (handshakes/auth) and improve throughput, especially under high traffic.
- Resource Management: Pools cap and reuse active connections, protecting the database from overload while keeping latency predictable.
- Transactions: Transactions are scoped to one connection. With a pool, explicitly acquire a client via `pool.connect()` to run `BEGIN … COMMIT/ROLLBACK` across multiple queries on the same connection.
- Prepared Statements: Prepared statements are cached per connection. Reusing a pooled client (especially when you `pool.connect()` and keep it for related work) boosts performance for repeated statements.
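The transaction point above can be captured in a small helper. This is a sketch, not pg's API: `PoolLike`/`ClientLike` are minimal stand-ins for pg's `Pool`/`PoolClient` interfaces so the pattern is self-contained, and `withTransaction` is a hypothetical helper name:

```typescript
// Minimal stand-ins for pg's Pool / PoolClient so the pattern is self-contained.
interface ClientLike {
  query(sql: string): Promise<unknown>;
  release(): void;
}
interface PoolLike {
  connect(): Promise<ClientLike>;
}

// Run `work` inside BEGIN … COMMIT on one borrowed connection,
// rolling back on error and always releasing the client back to the pool.
async function withTransaction<T>(
  pool: PoolLike,
  work: (client: ClientLike) => Promise<T>
): Promise<T> {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const result = await work(client);
    await client.query('COMMIT');
    return result;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err; // surface the original failure to the caller
  } finally {
    client.release();
  }
}
```

The key detail is that every query inside `work` runs on the *same* borrowed connection; running them through `pool.query()` instead could scatter them across different connections, breaking the transaction.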
How this maps to our example:
We’ll use the pool for the lightweight, read-only GET /users endpoint (concurrency + low overhead). For the bulk POST /import-users endpoint, we’ll acquire a dedicated client from the pool and wrap the COPY stream in a transaction to ensure atomicity.
Example Setup: Users Table (DDL)
CREATE TABLE users (
id SERIAL PRIMARY KEY,
first_name VARCHAR(100) NOT NULL,
last_name VARCHAR(100) NOT NULL,
address TEXT
);
Tip: For bulk imports, leave id out of the CSV (and the COPY column list) so PostgreSQL autogenerates it.
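A matching CSV (with made-up sample rows) would then look like:

```csv
first_name,last_name,address
Ada,Lovelace,12 St James's Square
Alan,Turing,Bletchley Park
```

Note the header row matches the COPY column list exactly, and `id` is absent.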
Example Endpoints (Focused Snippets)
GET /users — Use a Pool for fast, concurrent reads
// Create once at startup
import { Pool } from 'pg';
export const pool = new Pool({
connectionString: process.env.DATABASE_URL
?? 'postgresql://username:password@localhost:5432/database',
max: 10,
idleTimeoutMillis: 30_000,
});
// In your handler
const { rows } = await pool.query(
'SELECT id, first_name, last_name, address FROM users ORDER BY id DESC'
);
return rows;
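The pool handles parameterized reads the same way. As a sketch, a hypothetical `GET /users/:id` lookup might use a `$1` placeholder (the `QueryablePool` interface here is a minimal stand-in for pg's `Pool` so the snippet stays self-contained):

```typescript
// Minimal stand-in for pg's Pool.query so the handler is self-contained.
interface QueryablePool {
  query(sql: string, params: unknown[]): Promise<{ rows: any[] }>;
}

// Fetch one user by id with a $1 placeholder; never interpolate
// user input directly into SQL strings.
async function getUserById(pool: QueryablePool, id: number) {
  const { rows } = await pool.query(
    'SELECT id, first_name, last_name, address FROM users WHERE id = $1',
    [id]
  );
  return rows[0] ?? null;
}
```

Placeholders let PostgreSQL treat the value purely as data, which prevents SQL injection and plays well with per-connection statement caching.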
POST /import-users — Use a dedicated connection + pg-copy-streams for transactional COPY
// expects CSV headers: first_name,last_name,address
// assumes `app` (Express) and `pool` (created above) are in scope
import multer from 'multer';
import { Readable, pipeline } from 'stream';
import { promisify } from 'util';
import { from as copyFrom } from 'pg-copy-streams';
const upload = multer({ storage: multer.memoryStorage() });
const pipelineAsync = promisify(pipeline);
app.post('/import-users', upload.single('file') as any, async (req: any, res: Response) => {
if (!req.file) return res.status(400).json({ error: 'No file uploaded' });
const client = await pool.connect();
try {
await client.query('BEGIN');
const stream = Readable.from(req.file.buffer);
const copyStream = client.query(
copyFrom('COPY users (first_name, last_name, address) FROM STDIN WITH CSV HEADER')
);
await pipelineAsync(stream, copyStream);
await client.query('COMMIT');
res.json({ message: 'Users imported successfully' });
} catch (err) {
await client.query('ROLLBACK');
console.error('Import failed:', err);
res.status(500).json({ error: 'Failed to process file' });
} finally {
client.release();
}
});
Why a dedicated connection? You get explicit control over BEGIN/COMMIT/ROLLBACK and a clean, atomic import using PostgreSQL’s high-performance COPY.
Behind the Scenes: Why the Pool Stays “Ready”
- Connection reuse: The pool keeps a small set of live connections. Borrow → query → release—no repeated handshakes.
- Backpressure, not overload: The max size limits concurrent DB work; extra requests queue until a connection frees up.
- Idle reaping: idleTimeoutMillis cleans up unused connections during low traffic.
- Self-healing: If a connection goes bad, the pool discards it and opens a new one when needed.
- Low-friction API: pool.query(…) bundles checkout → query → release in a single call.
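To make the backpressure point concrete, here is a toy limiter (not pg's actual implementation) with the same shape of behavior: at most `max` tasks run at once, and extra callers queue until a slot frees up:

```typescript
// Toy concurrency limiter illustrating how a pool caps in-flight work:
// at most `max` tasks run at once; extra callers wait in a FIFO queue.
class Limiter {
  private max: number;
  private active = 0;
  private waiting: Array<() => void> = [];

  constructor(max: number) {
    this.max = max;
  }

  async run<T>(task: () => Promise<T>): Promise<T> {
    // Backpressure: wait (FIFO) until a slot frees up, re-checking on wake.
    while (this.active >= this.max) {
      await new Promise<void>((resolve) => this.waiting.push(resolve));
    }
    this.active++;
    try {
      return await task();
    } finally {
      this.active--;
      this.waiting.shift()?.(); // wake the next queued caller, if any
    }
  }
}
```

Swap "task" for "borrowed connection" and this is the pool's core promise: the database never sees more than `max` concurrent sessions, no matter how many requests arrive.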
Conclusion
- Use a `Pool` for everyday reads (like /users) to keep your service snappy and scalable.
- Use a `Client` (or a connection from `pool.connect()`) for transactional or bulk operations, especially `COPY FROM STDIN` imports.
In real projects, you’ll typically use both: pool for common requests and a dedicated connection for heavy, transactional work.
Reference
- https://github.com/nnsan/backend/tree/main/node-yarn/packages/postgres-pool-client