PostgreSQL Fundamentals for Go Developers

PostgreSQL is one of the most powerful open-source relational databases, and it pairs exceptionally well with Go for backend development. This post covers PostgreSQL fundamentals and demonstrates how to connect to and interact with Postgres from Go using pgx, widely regarded as the highest-performing PostgreSQL driver for Go.

Why PostgreSQL?

PostgreSQL offers features that make it ideal for production applications:

flowchart TD
    subgraph Features["PostgreSQL Strengths"]
        A[ACID Compliance] --> R[Reliability]
        J[JSON Support] --> F[Flexibility]
        E[Extensibility] --> C[Custom Types]
        P[Performance] --> S[Scalability]
    end

    subgraph UseCases["Use Cases"]
        R --> W[Web Applications]
        F --> API[APIs]
        C --> D[Data Analytics]
        S --> M[Microservices]
    end

    style Features fill:#e3f2fd
    style UseCases fill:#e8f5e9

Key advantages:

  • ACID compliance: Guaranteed data integrity
  • JSON/JSONB support: Flexible document storage
  • Advanced indexing: B-tree, Hash, GiST, GIN
  • Full-text search: Built-in text search capabilities
  • Extensibility: Custom functions, types, and operators

Setting Up PostgreSQL

Installation

macOS (Homebrew):

brew install postgresql@15
brew services start postgresql@15

Ubuntu/Debian:

sudo apt update
sudo apt install postgresql-15
sudo systemctl start postgresql

Basic Configuration

Connect to PostgreSQL and create a database:

-- Connect as postgres user
sudo -u postgres psql

-- Create a new database
CREATE DATABASE myapp;

-- Create a user with password
CREATE USER appuser WITH PASSWORD 'securepassword';

-- Grant privileges
GRANT ALL PRIVILEGES ON DATABASE myapp TO appuser;
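
One gotcha worth noting: starting with PostgreSQL 15, ordinary users no longer have CREATE rights on the public schema by default, so the database-level grant above is not enough for appuser to create tables. Connect to the new database and grant schema privileges as well:

-- Needed on PostgreSQL 15+: grant schema-level privileges in the new database
\c myapp
GRANT ALL ON SCHEMA public TO appuser;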

PostgreSQL Data Types

Understanding data types is crucial for schema design:

Category    | Types                              | Use Case
Numeric     | INTEGER, BIGINT, DECIMAL, NUMERIC  | IDs, quantities, money
Text        | VARCHAR(n), TEXT, CHAR(n)          | Names, descriptions
Date/Time   | TIMESTAMP, DATE, TIME, INTERVAL    | Events, scheduling
Boolean     | BOOLEAN                            | Flags, states
JSON        | JSON, JSONB                        | Flexible documents
Arrays      | INTEGER[], TEXT[]                  | Lists, tags
UUID        | UUID                               | Unique identifiers

Example Schema

CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(50) UNIQUE NOT NULL,
    email VARCHAR(255) UNIQUE NOT NULL,
    password_hash TEXT NOT NULL,
    profile JSONB DEFAULT '{}',
    tags TEXT[] DEFAULT '{}',
    is_active BOOLEAN DEFAULT true,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

-- Assumes a categories table already exists for the foreign key below
CREATE TABLE products (
    id SERIAL PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    description TEXT,
    price DECIMAL(10, 2) NOT NULL,
    stock_quantity INTEGER DEFAULT 0 CHECK (stock_quantity >= 0),
    category_id INTEGER REFERENCES categories(id),
    metadata JSONB DEFAULT '{}',
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
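
To make the JSONB and array columns concrete, here is what an insert into the users table looks like (values are purely illustrative):

INSERT INTO users (username, email, password_hash, profile, tags)
VALUES (
    'alice',
    '[email protected]',
    '$2a$10$...',                         -- store a bcrypt/argon2 hash, never the raw password
    '{"theme": "dark", "locale": "en"}',  -- JSONB document
    ARRAY['admin', 'beta']                -- TEXT[] value
);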

Constraints and Indexes

Constraints

Constraints ensure data integrity:

-- Primary Key
CREATE TABLE orders (
    id SERIAL PRIMARY KEY
);

-- Foreign Key
CREATE TABLE order_items (
    id SERIAL PRIMARY KEY,
    order_id INTEGER REFERENCES orders(id) ON DELETE CASCADE,
    product_id INTEGER REFERENCES products(id)
);

-- Unique Constraint
ALTER TABLE users ADD CONSTRAINT unique_email UNIQUE (email);

-- Check Constraint
ALTER TABLE products ADD CONSTRAINT positive_price CHECK (price > 0);

-- Not Null
ALTER TABLE users ALTER COLUMN username SET NOT NULL;

Indexes

Indexes improve query performance:

-- B-tree index (default) - good for equality and range queries
CREATE INDEX idx_users_email ON users(email);

-- Composite index
CREATE INDEX idx_products_category_price ON products(category_id, price);

-- Partial index - index only active users
CREATE INDEX idx_active_users ON users(username) WHERE is_active = true;

-- GIN index for JSONB
CREATE INDEX idx_products_metadata ON products USING GIN (metadata);

-- Index for array columns
CREATE INDEX idx_users_tags ON users USING GIN (tags);
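
Once the GIN index on metadata exists, containment queries against the JSONB column can use it instead of scanning the whole table. For example:

-- Find products whose metadata contains the given key/value pair
SELECT id, name
FROM products
WHERE metadata @> '{"color": "red"}';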

flowchart LR
    subgraph Without["Without Index"]
        Q1[Query] --> FS[Full Table Scan]
        FS --> R1["O(n) - Slow"]
    end

    subgraph With["With Index"]
        Q2[Query] --> IS[Index Lookup]
        IS --> R2["O(log n) - Fast"]
    end

    style Without fill:#ffcdd2
    style With fill:#c8e6c9

Connecting Go to PostgreSQL with pgx

Why pgx?

pgx is the recommended PostgreSQL driver for Go:

  • Native PostgreSQL protocol implementation
  • Better performance than going through database/sql
  • Full PostgreSQL feature support
  • Connection pooling built-in

Installation

go get github.com/jackc/pgx/v5
go get github.com/jackc/pgx/v5/pgxpool

Basic Connection

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/jackc/pgx/v5/pgxpool"
)

func main() {
    // Connection string
    connStr := "postgres://appuser:securepassword@localhost:5432/myapp"

    // Create connection pool
    pool, err := pgxpool.New(context.Background(), connStr)
    if err != nil {
        log.Fatalf("Unable to connect to database: %v\n", err)
    }
    defer pool.Close()

    // Test connection
    err = pool.Ping(context.Background())
    if err != nil {
        log.Fatalf("Unable to ping database: %v\n", err)
    }

    fmt.Println("Successfully connected to PostgreSQL!")
}

Environment-based Configuration

package config

import (
    "context"
    "fmt"
    "os"
    "time"

    "github.com/jackc/pgx/v5/pgxpool"
)

type DBConfig struct {
    Host        string
    Port        string
    User        string
    Password    string
    Database    string
    MaxConns    int32
    MinConns    int32
    MaxConnLife time.Duration
}

func NewDBConfig() *DBConfig {
    return &DBConfig{
        Host:        getEnv("DB_HOST", "localhost"),
        Port:        getEnv("DB_PORT", "5432"),
        User:        getEnv("DB_USER", "appuser"),
        Password:    getEnv("DB_PASSWORD", ""),
        Database:    getEnv("DB_NAME", "myapp"),
        MaxConns:    25,
        MinConns:    5,
        MaxConnLife: time.Hour,
    }
}

func (c *DBConfig) ConnectionString() string {
    return fmt.Sprintf(
        "postgres://%s:%s@%s:%s/%s",
        c.User, c.Password, c.Host, c.Port, c.Database,
    )
}

func ConnectDB(cfg *DBConfig) (*pgxpool.Pool, error) {
    poolConfig, err := pgxpool.ParseConfig(cfg.ConnectionString())
    if err != nil {
        return nil, err
    }

    poolConfig.MaxConns = cfg.MaxConns
    poolConfig.MinConns = cfg.MinConns
    poolConfig.MaxConnLifetime = cfg.MaxConnLife

    return pgxpool.NewWithConfig(context.Background(), poolConfig)
}

func getEnv(key, defaultValue string) string {
    if value := os.Getenv(key); value != "" {
        return value
    }
    return defaultValue
}
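
Using the config package from an application's entry point might look like this (a minimal sketch; the import path depends on your module name):

package main

import (
    "log"

    "myapp/internal/config" // hypothetical import path for the config package above
)

func main() {
    pool, err := config.ConnectDB(config.NewDBConfig())
    if err != nil {
        log.Fatalf("Unable to connect to database: %v", err)
    }
    defer pool.Close()
}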

CRUD Operations with pgx

Create (Insert)

func CreateUser(ctx context.Context, pool *pgxpool.Pool, user *User) error {
    query := `
        INSERT INTO users (username, email, password_hash, profile)
        VALUES ($1, $2, $3, $4)
        RETURNING id, created_at
    `

    err := pool.QueryRow(ctx, query,
        user.Username,
        user.Email,
        user.PasswordHash,
        user.Profile,
    ).Scan(&user.ID, &user.CreatedAt)

    return err
}

Read (Select)

type User struct {
    ID           int       `json:"id"`
    Username     string    `json:"username"`
    Email        string    `json:"email"`
    PasswordHash string    `json:"-"`
    Profile      []byte    `json:"profile"`
    IsActive     bool      `json:"is_active"`
    CreatedAt    time.Time `json:"created_at"`
}

// Get single user
func GetUserByID(ctx context.Context, pool *pgxpool.Pool, id int) (*User, error) {
    query := `
        SELECT id, username, email, password_hash, profile, is_active, created_at
        FROM users
        WHERE id = $1
    `

    user := &User{}
    err := pool.QueryRow(ctx, query, id).Scan(
        &user.ID,
        &user.Username,
        &user.Email,
        &user.PasswordHash,
        &user.Profile,
        &user.IsActive,
        &user.CreatedAt,
    )

    if err != nil {
        return nil, err
    }
    return user, nil
}

// Get multiple users
func GetActiveUsers(ctx context.Context, pool *pgxpool.Pool) ([]User, error) {
    query := `
        SELECT id, username, email, profile, is_active, created_at
        FROM users
        WHERE is_active = true
        ORDER BY created_at DESC
    `

    rows, err := pool.Query(ctx, query)
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    var users []User
    for rows.Next() {
        var u User
        err := rows.Scan(
            &u.ID,
            &u.Username,
            &u.Email,
            &u.Profile,
            &u.IsActive,
            &u.CreatedAt,
        )
        if err != nil {
            return nil, err
        }
        users = append(users, u)
    }

    return users, rows.Err()
}

Update

func UpdateUser(ctx context.Context, pool *pgxpool.Pool, id int, username, email string) error {
    query := `
        UPDATE users
        SET username = $2, email = $3, updated_at = NOW()
        WHERE id = $1
    `

    result, err := pool.Exec(ctx, query, id, username, email)
    if err != nil {
        return err
    }

    if result.RowsAffected() == 0 {
        return fmt.Errorf("user with id %d not found", id)
    }

    return nil
}

Delete

func DeleteUser(ctx context.Context, pool *pgxpool.Pool, id int) error {
    query := `DELETE FROM users WHERE id = $1`

    result, err := pool.Exec(ctx, query, id)
    if err != nil {
        return err
    }

    if result.RowsAffected() == 0 {
        return fmt.Errorf("user with id %d not found", id)
    }

    return nil
}

Working with Transactions

Transactions ensure atomic operations:

func TransferBalance(ctx context.Context, pool *pgxpool.Pool, fromID, toID int, amount float64) error {
    tx, err := pool.Begin(ctx)
    if err != nil {
        return err
    }
    // Defer rollback - no-op if committed
    defer tx.Rollback(ctx)

    // Deduct from sender (the WHERE clause refuses to overdraw the account)
    res, err := tx.Exec(ctx,
        `UPDATE accounts SET balance = balance - $1 WHERE id = $2 AND balance >= $1`,
        amount, fromID,
    )
    if err != nil {
        return err
    }
    if res.RowsAffected() == 0 {
        return fmt.Errorf("account %d not found or insufficient balance", fromID)
    }

    // Add to receiver
    _, err = tx.Exec(ctx,
        `UPDATE accounts SET balance = balance + $1 WHERE id = $2`,
        amount, toID,
    )
    if err != nil {
        return err
    }

    // Commit transaction
    return tx.Commit(ctx)
}

sequenceDiagram
    participant App as Go Application
    participant DB as PostgreSQL

    App->>DB: BEGIN TRANSACTION
    App->>DB: UPDATE accounts (deduct)
    alt Success
        App->>DB: UPDATE accounts (add)
        alt Success
            App->>DB: COMMIT
            DB->>App: Success
        else Failure
            App->>DB: ROLLBACK
            DB->>App: Rolled back
        end
    else Failure
        App->>DB: ROLLBACK
        DB->>App: Rolled back
    end
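
If a transaction needs a stricter isolation level than PostgreSQL's default (READ COMMITTED), pgx lets you set it when the transaction starts. A minimal sketch:

// Open the transaction with explicit options instead of pool.Begin
tx, err := pool.BeginTx(ctx, pgx.TxOptions{
    IsoLevel:   pgx.Serializable,
    AccessMode: pgx.ReadWrite,
})
if err != nil {
    return err
}
defer tx.Rollback(ctx)
// ... run statements, then tx.Commit(ctx)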

Query Optimization

Using EXPLAIN ANALYZE

EXPLAIN ANALYZE
SELECT * FROM users
WHERE email = '[email protected]';

-- Output shows execution plan and timing
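
With idx_users_email in place, the plan should report an Index Scan rather than a Seq Scan. The output looks roughly like this (costs and timings are illustrative):

Index Scan using idx_users_email on users  (cost=0.29..8.30 rows=1 width=120) (actual time=0.040..0.045 rows=1 loops=1)
  Index Cond: ((email)::text = '[email protected]'::text)
Planning Time: 0.120 ms
Execution Time: 0.070 ms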

Batch Operations

For bulk inserts, use CopyFrom:

// BulkInsertUsers uses the PostgreSQL COPY protocol via pgx.CopyFrom
// (requires the "github.com/jackc/pgx/v5" import for pgx.Identifier and pgx.CopyFromRows).
func BulkInsertUsers(ctx context.Context, pool *pgxpool.Pool, users []User) error {
    columns := []string{"username", "email", "password_hash"}

    rows := make([][]interface{}, len(users))
    for i, u := range users {
        rows[i] = []interface{}{u.Username, u.Email, u.PasswordHash}
    }

    _, err := pool.CopyFrom(
        ctx,
        pgx.Identifier{"users"},
        columns,
        pgx.CopyFromRows(rows),
    )
    return err
}
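
CopyFrom is the fastest option for loading many rows into one table. When you instead need to send several independent statements in a single round trip, pgx's batch API is a good fit. A minimal sketch (the function name is illustrative; it assumes the github.com/jackc/pgx/v5 import for pgx.Batch):

func DeactivateUsers(ctx context.Context, pool *pgxpool.Pool, ids []int) error {
    batch := &pgx.Batch{}
    for _, id := range ids {
        batch.Queue("UPDATE users SET is_active = false WHERE id = $1", id)
    }

    // SendBatch transmits all queued statements in a single round trip
    br := pool.SendBatch(ctx, batch)
    defer br.Close()

    // Read each statement's result in the order it was queued
    for range ids {
        if _, err := br.Exec(); err != nil {
            return err
        }
    }
    return nil
}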

Prepared Statements

For frequently executed queries:

func PrepareStatements(ctx context.Context, pool *pgxpool.Pool) error {
    // pgx automatically prepares and caches statements
    // This happens transparently when using QueryRow/Query/Exec

    // For manual preparation (rarely needed):
    conn, err := pool.Acquire(ctx)
    if err != nil {
        return err
    }
    defer conn.Release()

    _, err = conn.Conn().Prepare(ctx, "get_user",
        "SELECT * FROM users WHERE id = $1")
    return err
}
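
Once a statement has been prepared on a connection, pgx will recognize its name in place of the SQL text on that same connection. Continuing with the acquired conn from above, a minimal sketch using a hypothetical get_username statement:

// Prepare a named statement, then execute it by name on the same connection
_, err = conn.Conn().Prepare(ctx, "get_username",
    "SELECT username FROM users WHERE id = $1")
if err != nil {
    return err
}

var username string
err = conn.QueryRow(ctx, "get_username", 42).Scan(&username)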

Best Practices

Connection Pool Sizing

flowchart LR
    subgraph Pool["Connection Pool"]
        C1[Conn 1]
        C2[Conn 2]
        C3[Conn 3]
        Cn[Conn N]
    end

    R1[Request 1] --> Pool
    R2[Request 2] --> Pool
    R3[Request 3] --> Pool

    Pool --> DB[(PostgreSQL)]

    style Pool fill:#e3f2fd

Rule of thumb:

  • MaxConns: Number of CPU cores × 2 + 1
  • MinConns: 2-5 for most applications
  • Monitor connection usage and adjust
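
For the last point, pgxpool exposes runtime pool statistics that you can log or export to a metrics system; a minimal sketch:

// Periodically sample pool statistics to guide MaxConns/MinConns tuning
stats := pool.Stat()
log.Printf("pool: total=%d acquired=%d idle=%d max=%d",
    stats.TotalConns(), stats.AcquiredConns(), stats.IdleConns(), stats.MaxConns())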

Error Handling

import (
    "errors"

    "github.com/jackc/pgx/v5"
    "github.com/jackc/pgx/v5/pgconn"
)

// handleDBError translates low-level pgx errors into application-level
// sentinel errors (ErrNotFound, ErrDuplicateEntry, etc., defined below).
func handleDBError(err error) error {
    if err == nil {
        return nil
    }

    // No rows found
    if errors.Is(err, pgx.ErrNoRows) {
        return ErrNotFound
    }

    // PostgreSQL specific errors
    var pgErr *pgconn.PgError
    if errors.As(err, &pgErr) {
        switch pgErr.Code {
        case "23505": // unique_violation
            return ErrDuplicateEntry
        case "23503": // foreign_key_violation
            return ErrForeignKeyViolation
        case "23514": // check_violation
            return ErrCheckViolation
        }
    }

    return err
}
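
The sentinel errors returned above are application-defined rather than part of pgx; a minimal set (using the errors package already imported above) might look like this:

var (
    ErrNotFound            = errors.New("record not found")
    ErrDuplicateEntry      = errors.New("duplicate entry")
    ErrForeignKeyViolation = errors.New("foreign key violation")
    ErrCheckViolation      = errors.New("check constraint violation")
)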

Summary

PostgreSQL combined with Go’s pgx driver provides a powerful foundation for backend development:

Concept          | Key Points
Data Types       | Use appropriate types for data integrity
Constraints      | Enforce business rules at the database level
Indexes          | Critical for query performance
Connection Pools | Manage connections efficiently
Transactions     | Ensure atomic operations
Error Handling   | Handle PostgreSQL-specific errors

Next post: Mastering GORM: Go’s Powerful ORM - Learn how GORM simplifies database operations with Go structs.
