Middleware and concurrency are two powerful features that make Go excellent for backend development. Middleware enables cross-cutting concerns like logging and authentication, while Go’s goroutines and channels provide elegant solutions for concurrent processing. This post explores both patterns in depth.
## Understanding Middleware

Middleware intercepts requests before they reach handlers, enabling common functionality across all routes.
```mermaid
flowchart LR
    R[Request] --> M1[Logger]
    M1 --> M2[Auth]
    M2 --> M3[CORS]
    M3 --> H[Handler]
    H --> M3
    M3 --> M2
    M2 --> M1
    M1 --> Res[Response]
    style M1 fill:#e3f2fd
    style M2 fill:#fff3e0
    style M3 fill:#e8f5e9
```
### Basic Middleware Structure

```go
func MyMiddleware() gin.HandlerFunc {
	return func(c *gin.Context) {
		start := time.Now()

		c.Next() // run the remaining handlers in the chain

		duration := time.Since(start)
		log.Printf("Request took %v", duration)
	}
}
```
## Common Middleware Patterns

### Logging Middleware

```go
func LoggerMiddleware() gin.HandlerFunc {
	return func(c *gin.Context) {
		start := time.Now()
		path := c.Request.URL.Path

		c.Next()

		latency := time.Since(start)
		status := c.Writer.Status()
		clientIP := c.ClientIP()
		method := c.Request.Method

		log.Printf("[%s] %s %s %d %v", method, path, clientIP, status, latency)
	}
}
```
### Request ID Middleware

```go
func RequestIDMiddleware() gin.HandlerFunc {
	return func(c *gin.Context) {
		requestID := c.GetHeader("X-Request-ID")
		if requestID == "" {
			requestID = uuid.New().String()
		}
		c.Set("request_id", requestID)
		c.Header("X-Request-ID", requestID)
		c.Next()
	}
}

// Handlers can then read the ID back from the context:
func handler(c *gin.Context) {
	requestID := c.GetString("request_id")
	log.Printf("[%s] Processing request", requestID)
}
```
### Recovery Middleware

```go
func RecoveryMiddleware() gin.HandlerFunc {
	return func(c *gin.Context) {
		defer func() {
			if err := recover(); err != nil {
				log.Printf("Panic recovered: %v\n%s", err, debug.Stack())
				c.JSON(http.StatusInternalServerError, gin.H{
					"error": "Internal server error",
				})
				c.Abort()
			}
		}()
		c.Next()
	}
}
```
### CORS Middleware

```go
func CORSMiddleware() gin.HandlerFunc {
	return func(c *gin.Context) {
		c.Header("Access-Control-Allow-Origin", "*")
		c.Header("Access-Control-Allow-Methods", "GET, POST, PUT, PATCH, DELETE, OPTIONS")
		c.Header("Access-Control-Allow-Headers", "Origin, Content-Type, Authorization")
		c.Header("Access-Control-Max-Age", "86400")

		if c.Request.Method == "OPTIONS" {
			c.AbortWithStatus(http.StatusNoContent)
			return
		}

		c.Next()
	}
}
```
### Timeout Middleware

```go
func TimeoutMiddleware(timeout time.Duration) gin.HandlerFunc {
	return func(c *gin.Context) {
		ctx, cancel := context.WithTimeout(c.Request.Context(), timeout)
		defer cancel()

		c.Request = c.Request.WithContext(ctx)

		finished := make(chan struct{})
		go func() {
			c.Next()
			close(finished)
		}()

		select {
		case <-finished:
		case <-ctx.Done():
			c.JSON(http.StatusGatewayTimeout, gin.H{
				"error": "Request timeout",
			})
			c.Abort()
		}
	}
}
```

One caveat: after a timeout fires, the goroutine running `c.Next()` keeps executing and may also write to the response, and `gin.Context` is not safe for concurrent use. Handlers behind this middleware should honor the request context's cancellation.
## Applying Middleware

```go
func main() {
	router := gin.New()

	router.Use(RecoveryMiddleware())
	router.Use(LoggerMiddleware())
	router.Use(CORSMiddleware())

	api := router.Group("/api")
	api.Use(RequestIDMiddleware())
	{
		api.GET("/public", publicHandler)

		protected := api.Group("/")
		protected.Use(AuthMiddleware())
		{
			protected.GET("/users", listUsers)
			protected.POST("/users", createUser)
		}
	}

	router.Run(":8080")
}
```
## Concurrency in Go

### Goroutines

Goroutines are lightweight threads managed by the Go runtime:
```go
func main() {
	// Launch a single background task
	go processTask("task1")

	// Launch tasks in a loop, passing the loop variable as an argument
	for i := 0; i < 10; i++ {
		go func(id int) {
			fmt.Printf("Processing task %d\n", id)
		}(i)
	}

	// Crude: sleep so the goroutines get a chance to run before main exits
	time.Sleep(time.Second)
}
```
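The `time.Sleep` above is fragile: too short and tasks get cut off, too long and the program stalls. A `sync.WaitGroup` waits exactly as long as needed; a minimal sketch (`runTasks` is an illustrative name):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// runTasks launches n goroutines and blocks until all of them finish,
// returning how many actually ran.
func runTasks(n int) int {
	var wg sync.WaitGroup
	var done int64
	for i := 0; i < n; i++ {
		wg.Add(1) // register before launching the goroutine
		go func(id int) {
			defer wg.Done() // signal completion even if the task panics
			atomic.AddInt64(&done, 1)
		}(i)
	}
	wg.Wait() // blocks until every goroutine has called Done
	return int(done)
}

func main() {
	fmt.Println(runTasks(10)) // prints 10
}
```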
### Channels

Channels enable safe communication between goroutines:
```go
func main() {
	// Unbuffered channel: a send blocks until a receiver is ready
	ch := make(chan string)

	go func() {
		ch <- "Hello from goroutine"
	}()

	message := <-ch
	fmt.Println(message)

	// Buffered channel: up to 3 sends succeed without a receiver
	buffered := make(chan int, 3)
	buffered <- 1
	buffered <- 2
	buffered <- 3

	fmt.Println(<-buffered)
}
```
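One more channel idiom worth knowing: closing a channel lets the receiver drain it with `range`, which is how a consumer learns the producer is done. A small sketch (`sumFromChannel` is an illustrative name):

```go
package main

import "fmt"

// sumFromChannel sends the values into a channel from one goroutine and
// sums them in the caller; close() tells the range loop when to stop.
func sumFromChannel(values []int) int {
	ch := make(chan int)
	go func() {
		for _, v := range values {
			ch <- v
		}
		close(ch) // no more values: the range below terminates
	}()

	sum := 0
	for v := range ch {
		sum += v
	}
	return sum
}

func main() {
	fmt.Println(sumFromChannel([]int{1, 2, 3})) // prints 6
}
```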
```mermaid
flowchart LR
    subgraph Goroutine1["Goroutine 1"]
        G1[Producer]
    end
    subgraph Channel["Channel"]
        C[Buffer]
    end
    subgraph Goroutine2["Goroutine 2"]
        G2[Consumer]
    end
    G1 -->|Send| C
    C -->|Receive| G2
    style Channel fill:#e3f2fd
```
### Select Statement

The `select` statement waits on multiple channel operations at once:
```go
func processWithTimeout(data string) (string, error) {
	resultCh := make(chan string)
	errCh := make(chan error)

	go func() {
		time.Sleep(100 * time.Millisecond) // simulate work
		resultCh <- "Processed: " + data
	}()

	select {
	case result := <-resultCh:
		return result, nil
	case err := <-errCh:
		return "", err
	case <-time.After(500 * time.Millisecond):
		return "", errors.New("timeout")
	}
}
```
## Concurrency Patterns for APIs

### Background Processing

```go
func (h *Handler) CreateOrder(c *gin.Context) {
	var order Order
	if err := c.ShouldBindJSON(&order); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
		return
	}

	if err := h.repo.Create(&order); err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to create order"})
		return
	}

	// Fire-and-forget: side effects run while the response is sent
	go func() {
		h.emailService.SendOrderConfirmation(order)
		h.inventoryService.DeductStock(order.Items)
		h.notificationService.NotifyWarehouse(order)
	}()

	c.JSON(http.StatusCreated, gin.H{
		"message": "Order created",
		"order":   order,
	})
}
```
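One caveat with fire-and-forget goroutines: recovery middleware only protects the request goroutine, so a panic inside the background function crashes the whole process. A hedged sketch of a wrapper that contains such panics (`SafeGo` is a hypothetical name, and the returned channel is just for the demo):

```go
package main

import (
	"fmt"
	"log"
)

// SafeGo runs task in a new goroutine and recovers any panic, logging it
// instead of crashing the process. The returned channel closes when the
// task finishes, panicking or not.
func SafeGo(task func()) <-chan struct{} {
	done := make(chan struct{})
	go func() {
		defer close(done)
		defer func() {
			if r := recover(); r != nil {
				log.Printf("background task panicked: %v", r)
			}
		}()
		task()
	}()
	return done
}

func main() {
	<-SafeGo(func() { panic("boom") }) // the process survives the panic
	fmt.Println("still running")
}
```

In the order example, `go func() { ... }()` would become `SafeGo(func() { ... })`, with the channel ignored.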
### Parallel Data Fetching

```go
type DashboardData struct {
	User          *User
	Orders        []Order
	Stats         *Statistics
	Notifications []Notification
}

func (h *Handler) GetDashboard(c *gin.Context) {
	userID := c.GetUint("user_id")
	ctx := c.Request.Context()

	var (
		user          *User
		orders        []Order
		stats         *Statistics
		notifications []Notification
		wg            sync.WaitGroup
		errCh         = make(chan error, 4)
	)

	wg.Add(4)

	go func() {
		defer wg.Done()
		var err error
		user, err = h.userRepo.FindByID(ctx, userID)
		if err != nil {
			errCh <- err
		}
	}()

	go func() {
		defer wg.Done()
		var err error
		orders, err = h.orderRepo.FindByUserID(ctx, userID)
		if err != nil {
			errCh <- err
		}
	}()

	go func() {
		defer wg.Done()
		var err error
		stats, err = h.statsService.GetUserStats(ctx, userID)
		if err != nil {
			errCh <- err
		}
	}()

	go func() {
		defer wg.Done()
		var err error
		notifications, err = h.notificationRepo.FindUnread(ctx, userID)
		if err != nil {
			errCh <- err
		}
	}()

	wg.Wait()
	close(errCh)

	for err := range errCh {
		if err != nil {
			c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to load dashboard"})
			return
		}
	}

	c.JSON(http.StatusOK, DashboardData{
		User:          user,
		Orders:        orders,
		Stats:         stats,
		Notifications: notifications,
	})
}
```
### Worker Pool Pattern

```go
type Job struct {
	ID   int
	Data string
}

type Result struct {
	JobID  int
	Output string
	Error  error
}

func worker(id int, jobs <-chan Job, results chan<- Result) {
	for job := range jobs {
		output, err := processJob(job)
		results <- Result{
			JobID:  job.ID,
			Output: output,
			Error:  err,
		}
	}
}

func ProcessBatch(items []string, workerCount int) []Result {
	jobs := make(chan Job, len(items))
	results := make(chan Result, len(items))

	for w := 1; w <= workerCount; w++ {
		go worker(w, jobs, results)
	}

	for i, item := range items {
		jobs <- Job{ID: i, Data: item}
	}
	close(jobs)

	var output []Result
	for i := 0; i < len(items); i++ {
		output = append(output, <-results)
	}
	return output
}
```
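Note that results arrive in completion order, not submission order. A self-contained sketch of driving such a pool and restoring input order by `JobID` (`processJob` is stubbed here as uppercasing, since the original leaves it undefined):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

type Job struct {
	ID   int
	Data string
}

type Result struct {
	JobID  int
	Output string
	Error  error
}

// processJob is a stand-in for real work; here it just uppercases.
func processJob(job Job) (string, error) {
	return strings.ToUpper(job.Data), nil
}

func worker(jobs <-chan Job, results chan<- Result) {
	for job := range jobs {
		out, err := processJob(job)
		results <- Result{JobID: job.ID, Output: out, Error: err}
	}
}

// ProcessBatch fans items out to workerCount workers, collects all
// results, then sorts them back into submission order.
func ProcessBatch(items []string, workerCount int) []Result {
	jobs := make(chan Job, len(items))
	results := make(chan Result, len(items))

	for w := 0; w < workerCount; w++ {
		go worker(jobs, results)
	}
	for i, item := range items {
		jobs <- Job{ID: i, Data: item}
	}
	close(jobs)

	out := make([]Result, 0, len(items))
	for i := 0; i < len(items); i++ {
		out = append(out, <-results)
	}
	// Workers finish in arbitrary order; sort to match the input order.
	sort.Slice(out, func(a, b int) bool { return out[a].JobID < out[b].JobID })
	return out
}

func main() {
	for _, r := range ProcessBatch([]string{"a", "b", "c"}, 2) {
		fmt.Println(r.JobID, r.Output)
	}
}
```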
```mermaid
flowchart TD
    subgraph Input["Job Queue"]
        J1[Job 1]
        J2[Job 2]
        J3[Job 3]
        J4[Job 4]
    end
    subgraph Workers["Worker Pool"]
        W1[Worker 1]
        W2[Worker 2]
        W3[Worker 3]
    end
    subgraph Output["Results"]
        R1[Result 1]
        R2[Result 2]
        R3[Result 3]
        R4[Result 4]
    end
    Input --> Workers
    Workers --> Output
    style Workers fill:#e3f2fd
```
### Rate Limiting with Channels

```go
type RateLimiter struct {
	ticker *time.Ticker
	tokens chan struct{}
}

func NewRateLimiter(rate int, burst int) *RateLimiter {
	rl := &RateLimiter{
		ticker: time.NewTicker(time.Second / time.Duration(rate)),
		tokens: make(chan struct{}, burst),
	}

	// Start with a full bucket so short bursts pass immediately
	for i := 0; i < burst; i++ {
		rl.tokens <- struct{}{}
	}

	// Refill one token per tick; drop the token if the bucket is full
	go func() {
		for range rl.ticker.C {
			select {
			case rl.tokens <- struct{}{}:
			default:
			}
		}
	}()

	return rl
}

func (rl *RateLimiter) Wait() {
	<-rl.tokens
}

func RateLimitMiddleware(limiter *RateLimiter) gin.HandlerFunc {
	return func(c *gin.Context) {
		limiter.Wait()
		c.Next()
	}
}
```
## Context Propagation

### Passing Context Through Middleware

```go
// Use an unexported key type so other packages cannot collide with our keys
type ctxKey string

const requestTimeKey ctxKey = "request_time"

func ContextMiddleware() gin.HandlerFunc {
	return func(c *gin.Context) {
		ctx := context.WithValue(c.Request.Context(), requestTimeKey, time.Now())
		c.Request = c.Request.WithContext(ctx)
		c.Next()
	}
}

// Downstream code receives the same context (service and db are illustrative):
func handler(c *gin.Context) {
	ctx := c.Request.Context()
	result, err := service.Process(ctx, data)
	db.WithContext(ctx).Find(&users)
}
```
### Handling Cancellation

```go
func (h *Handler) LongRunningTask(c *gin.Context) {
	ctx := c.Request.Context()

	// Buffered so the goroutine can send its result even if the request was
	// cancelled first, instead of blocking forever and leaking
	resultCh := make(chan string, 1)

	go func() {
		result := h.service.ProcessLongTask(ctx)
		resultCh <- result
	}()

	select {
	case result := <-resultCh:
		c.JSON(http.StatusOK, gin.H{"result": result})
	case <-ctx.Done():
		log.Println("Request cancelled:", ctx.Err())
		return
	}
}
```
## Error Handling in Concurrent Code

A mutex-protected collector lets multiple goroutines record errors safely:

```go
type SafeResult struct {
	mu     sync.Mutex
	errors []error
}

func (sr *SafeResult) AddError(err error) {
	sr.mu.Lock()
	defer sr.mu.Unlock()
	sr.errors = append(sr.errors, err)
}

func (sr *SafeResult) HasErrors() bool {
	sr.mu.Lock()
	defer sr.mu.Unlock()
	return len(sr.errors) > 0
}
```

For most cases, `errgroup` is simpler: it waits for all goroutines, returns the first error, and cancels the shared context when any goroutine fails:

```go
import "golang.org/x/sync/errgroup"

func FetchAll(ctx context.Context) (*Data, error) {
	g, ctx := errgroup.WithContext(ctx)

	var users []User
	var orders []Order

	g.Go(func() error {
		var err error
		users, err = fetchUsers(ctx)
		return err
	})

	g.Go(func() error {
		var err error
		orders, err = fetchOrders(ctx)
		return err
	})

	if err := g.Wait(); err != nil {
		return nil, err
	}

	return &Data{Users: users, Orders: orders}, nil
}
```
## Summary

| Pattern | Use Case |
| --- | --- |
| Middleware | Cross-cutting concerns (logging, auth, CORS) |
| Goroutines | Lightweight concurrent execution |
| Channels | Safe communication between goroutines |
| Select | Multi-channel operations, timeouts |
| Worker Pool | Controlled parallelism for batch processing |
| Context | Cancellation, deadlines, request-scoped values |
```mermaid
flowchart TD
    subgraph Middleware["Middleware Chain"]
        M1[Logger] --> M2[Recovery]
        M2 --> M3[Auth]
        M3 --> M4[RateLimit]
    end
    subgraph Concurrency["Concurrent Processing"]
        G1[Goroutine] --> C[Channel]
        G2[Goroutine] --> C
        C --> H[Handler]
    end
    M4 --> Concurrency
    style Middleware fill:#e3f2fd
    style Concurrency fill:#e8f5e9
```
Next post: Scaling Go APIs - Pagination, caching, and performance optimization.