# Server Package

This package contains the restructured HTTP server implementation for HAMView.

## Structure

```
server/
├── server.go              # Main server setup and configuration
├── router.go              # Route configuration with nested routers
├── handlers_radios.go     # Radio endpoint handlers
├── handlers_meshcore.go   # MeshCore endpoint handlers
├── error.go               # Error handling utilities
└── server_test.go         # Test infrastructure and test cases
```

## Design Principles

### Clean Separation of Concerns

- `server.go`: Server initialization, configuration, and lifecycle management
- `router.go`: Centralized route definitions using Echo's Group feature for nested routing
- `handlers_*.go`: Domain-specific handler functions grouped by feature
- `error.go`: Consistent error handling across all endpoints

### Nested Routers

The routing structure uses Echo's Group feature to create a clean hierarchy:

```
/api/v1
├── /radios
│   ├── GET /          -> handleGetRadios
│   └── GET /:protocol -> handleGetRadios
└── /meshcore
    ├── GET /                         -> handleGetMeshCore
    ├── GET /groups                   -> handleGetMeshCoreGroups
    ├── GET /packets                  -> handleGetMeshCorePackets
    └── /nodes
        ├── GET /                     -> handleGetMeshCoreNodes
        └── GET /close-to/:publickey  -> handleGetMeshCoreNodesCloseTo
```

## Testing Infrastructure

The test suite uses an in-memory SQLite database for fast, isolated unit tests:

- `setupTestServer()`: Creates a test Echo instance with routes and an in-memory DB
- `teardownTestServer()`: Cleans up test resources
- Test cases cover all endpoints with various query parameters
- Benchmarks for performance testing
- An example showing how to populate test data
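The test cases follow Go's usual table-driven pattern around `net/http/httptest`. Here is a self-contained sketch of that pattern; the handler below is a trivial stand-in for the Echo instance that `setupTestServer()` returns, and the paths are illustrative:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// handler stands in for the Echo instance under test; it echoes the
// request path back so the assertions below have something to check.
var handler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, r.URL.Path)
})

// runCase performs one in-process GET and returns status code and body,
// the two things each table entry in a handler test asserts on.
func runCase(path string) (int, string) {
	req := httptest.NewRequest(http.MethodGet, path, nil)
	rec := httptest.NewRecorder()
	handler.ServeHTTP(rec, req)
	return rec.Code, rec.Body.String()
}

func main() {
	cases := []struct {
		path     string
		wantCode int
	}{
		{"/api/v1/radios", http.StatusOK},
		{"/api/v1/meshcore/packets", http.StatusOK},
	}
	for _, tc := range cases {
		code, body := runCase(tc.path)
		fmt.Printf("%s -> %d %q\n", tc.path, code, body)
	}
}
```

Because the request never touches a network socket, each case runs in microseconds, which is what keeps the suite fast enough for an in-memory database per test.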

## Usage

### Creating a Server

```go
import (
    "github.com/sirupsen/logrus"

    "git.maze.io/ham/hamview/server"
)

logger := logrus.New()

serverConfig := &server.Config{
    Listen: ":8073",
}

dbConfig := &server.DatabaseConfig{
    Type: "postgres",
    Conf: "host=localhost user=ham dbname=hamview",
}

srv, err := server.New(serverConfig, dbConfig, logger)
if err != nil {
    logger.Fatal(err)
}

if err := srv.Run(); err != nil {
    logger.Fatal(err)
}
```

### Running Tests

```sh
# Run all tests
go test -v ./server/

# Run a specific test
go test -v ./server/ -run TestRadiosEndpoints

# Run with coverage
go test -v -cover ./server/

# Run benchmarks
go test -v -bench=. ./server/
```

## Adding New Endpoints

1. Add a handler function in the appropriate `handlers_*.go` file:

   ```go
   func (s *Server) handleNewEndpoint(c echo.Context) error {
       // Implementation
       return c.JSON(http.StatusOK, result)
   }
   ```

2. Register the route in `router.go`:

   ```go
   func setupSomethingRoutes(s *Server, api *echo.Group) {
       group := api.Group("/something")
       group.GET("/new", s.handleNewEndpoint)
   }
   ```

3. Add a test in `server_test.go`:

   ```go
   func TestNewEndpoint(t *testing.T) {
       e, _ := setupTestServer(t)
       defer teardownTestServer(t)

       req := httptest.NewRequest(http.MethodGet, "/api/v1/something/new", nil)
       rec := httptest.NewRecorder()

       e.ServeHTTP(rec, req)

       // Assertions
   }
   ```

## Backward Compatibility

The root `server.go` file maintains backward compatibility by re-exporting types and providing a compatibility wrapper for `NewServer()`. Existing code continues to work without changes.

## Best Practices

1. **Handler naming**: Use the `handle` prefix (e.g., `handleGetRadios`)
2. **Context parameter**: Use the short name `c` for `echo.Context`
3. **Error handling**: Always use `s.apiError()` for consistent error responses
4. **Query parameters**: Use helper functions like `getQueryInt()` for parsing
5. **Testing**: Write tests for both success and error cases
6. **Documentation**: Add godoc comments for exported functions

## Performance Considerations

- Use in-memory SQLite for tests (fast and isolated)
- Benchmark critical endpoints
- Use Echo's built-in middleware for CORS, logging, etc.
- Consider adding route-specific middleware for caching or rate limiting

## Future Enhancements

Potential improvements to consider:

- Add middleware for authentication/authorization
- Implement request validation using a schema library
- Add structured logging with trace IDs
- Implement health check and metrics endpoints
- Add integration tests against a real PostgreSQL instance
- Consider adding OpenAPI/Swagger documentation
- Add graceful shutdown handling