Docker Dev Environment: Quick Setup & Project Structure
Welcome, fellow developers! Today, we're diving deep into something super important for any modern project, especially a multi-tenant monitoring SaaS like the one mentioned (hello, eovipmak and v-insight folks!): setting up a robust, consistent, and easy-to-manage development environment using Docker. Trust me, guys, this isn't just about getting your code to run; it's about creating a frictionless workflow that saves you headaches and keeps your team productive. We're going to walk through how to create a Docker-based development environment from the ground up, focusing on a clear project structure and leveraging the power of docker-compose to orchestrate everything. By the end of this guide, you'll have a fully functional setup, complete with hot-reloading for both your Go backend and SvelteKit frontend, all callable with a simple make up command. This setup ensures that everyone on your team, regardless of their operating system, is working with the exact same dependencies and configurations, virtually eliminating those dreaded "it works on my machine" moments. So, buckle up, because we're about to make your development life a whole lot easier and more consistent, ensuring you can focus on building awesome features for your multi-tenant monitoring solution rather than battling environment issues. We'll cover everything from laying out your project directories to crafting multi-stage Dockerfiles and streamlining commands with a Makefile. Let's get this Docker environment rocking!
Why Docker for Development? The Game-Changer
Alright, folks, let's kick things off by talking about why we even bother with Docker for development. Seriously, why add another layer of complexity? Well, the answer is simple: consistency and isolation. Imagine you're working on a complex multi-tenant monitoring SaaS. You've got a Go backend, a Go worker service, a SvelteKit frontend, and a PostgreSQL database. Each of these components has its own set of dependencies, specific versions of programming languages, and runtime configurations. Without Docker, getting all these pieces to play nicely on every developer's machine can be an absolute nightmare. One developer might have Node.js version 18, another 20; one might have PostgreSQL 13 installed directly, another 15. These discrepancies lead to endless hours of debugging environmental issues rather than actual code problems. This is where a Docker-based development environment truly shines. Docker packages your applications and all their dependencies into isolated containers, ensuring that your application runs exactly the same way everywhere: from your local machine to testing environments and even production. For a multi-tenant system like eovipmak or v-insight, where consistency across different deployments or instances is paramount, this local development consistency is a massive win. It means faster onboarding for new team members, as they can clone the repo and run a single command to get everything working. It also means you can easily experiment with different database versions or library updates without polluting your host machine's global environment. We're talking about a significant boost in productivity, reduced friction between development and operations (DevOps, anyone?), and a much more reliable testing ground for your features. The isolation provided by Docker containers prevents conflicts between projects or between project dependencies and your host system. Want to try out a new Go version?
Just update your Dockerfile, and it only affects that specific service container, not your entire machine. This power, coupled with the orchestration capabilities of docker-compose, makes setting up our Docker environment not just a good idea, but an essential one for any serious development effort, especially when building sophisticated systems like our multi-tenant monitoring solution. It's the ultimate tool for achieving a truly portable and reproducible development environment, allowing you to focus on innovation instead of configuration woes. Plus, it teaches you valuable skills that translate directly to deploying your applications in a production setting, making you a more versatile engineer. So, yeah, it's a game-changer.
Laying the Foundation: Your Project Structure
Before we dive into the nitty-gritty of Dockerfiles and docker-compose.yml, let's get our house in order with a solid project structure. A well-organized codebase is the backbone of any successful project, especially a complex multi-service application like our multi-tenant monitoring SaaS. It makes navigation easier, promotes clear separation of concerns, and simplifies the Docker environment setup. Think of it as mapping out the blueprint before you start building. Our goal here is to create a structure that's intuitive for developers, clearly delineating where each part of our application lives. This isn't just about aesthetics; it's about maintainability and scalability. When a new developer joins the team, they should be able to instantly understand where the backend code resides, where the frontend is, and where the Docker-related configurations are stored. This clarity reduces the learning curve significantly and minimizes the chance of errors or confusion. For instance, having a dedicated docker directory for shared configuration files or specific network setups ensures that all Docker-related assets are neatly encapsulated and easy to locate. The presence of a .env.example file immediately signals where environment variables should be configured, preventing developers from hardcoding sensitive information. Similarly, a Makefile acts as a central control panel, abstracting away complex Docker commands into simple, human-readable instructions. This structured approach is particularly beneficial for our multi-tenant application, where multiple services need to interact seamlessly. Each service (backend, frontend, worker) gets its own dedicated directory, fostering modularity and allowing individual services to evolve independently without stepping on each other's toes. This separation is crucial for managing the complexity inherent in a SaaS solution that caters to various tenants. 
By establishing this clear structure early on, we set ourselves up for smoother development, easier debugging, and a more robust overall Docker environment. It's about thinking ahead and building a foundation that can support future growth and changes, making our lives as developers much, much easier in the long run. Let's look at the specific directories and files we'll be creating:
/
├── backend            # Go API service code
├── frontend           # SvelteKit UI code
├── worker             # Go background worker service code
├── docker             # (Optional) Shared Docker configurations, like custom networks or base images
├── docker-compose.yml # Defines and runs our multi-container Docker application
├── .env.example       # Template for environment variables
└── Makefile           # Shortcut commands for Docker operations
Each of these directories and files serves a specific purpose, contributing to a clean and efficient Docker environment. The backend directory will house our main Go API, frontend will contain our SvelteKit application, and worker will be for any background processing tasks, also in Go. The docker-compose.yml file is the star of the show, orchestrating all our services. We'll dive into that next!
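One optional habit worth adopting alongside this layout: a .dockerignore file in each service directory keeps build contexts small and stops junk (like a host-installed frontend/node_modules) from being copied into your images. A minimal sketch, assuming typical Go and SvelteKit build artifacts:

```
# backend/.dockerignore and worker/.dockerignore (suggested)
tmp/
.git/
*.log

# frontend/.dockerignore (suggested)
node_modules/
.svelte-kit/
build/
```

Smaller contexts mean faster builds, which pays off every time hot-reloading triggers a rebuild.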
Crafting Your docker-compose.yml File: The Orchestrator
Alright, guys, this is where the magic really happens for our Docker environment setup: creating the docker-compose.yml file. Think of docker-compose as the conductor of our orchestra, telling each instrument (container) what to do and how to communicate. This single YAML file will define all the services that make up our multi-tenant monitoring SaaS: from the database to the backend API, the worker service, and the frontend. The beauty of docker-compose is its ability to spin up an entire multi-container application with just one command, making our development environment incredibly easy to set up and tear down. This is crucial for a complex system where you have several interdependent components. Instead of manually starting PostgreSQL, then your backend, then your worker, and finally your frontend, docker-compose handles all that orchestration for you, ensuring services start in the correct order and can communicate effectively. We're going to define four main services within this file: PostgreSQL, our Go Backend API, our Go Worker, and our SvelteKit Frontend. Each service will have its own configuration, including the Docker image to use (or how to build it), exposed ports, mounted volumes for data persistence or hot-reloading, and environment variables. The version: '3.8' line at the top specifies the Docker Compose file format version; note that modern Docker Compose (the docker compose v2 CLI) treats this field as obsolete and ignores it, though it's harmless to keep. We'll also use depends_on to ensure services like our backend and worker only start after PostgreSQL is ready, preventing frustrating connection errors during startup. This level of granular control and automation is what makes docker-compose an absolute game-changer for managing complex Docker environments, especially for systems that need to be consistently reproducible across various developer machines.
We'll also leverage volumes to persist database data across container restarts and to enable hot-reloading for our application code, meaning any changes you save in your backend, worker, or frontend directories will automatically trigger a rebuild and restart of the respective container, without you lifting a finger. This significantly speeds up the development feedback loop and is a core part of creating an efficient development environment. Let's break down each service configuration within our docker-compose.yml:
version: '3.8'

services:
  db:
    image: postgres:15
    container_name: postgres_db
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: eovipmak_dev
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - db_data:/var/lib/postgresql/data
    networks:
      - app-network

  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
      target: development # use the hot-reload stage of the multi-stage Dockerfile
    container_name: backend_api
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: "postgresql://user:password@db:5432/eovipmak_dev?sslmode=disable"
      PORT: 8080
    volumes:
      - ./backend:/app
    depends_on:
      - db
    networks:
      - app-network

  worker:
    build:
      context: ./worker
      dockerfile: Dockerfile
      target: development # use the hot-reload stage of the multi-stage Dockerfile
    container_name: worker_service
    environment:
      DATABASE_URL: "postgresql://user:password@db:5432/eovipmak_dev?sslmode=disable"
      # Add other worker-specific environment variables here
    volumes:
      - ./worker:/app
    depends_on:
      - db
    networks:
      - app-network

  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    container_name: frontend_app
    ports:
      - "3000:3000"
    volumes:
      - ./frontend:/app
      - /app/node_modules # Anonymous volume to prevent host node_modules from overriding container's
    depends_on:
      - backend
    networks:
      - app-network

networks:
  app-network:
    driver: bridge

volumes:
  db_data:
Key takeaways from this docker-compose.yml:
- db service: We're using postgres:15. The db_data named volume ensures your database data persists even if you stop or remove the container, which is critical for development so you don't lose your work. The "5432:5432" mapping exposes the container's port on your host machine.
- backend and worker services: These use a build context pointing to their respective directories and Dockerfiles, which we'll create next. The backend exposes port 8080, both mount their code as volumes for hot-reloading (./backend:/app), and depends_on: db ensures the database container starts first.
- frontend service: Similar to the backend, it builds from its frontend directory, exposes port 3000, and depends_on: backend. The anonymous volume /app/node_modules is a common trick for Node.js projects to prevent the host-mounted node_modules from shadowing the one installed inside the container, ensuring the dependencies installed inside the container are used.
- app-network: All services are connected to a custom bridge network, which allows them to communicate with each other using their service names (e.g., backend can reach the database at the hostname db). This makes the Docker environment cohesive and self-contained.
- volumes: Defined at the root to create named volumes (like db_data) for persistent storage.
This setup provides a highly functional and interconnected Docker environment that's perfect for our multi-tenant SaaS. It's robust, easy to manage, and forms the core of our project structure.
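One caveat worth knowing: depends_on on its own only waits for the db container to start, not for PostgreSQL to actually accept connections. If your backend hits connection errors on first boot, a healthcheck plus the service_healthy condition (supported by the modern Compose Spec / docker compose v2) tightens this up. A sketch, reusing the credentials from the compose file above:

```yaml
services:
  db:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d eovipmak_dev"]
      interval: 5s
      timeout: 3s
      retries: 5
  backend:
    depends_on:
      db:
        condition: service_healthy
```

With this in place, Compose won't start the backend until pg_isready reports the database as accepting connections.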
Backend Brilliance: Dockerizing Your Go API
Now that our docker-compose.yml is orchestrating everything, let's turn our attention to the individual services, starting with our Go backend API. Dockerizing a Go application, especially for a development environment, requires a smart approach. We want quick builds, small final images, and, perhaps most importantly for development, hot-reloading. This means any code changes you make on your host machine instantly reflect inside the running Docker container, giving you that rapid feedback loop that's essential for productive coding. For our Go backend, we'll use a multi-stage build within our Dockerfile. This is a best practice that allows us to leverage a larger, feature-rich base image (like golang:1.21-alpine) for building our application, and then copy only the compiled binary into a much smaller, leaner final image (like alpine:latest). The benefit? Dramatically smaller Docker images, which translates to faster downloads, less disk space usage, and improved security because the final image contains only what's absolutely necessary to run the application, not all the build tools and dependencies. This is particularly important for production deployments, but even in development, smaller images mean faster rebuilds and less resource consumption on your local machine, keeping your Docker environment agile. For the hot-reloading magic, we'll integrate a tool called air. air is a fantastic Go application reloader that watches for file changes and automatically recompiles and restarts your Go server. Combined with Docker volumes that mount your local code into the container, air makes the development experience seamless, almost as if you were running the application directly on your host machine. When setting up air in our Dockerfile, we'll install it as part of our development image and configure our CMD to run air instead of directly executing the compiled Go binary.
This ensures that air takes over the responsibility of watching and reloading, which is exactly what we want in our interactive development environment. Remember, the goal here is to optimize for developer experience while still using Docker's advantages. We're not just throwing our code into a container; we're crafting an intelligent Docker setup that enhances our workflow and mirrors a potential production deployment strategy without sacrificing development velocity. This thoughtful approach to our Docker environment is what truly makes a difference in project success.
Dockerfile for Go Backend
Create a file named Dockerfile inside your backend directory:
# --- Builder Stage ---
FROM golang:1.21-alpine AS builder
WORKDIR /app
# Copy go.mod and go.sum files to download dependencies
COPY go.mod go.sum ./
RUN go mod download
# Copy the rest of the application source code
COPY . .
# Build the application
RUN CGO_ENABLED=0 GOOS=linux go build -o /app/main .
# --- Development Stage with Hot-Reload (using 'air') ---
FROM golang:1.21-alpine AS development
WORKDIR /app
# Install 'air' for hot-reloading (pinned to a release that builds on Go 1.21;
# newer releases live at github.com/air-verse/air)
RUN go install github.com/cosmtrek/air@v1.49.0
# Copy go.mod and go.sum for dependency management during development
COPY go.mod go.sum ./
RUN go mod download
# Copy local source code via volume mount (configured in docker-compose.yml)
# This stage is primarily for development, volumes will overwrite the code here.
# But it's good to have a runnable image even without volumes for basic checks.
# Set the entrypoint to run 'air'
ENTRYPOINT ["air"]
CMD ["-c", ".air.toml"]
# --- Production Stage (Lean image) ---
FROM alpine:latest AS production
WORKDIR /app
# Copy only the compiled binary from the builder stage
COPY --from=builder /app/main .
# Expose the port (e.g., 8080) the application listens on
EXPOSE 8080
# Run the compiled application
CMD ["./main"]
Explanation:

- builder stage: This stage takes care of downloading dependencies and compiling our Go application into a static binary. CGO_ENABLED=0 GOOS=linux ensures a statically linked binary compatible with Alpine Linux.
- development stage: This stage is what our docker-compose.yml will typically use for development. It includes air for hot-reloading. Notice how it doesn't copy the full source code, because we'll be mounting our local backend directory into /app in docker-compose.yml, which effectively provides the code for air to watch.
- .air.toml: You'll need a simple .air.toml file in your backend directory to configure air. A basic one could look like this:

root = "."
tmp_dir = "tmp"

[build]
  cmd = "go build -o ./tmp/main ."
  bin = "./tmp/main"
  include_ext = ["go", "tpl", "tmpl", "html"]
  exclude_dir = ["tmp"]

- production stage: This is our optimized, minimal image. It only copies the final compiled binary from the builder stage, making it extremely lightweight and secure. For our development environment, we focus on the development stage.
Worker Wonder: Dockerizing Your Go Worker
Just like our backend API, our Go worker service also needs its own Docker environment setup. The good news is that much of what we learned from the backend Dockerfile can be applied here, with some minor adjustments to reflect the worker's purpose. Typically, a worker service might not expose external HTTP ports but instead focuses on consuming messages from a queue, processing data, or running scheduled tasks. However, for development purposes, having it run within a container, leveraging shared network capabilities with our database, and enjoying hot-reloading is just as valuable. Consistency across our entire application stack is key for a robust multi-tenant monitoring SaaS. The principles of the multi-stage build remain paramount here: a dedicated builder stage for compilation and a leaner development stage for rapid iteration. This not only keeps our container images small but also ensures that the build process is reproducible and efficient. We'll still want air for hot-reloading, because even if your worker isn't serving HTTP requests, quick restarts after code changes are invaluable for debugging and feature development. Imagine making a change to how your worker processes a monitoring event; without hot-reloading, you'd have to manually stop and restart the Docker container, losing precious development time. With air, those changes are picked up almost instantly, allowing you to focus on the logic rather than the operational aspects of running your worker. The worker service, being a critical component of a monitoring system (e.g., processing alerts, aggregating metrics), benefits immensely from being part of a unified Docker environment. It ensures that its dependencies, Go version, and environment variables are always consistent with the rest of the application. This consistency is vital for preventing subtle bugs that might arise from different environments. 
For instance, if the worker relies on a specific database driver version or an external API client, Docker guarantees that the exact same versions are present in both your development and (eventually) production environments. So, let's create a lean, efficient Dockerfile for our worker, keeping the hot-reloading dream alive!
Dockerfile for Go Worker
Create a Dockerfile inside your worker directory:
# --- Builder Stage ---
FROM golang:1.21-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /app/main .
# --- Development Stage with Hot-Reload (using 'air') ---
FROM golang:1.21-alpine AS development
WORKDIR /app
# Pinned to a release that builds on Go 1.21; newer releases live at github.com/air-verse/air
RUN go install github.com/cosmtrek/air@v1.49.0
COPY go.mod go.sum ./
RUN go mod download
# The source code will be mounted via volume in docker-compose for hot-reloading
ENTRYPOINT ["air"]
CMD ["-c", ".air.toml"]
# --- Production Stage (Lean image) ---
FROM alpine:latest AS production
WORKDIR /app
COPY --from=builder /app/main .
# No EXPOSE needed if worker doesn't listen on external ports
CMD ["./main"]
Note: You'll also need a .air.toml file in your worker directory, identical to the one in your backend directory, to configure air for the worker service.
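For completeness, here's a minimal worker sketch for worker/main.go that air can watch. processBatch is a hypothetical stand-in for your real queue/DB logic (e.g., processing alerts or rolling up metrics), not a prescribed design:

```go
// main.go — minimal worker sketch (placeholder logic, not the real pipeline).
package main

import (
	"log"
	"os"
	"os/signal"
	"time"
)

// processBatch is a stand-in for real work (e.g. aggregating monitoring
// events). It returns how many items were handled so callers can log progress.
func processBatch(items []string) int {
	for _, it := range items {
		log.Printf("processed %s", it)
	}
	return len(items)
}

func main() {
	// DATABASE_URL comes from docker-compose.yml, same as the backend.
	log.Printf("worker starting (db: %s)", os.Getenv("DATABASE_URL"))

	stop := make(chan os.Signal, 1)
	signal.Notify(stop, os.Interrupt)

	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			processBatch([]string{"alert-check", "metric-rollup"})
		case <-stop:
			log.Println("worker shutting down")
			return
		}
	}
}
```

The signal handling matters in containers: it lets docker-compose down stop the worker cleanly instead of waiting for the kill timeout.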
Frontend Frenzy: Dockerizing SvelteKit
Now for the user-facing part of our multi-tenant monitoring SaaS: the SvelteKit frontend! Just like our backend and worker services, we want our frontend to run consistently within our Docker environment and, crucially, support hot-reloading for a smooth development experience. SvelteKit, being a modern JavaScript framework, has its own set of dependencies and build processes. Dockerizing it ensures that every developer is using the correct Node.js version and npm/yarn packages, avoiding compatibility issues that often plague JavaScript projects across different machines. The primary goal here is to create a Dockerfile that sets up a Node.js environment, installs dependencies, and then runs the SvelteKit development server, all while allowing your local code changes to propagate instantly. For this, we'll use a node:20-alpine base image. Alpine-based images are fantastic because they are extremely lightweight, leading to smaller Docker image sizes and faster build times, a win-win for our development environment. We'll also take advantage of Docker volumes in our docker-compose.yml to mount your local frontend directory directly into the container. This is the key to enabling hot-reloading: when you save a file on your host machine, the change is immediately visible inside the container, and SvelteKit's development server (which has built-in hot-module replacement) will automatically refresh your browser. This instant feedback loop is absolutely vital for frontend development, as it allows designers and developers to see the effects of their changes without manual restarts or complex setup steps. We'll need to be mindful of node_modules here; it's a common pitfall. If you mount your entire frontend directory, the node_modules on your host might conflict with the node_modules installed inside the container.
Our docker-compose.yml addresses this by using an anonymous volume for /app/node_modules, ensuring that the container's internal node_modules are used, providing a clean and isolated dependency environment. This careful consideration for dependency management is a hallmark of a well-crafted Docker environment, especially when dealing with the intricacies of modern JavaScript ecosystems. By consolidating our frontend within the Docker setup, we create a unified development experience across our entire application, ensuring that the SvelteKit UI, Go API, and PostgreSQL database all play nicely together within a consistent, version-controlled environment. This makes scaling and team collaboration for our multi-tenant monitoring solution much more manageable and efficient. Let's get that SvelteKit UI humming in its own dedicated container!
Dockerfile for SvelteKit Frontend
Create a Dockerfile inside your frontend directory:
# --- Base Stage ---
FROM node:20-alpine
WORKDIR /app
# Copy package.json and package-lock.json/yarn.lock to install dependencies
COPY package*.json ./
# Install dependencies
# Use 'npm ci' for clean installs in CI/CD, 'npm install' for general dev
RUN npm install
# Copy the rest of the application source code
# This content will be overwritten by volume mount in docker-compose for dev
COPY . .
# Expose the development port. Note: modern (Vite-based) SvelteKit defaults to
# port 5173, so we pass --port explicitly below to match docker-compose.yml.
EXPOSE 3000
# Command to run the SvelteKit development server
CMD ["npm", "run", "dev", "--", "--host", "0.0.0.0", "--port", "3000"]
Explanation:

- We use node:20-alpine for a lightweight Node.js environment.
- npm install downloads all necessary frontend dependencies (use npm ci for clean, lockfile-exact installs in CI/CD).
- EXPOSE 3000 documents that the container listens on port 3000.
- The CMD starts the SvelteKit development server with --host 0.0.0.0, binding it to all interfaces so it's accessible from outside the container (e.g., your host machine).
- Remember the anonymous volume for node_modules in docker-compose.yml to ensure container-specific dependencies are used, even with code mounted via volume.
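As an alternative to passing flags through the Dockerfile CMD, you can bake the host and port into the dev script itself. A hypothetical package.json fragment, assuming a Vite-based SvelteKit setup:

```json
{
  "scripts": {
    "dev": "vite dev --host 0.0.0.0 --port 3000"
  }
}
```

With this, the Dockerfile CMD can simply be ["npm", "run", "dev"], and anyone running npm run dev locally gets the same binding behavior as the container.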
Makefile Magic: Streamlining Your Workflow
Okay, guys, we've got our docker-compose.yml defined and all our Dockerfiles in place. That's a huge step towards a robust Docker environment! But let's be real: typing out docker-compose up --build -d or docker-compose logs -f every time can get tedious, and it's easy to make typos. This is where a Makefile comes into play, acting as your personal command center for the entire development environment. A Makefile allows us to define simple, memorable commands (like make up or make down) that abstract away the longer, more complex Docker commands. This isn't just about convenience; it's about consistency across your team. Everyone uses the exact same commands, reducing ambiguity and ensuring that operations like starting, stopping, or rebuilding services are performed uniformly. For a multi-tenant monitoring SaaS with several services, this standardization is invaluable. It dramatically speeds up development workflows and minimizes the chances of errors caused by incorrect command usage. Think of it as creating a custom set of aliases for your project's Docker operations. It makes your Docker environment setup not just functional but also user-friendly for every developer. It's a small file, but it packs a powerful punch in streamlining your daily routine and enhancing team collaboration. A well-crafted Makefile is often a sign of a mature and considerate project structure, reflecting an understanding of developer efficiency. Let's define some essential commands that will make interacting with our Docker setup a breeze.
# Note: Makefile recipe lines must be indented with a TAB, not spaces.
.PHONY: up down logs rebuild clean shell exec

# Start all services in detached mode
up:
	docker-compose up --build -d

# Stop all services
down:
	docker-compose down

# View logs of all services (follow mode)
logs:
	docker-compose logs -f

# Rebuild all services and start them
rebuild:
	docker-compose down
	docker-compose build
	docker-compose up -d

# Remove all containers, networks, volumes (including named volumes) and images
clean:
	docker-compose down --rmi all --volumes --remove-orphans

# Shell into a specific service (e.g., make shell service=backend)
shell:
	@if [ -z "$(service)" ]; then \
		echo "Usage: make shell service=<service_name>"; \
		exit 1; \
	fi; \
	docker-compose exec $(service) /bin/sh || docker-compose exec $(service) /bin/bash

# Execute a command in a specific service (e.g., make exec service=backend cmd="go test ./...")
exec:
	@if [ -z "$(service)" ]; then \
		echo "Usage: make exec service=<service_name> cmd=\"<command>\""; \
		exit 1; \
	fi; \
	docker-compose exec $(service) $(cmd)
Explanation of Makefile commands:
- up: The most frequently used command! It builds images (if not already built or if changes are detected), creates containers, networks, and volumes, and then starts all services in the background (-d for detached mode).
- down: Stops and removes all containers and the default network created by docker-compose up.
- logs: Displays aggregated logs from all services in follow mode (-f); great for debugging.
- rebuild: A handy command for when you've made changes to a Dockerfile or want to ensure everything is fresh. It tears down services, rebuilds images, and then brings everything back up.
- clean: A powerful command to completely reset your Docker environment. It removes all containers, networks, all images (--rmi all), and volumes (--volumes), including named volumes like db_data, along with any orphaned containers. Use with caution!
- shell: Allows you to open an interactive shell inside a running service container. Super useful for debugging or running commands manually within a specific service.
- exec: Similar to shell, but allows you to execute a specific command within a service container without opening an interactive shell.
This Makefile provides a robust set of tools for managing your Docker environment, making it incredibly easy to interact with your multi-tenant application services.
Bringing It All Together: Your First Run!
Alright, guys, you've done an amazing job setting up your Docker environment! We've got our project structure, a powerful docker-compose.yml orchestrating our services, and carefully crafted Dockerfiles for our Go backend, Go worker, and SvelteKit frontend, all set up for hot-reloading. Now comes the exciting part: bringing it all to life with your first run! Before we hit that make up command, there's one small but crucial step: environment variables. While our docker-compose.yml includes some basic settings, in a real-world multi-tenant monitoring SaaS, you'll inevitably have sensitive information or configuration specific to your local machine that you don't want to commit to version control. This is where the .env.example file comes in. It serves as a template, showing you what environment variables your application expects. You'll need to create a .env file (which should be in your .gitignore) based on this example, filling in any necessary values. This practice keeps your sensitive credentials safe and allows for flexible configuration across different environments (development, staging, production). Once that's handled, we're ready to unleash the power of our Makefile. Just a single command, and you'll see your entire application stack gracefully spinning up, thanks to all the hard work we put into defining our services and their interdependencies. This unified launch capability is the ultimate payoff of a well-structured Docker environment, demonstrating its efficiency and consistency. The initial build might take a little while as Docker downloads base images and compiles your Go applications, but subsequent runs will be much faster thanks to Docker's caching mechanisms. Once everything is up, you'll have a fully functional development environment ready for you to start coding, debugging, and building awesome features for your multi-tenant solution.
Environment Variables with .env.example
Create a .env.example file in your root project directory:
# PostgreSQL Database Configuration
DB_HOST=db
DB_PORT=5432
DB_USER=user
DB_PASSWORD=password
DB_NAME=eovipmak_dev
# Backend API Configuration
BACKEND_PORT=8080
# You might add more specific backend environment variables here, like API keys, external service URLs, etc.
# Worker Service Configuration (example; only relevant if you add a message broker service)
WORKER_QUEUE_URL=amqp://guest:guest@rabbitmq:5672/
# Add worker-specific configs, e.g., task intervals, topic names
# Frontend Configuration
FRONTEND_PORT=3000
# Frontend might need backend API URL, e.g., VITE_API_BASE_URL=http://localhost:8080
Then, create a .env file (and make sure it's in your .gitignore!) and populate it based on .env.example. Docker Compose automatically reads this .env file, but note what that means: its values are used for variable substitution inside docker-compose.yml (e.g., ${DB_USER}); they are not injected into containers unless you reference them in a service's environment section or point an env_file entry at the file.
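To pass the whole .env file into a container's environment rather than referencing variables one by one, use env_file in the service definition. A sketch for the backend service:

```yaml
services:
  backend:
    env_file:
      - .env
```

This is convenient in development, but be deliberate about it: every variable in the file becomes visible to that container, including ones meant for other services.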
Kicking Things Off: Your First make up!
Navigate to your root project directory in your terminal and run:
make up
Docker Compose will:
- Build your backend, worker, and frontend Docker images (if not already built or if there are changes).
- Create and start the db, backend, worker, and frontend containers.
- Connect them all to the app-network.
You should see output indicating that services are being created and started. Once completed, your application services should be accessible:
- PostgreSQL: localhost:5432
- Backend API: http://localhost:8080
- Frontend: http://localhost:3000
Open your browser to http://localhost:3000, and you should see your SvelteKit frontend! Try making a change in your backend or frontend code; you'll notice the hot-reloading in action thanks to air and SvelteKit's dev server.
Troubleshooting Tips
Even with the best setup, sometimes things go sideways. Here are a few common issues and how to tackle them:
- Port Conflicts: If you get an error like "port is already allocated", it means something else on your host machine is using 5432, 8080, or 3000. You can either change the port mapping in docker-compose.yml (e.g., "8081:8080") or stop the conflicting application.
- Container Not Starting: Use make logs to view the output of all services and look for error messages. If a specific service is failing, try docker-compose logs <service_name> (e.g., docker-compose logs backend).
- Build Errors: If make up fails during the build phase, inspect the output carefully. It's usually a syntax error in your Dockerfile or a missing dependency in your go.mod or package.json.
- Database Connection Issues: Ensure your DATABASE_URL in docker-compose.yml (and .env) correctly points to the db service (postgresql://user:password@db:5432/eovipmak_dev?sslmode=disable). Remember, db is the hostname inside the Docker network, not localhost.
- Fresh Start: When in doubt, make clean followed by make up can often resolve mysterious issues by giving you a completely fresh Docker environment.
Final Thoughts and Next Steps
Congratulations, rockstars! You've successfully navigated the intricate world of Docker and docker-compose to set up a powerful, consistent, and highly efficient development environment for your multi-tenant monitoring SaaS. We've covered everything from laying out a logical project structure to crafting robust Dockerfiles for your Go backend, Go worker, and SvelteKit frontend, ensuring hot-reloading capabilities for a rapid development feedback loop. The docker-compose.yml file now acts as the single source of truth for orchestrating all your services, and the Makefile simplifies complex Docker commands into intuitive, single-word actions. This comprehensive Docker environment setup doesn't just get your application running; it establishes a foundation for collaborative development, reduces