Secure API Authentication with JWT and Microservices Architecture

Week 6: Secure Authentication Using JWT for APIs


Implementing Secure Authentication Using JWT for APIs

Goal: Implement secure authentication using JWT (JSON Web Tokens) for managing API access.

Topics Covered:

  • JWT Generation
  • JWT Validation
  • JWT Integration with API Endpoints
  • Security Best Practices

JWT Generation

Steps:

  • Install the PyJWT library: pip install pyjwt
  • Configure the JWT secret, header, and payload.
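The steps above can be sketched with PyJWT as follows; the secret key and claim values here are placeholders, not production settings:

```python
import datetime

import jwt  # PyJWT: pip install pyjwt

SECRET_KEY = "change-me-in-production"  # placeholder; load from config or env in practice

def generate_token(user_id: str, expires_minutes: int = 30) -> str:
    """Create a signed JWT carrying standard registered claims."""
    now = datetime.datetime.now(datetime.timezone.utc)
    payload = {
        "sub": user_id,                                            # subject of the token
        "iat": now,                                                # issued-at
        "exp": now + datetime.timedelta(minutes=expires_minutes),  # expiry
    }
    # "HS256" selects HMAC-SHA256; PyJWT builds the header automatically
    return jwt.encode(payload, SECRET_KEY, algorithm="HS256")

token = generate_token("alice")
print(token.count("."))  # header.payload.signature → prints 2
```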


Summary of HS256: a symmetric signing algorithm (HMAC-SHA256) in which the same secret key both creates and verifies the signature. Note that it signs the token; it does not encrypt the payload.

JWT Validation

Steps:

  • Use PyJWT to validate JWTs.
  • Verify the signature to ensure the token has not been tampered with.
  • Extract the payload.
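A matching validation sketch using PyJWT's decode(), which verifies the signature and the exp claim (if present) in one call; the secret is the same placeholder as above:

```python
import jwt  # PyJWT: pip install pyjwt

SECRET_KEY = "change-me-in-production"  # must match the signing key (HS256 is symmetric)

def validate_token(token: str):
    """Return the payload if the token is valid, else None."""
    try:
        # decode() checks the signature and standard claims in one step
        return jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
    except jwt.ExpiredSignatureError:
        return None  # token has expired
    except jwt.InvalidTokenError:
        return None  # bad signature, malformed token, etc.

good = jwt.encode({"sub": "alice"}, SECRET_KEY, algorithm="HS256")
print(validate_token(good)["sub"])       # alice
print(validate_token(good + "broken"))   # None (signature no longer matches)
```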


JWT Integration with API Endpoints

Steps:

  • Protect API routes by verifying JWTs in the request headers.
  • Ensure valid tokens before allowing access.
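One way to sketch this check without tying it to a particular web framework; the handler signature and the header dict are illustrative assumptions, and in a real app the decorator would wrap framework route handlers:

```python
import functools

import jwt  # PyJWT: pip install pyjwt

SECRET_KEY = "change-me-in-production"  # placeholder secret

def require_jwt(handler):
    """Decorator sketch: the wrapped handler receives the request headers as a dict."""
    @functools.wraps(handler)
    def wrapper(headers: dict):
        auth = headers.get("Authorization", "")
        if not auth.startswith("Bearer "):
            return {"status": 401, "error": "missing bearer token"}
        try:
            claims = jwt.decode(auth[len("Bearer "):], SECRET_KEY, algorithms=["HS256"])
        except jwt.InvalidTokenError:
            return {"status": 401, "error": "invalid token"}
        return handler(claims)  # only valid tokens reach the protected handler
    return wrapper

@require_jwt
def get_profile(claims):
    return {"status": 200, "user": claims["sub"]}

token = jwt.encode({"sub": "alice"}, SECRET_KEY, algorithm="HS256")
print(get_profile({"Authorization": "Bearer " + token}))  # {'status': 200, 'user': 'alice'}
print(get_profile({}))  # {'status': 401, 'error': 'missing bearer token'}
```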

Storing Tokens Securely

HTTP-only Cookies


Security Considerations:

  • Set HttpOnly so the cookie cannot be read by client-side JavaScript.
  • Set Secure=True to ensure cookies are only sent over HTTPS.
  • Use SameSite=Strict or SameSite=Lax to prevent CSRF attacks.
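A minimal stdlib sketch of building a Set-Cookie header with these flags; the cookie name access_token and the token value are assumptions:

```python
from http.cookies import SimpleCookie

def build_token_cookie(token: str) -> str:
    """Build a Set-Cookie header value carrying the JWT with the flags above."""
    cookie = SimpleCookie()
    cookie["access_token"] = token
    cookie["access_token"]["httponly"] = True      # hidden from client-side JavaScript
    cookie["access_token"]["secure"] = True        # only sent over HTTPS
    cookie["access_token"]["samesite"] = "Strict"  # not sent on cross-site requests
    cookie["access_token"]["path"] = "/"
    return cookie.output(header="").strip()

header = build_token_cookie("demo.token.value")
print(header)  # access_token=demo.token.value plus the HttpOnly/Secure/SameSite attributes
```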

Designing Python and Java Client SDKs for REST APIs

Key Functions of the SDK

The SDK will support the following key functions:

  • Authentication: Handling login, token generation, refresh, and expiration.
  • Resource Management: Interacting with cloud resources such as creating, reading, updating, and deleting resources.
  • Error Handling: Capturing and managing API errors gracefully.

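A minimal sketch of such an SDK in Python. The endpoint paths (/login, /resources) and the injectable transport callable are illustrative assumptions, not a real API; a fake in-memory transport stands in for the server so the example is self-contained:

```python
class APIError(Exception):
    """Raised when the API returns a non-2xx status (graceful error handling)."""
    def __init__(self, status, message):
        super().__init__(f"{status}: {message}")
        self.status = status

class CloudClient:
    """Sketch of the SDK surface described above."""
    def __init__(self, base_url, transport):
        self.base_url = base_url.rstrip("/")
        self.transport = transport  # callable(method, url, headers, body) -> (status, payload)
        self.token = None

    def _request(self, method, path, body=None):
        headers = {"Content-Type": "application/json"}
        if self.token:
            headers["Authorization"] = f"Bearer {self.token}"  # attach the JWT
        status, payload = self.transport(method, self.base_url + path, headers, body)
        if status >= 400:
            raise APIError(status, payload.get("error", "unknown"))
        return payload

    def login(self, username, password):
        """Authentication: obtain and store a token."""
        data = self._request("POST", "/login", {"username": username, "password": password})
        self.token = data["token"]

    def list_resources(self):
        """Resource management: read cloud resources."""
        return self._request("GET", "/resources")

# usage with a fake transport standing in for the real API
def fake_transport(method, url, headers, body):
    if url.endswith("/login"):
        return 200, {"token": "demo-token"}
    if headers.get("Authorization") == "Bearer demo-token":
        return 200, {"resources": ["vm-1"]}
    return 401, {"error": "unauthorized"}

client = CloudClient("https://api.example.com", fake_transport)
client.login("alice", "secret")
print(client.list_resources())  # {'resources': ['vm-1']}
```

Injecting the transport keeps the SDK testable; a real implementation would likely plug in an HTTP library such as requests.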

Microservices and APIs

  • Microservices are an architectural style where applications are built as a collection of loosely coupled services.
  • Each service focuses on a specific business capability and can be independently deployed.

Core Principles of Microservices

  • Single Responsibility: Each service handles a specific business function.
  • Loose Coupling: Services communicate over lightweight protocols.
  • Autonomy: Independent development, deployment, and scaling.
  • Decentralized Data Management: Each service manages its own data.
  • Failure Isolation: Failures are contained within individual services.

REST API vs. gRPC

REST API (Representational State Transfer):

  • Uses HTTP/HTTPS with JSON or XML.
  • Ideal for public, web-accessible APIs.
  • Simple and human-readable, but less performant.

gRPC (Google Remote Procedure Call):

  • Uses HTTP/2 and Protocol Buffers.
  • Optimized for high-performance, low-latency communications.
  • More complex but better for internal microservice communication.

When to Use REST or gRPC?

  • Use REST API for external-facing services where simplicity and broad compatibility are important.
  • Use gRPC when you need high performance, efficient data exchange, or streaming capabilities.

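A real gRPC example needs generated stubs and a running server, but the efficiency point can be illustrated by comparing JSON with a packed binary encoding (a rough stand-in for Protocol Buffers; the record and field layout are assumptions):

```python
import json
import struct

# the same record encoded two ways
user = {"id": 42, "balance": 1999}

json_bytes = json.dumps(user).encode()       # human-readable and self-describing
packed_bytes = struct.pack("<Iq", 42, 1999)  # fixed binary layout: 4-byte id + 8-byte balance

print(len(json_bytes), len(packed_bytes))    # prints: 27 12
```

The binary form is less than half the size here, at the cost of needing a shared schema to interpret it, which is exactly the trade-off between REST/JSON and gRPC/Protocol Buffers.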


Conclusion

  • REST API and gRPC serve different use cases within a microservice architecture.
  • REST API is suitable for external, web-facing services due to its simplicity.
  • gRPC is ideal for efficient, high-performance internal communication.
  • Choose based on performance needs, communication patterns, and service interactions.

Week 7: Monolithic vs Microservices Architectures

Concept Overview

Monolithic Architecture: A traditional approach where the entire application is built as a single, indivisible unit.

Microservices Architecture: A modern architecture where the application is divided into small, independent services that communicate via APIs.

Cloud Deployment: The process of deploying and running applications on cloud platforms like AWS, Azure, or GCP.

Elasticity: The ability of a cloud system to automatically scale resources up or down based on demand.

Stateless Computing: A computing design where the server does not store any information between requests, making it easier to scale in the cloud.

Monolithic Deployment

  • The term monolith here refers to a unit of deployment.
  • When all functionality in a system is deployed together, the deployment architecture is considered a monolith.
  • There are at least three types of monolithic systems:
      • The single-process system
      • The distributed monolith
      • Third-party black-box systems

The Single-Process System

  • The modular monolith is a variation: the single process consists of separate modules. Each module can be worked on independently, but they must still be combined for deployment.
  • If the module boundaries are well defined, this still allows a high degree of parallel working.
  • The database can be decomposed along the same lines as the modules.
  • Shopify has used this technique as an alternative to microservice decomposition.

Splitting the Monolith

A technique frequently used when refactoring a system is the strangler pattern.

  • The idea is that the old and the new can coexist, giving the new system time to grow and potentially replace the old system entirely.
  • It supports the goal of allowing incremental migration to a new system.

Strangler Pattern

  • A strategy to incrementally migrate a legacy system to a new architecture (such as microservices) without a complete rewrite.
  • In this pattern, parts of the legacy system are incrementally replaced by new microservices, with a routing layer directing traffic to either the legacy system or new components.
  • Over time, the legacy system can be “strangled” as more functionality is migrated.

Key Components:

  • Legacy Application
  • New Microservice
  • Routing Layer
  • Gradual Replacement
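The routing layer can be sketched as a simple prefix-based dispatcher; the paths and handler functions here are placeholders standing in for the legacy application and an extracted microservice:

```python
MIGRATED_PREFIXES = ("/orders", "/payments")  # functionality already moved out

def legacy_app(path: str) -> str:
    return "legacy handled " + path

def new_microservice(path: str) -> str:
    return "microservice handled " + path

def route(path: str) -> str:
    """Routing layer: send migrated paths to the new service, the rest to the legacy app."""
    if path.startswith(MIGRATED_PREFIXES):
        return new_microservice(path)
    return legacy_app(path)

print(route("/orders/7"))  # microservice handled /orders/7
print(route("/users/7"))   # legacy handled /users/7
```

Gradual replacement then amounts to growing MIGRATED_PREFIXES until nothing falls through to the legacy app.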

Data Sharing

The problem occurs when the service in question manages data that must be kept in sync between the monolith and the new service.


Data Sharing

  • In microservices, data sharing occurs when multiple services need access to common data.
  • Sharing a single database across services leads to tight coupling, which violates the independence principle of microservices.
  • Instead, services communicate through APIs to fetch or share data, maintaining data consistency through replication or communication mechanisms like event streams.

Key Components:

  • Shared Database
  • Service Communication
  • Data Replication
  • API Gateway

Data Synchronization in Microservices

  • Data Synchronization in microservices ensures that the data across different services remains consistent, especially when replicated.
  • In an event-driven architecture, services emit events when data changes, and other services subscribe to these events to update their data.
  • Message brokers facilitate this communication. Compensation logic is implemented to handle cases where synchronization fails.

Key Components:

  • Event-Driven Architecture
  • Eventual Consistency
  • Message Broker
  • Compensation Logic
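The event flow can be sketched with an in-memory stand-in for the message broker; the topic name and event payload are illustrative:

```python
from collections import defaultdict

class InMemoryBroker:
    """Stand-in for a message broker such as RabbitMQ: topics map to subscriber callbacks."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, event):
        # deliver the event to every subscriber of the topic
        for callback in self.subscribers[topic]:
            callback(event)

broker = InMemoryBroker()

# the Order Service keeps a local replica of user emails, kept in sync via events
order_service_users = {}

def on_user_updated(event):
    order_service_users[event["id"]] = event["email"]

broker.subscribe("user.updated", on_user_updated)

# the User Service owns the data and emits an event whenever it changes
broker.publish("user.updated", {"id": 1, "email": "alice@example.com"})
print(order_service_users)  # {1: 'alice@example.com'}
```

With a real broker the delivery is asynchronous, so the replica is only eventually consistent, and compensation logic would handle failed or missed deliveries.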

Run Deployment Services

  • In the same directory as the docker-compose.yml file, run the following command to start all services: docker-compose up --build
  • This will build the images for the User Service and Order Service containers and start them along with RabbitMQ.
  • Access the RabbitMQ management UI at http://localhost:15672 with the credentials user / password.
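An illustrative docker-compose.yml for the setup above; the user/password credentials and the two service names come from the text, while the build contexts are assumptions:

```yaml
version: "3.8"
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "15672:15672"   # management UI
      - "5672:5672"     # AMQP
    environment:
      RABBITMQ_DEFAULT_USER: user
      RABBITMQ_DEFAULT_PASS: password
  user-service:
    build: ./user-service   # assumed directory containing the service's Dockerfile
    depends_on:
      - rabbitmq
  order-service:
    build: ./order-service  # assumed directory containing the service's Dockerfile
    depends_on:
      - rabbitmq
```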

Integrating GitHub and Docker Compose for CI/CD

The goal is to automate deployment, such that any changes pushed to the GitHub repository automatically trigger the building of Docker images, pushing them to a registry, and deploying updated containers to the cloud.

Components Covered:

  • GitHub Actions: To automate the CI/CD pipeline.
  • Docker Compose: To manage and orchestrate your services.
  • Docker Hub/Registry: To store built Docker images.
  • Cloud Deployment: Use a cloud service to deploy the updated services.

Integrating GitHub and Docker Compose for CI/CD

Assume Docker and Docker Compose are installed, and that the application is hosted in a cloud environment with SSH access.

  • Step 1: Prepare Docker and Docker Compose. Configure a docker-compose.yml file that defines the services.
  • Step 2: Create a Docker Hub Repository
  • Create a Docker Hub account.
  • Create a new repository on Docker Hub for each service (e.g., user-service, order-service).
  • Step 3: Create a GitHub Repository
  • Initialize a GitHub repository with code, including Dockerfiles and docker-compose.yml.
  • Push code to this repository.
  • Step 4: Set Up GitHub Actions for CI/CD
  • Create a .github/workflows/ci-cd.yml file in the GitHub repository.
  • Define the CI/CD pipeline using GitHub Actions.
  • Step 5: Store Secrets in GitHub. In the GitHub repository, go to Settings → Secrets and add the following secrets:
  1. DOCKER_USERNAME: Docker Hub username
  2. DOCKER_PASSWORD: Docker Hub password or an access token.
  3. SERVER_IP: The public IP address of the cloud server.
  4. SSH_USERNAME: The username to SSH into the server.
  5. SSH_PRIVATE_KEY: The private SSH key to access your server. Make sure the corresponding public key is added to the cloud server.
  • Step 6: Configure the Cloud Server
  • On the cloud server, install Docker following the official Docker installation guide.
  • Install Docker Compose by running the following commands:
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
  • Step 7: Automate Cloud Deployment. Once changes are pushed to the main branch, GitHub Actions will automatically:
  • Build Docker images and push them to Docker Hub.
  • Connect to your cloud server via SSH.
  • Pull the latest changes from GitHub to the server.
  • Use docker-compose to pull the updated Docker images and restart the services.
  • Step 8: Verify the Deployment. After the workflow completes, SSH into the cloud server and check the running Docker containers with docker ps. The User Service, Order Service, and RabbitMQ containers should now be running.
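A sketch of the .github/workflows/ci-cd.yml described in Step 4. The secret names match Step 5; the third-party SSH action and the directory layout on the server are assumptions, not part of the course material:

```yaml
name: ci-cd
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Build and push images
        run: |
          docker compose build
          docker compose push
      - name: Deploy over SSH   # appleboy/ssh-action is one common choice
        uses: appleboy/ssh-action@v1.0.0
        with:
          host: ${{ secrets.SERVER_IP }}
          username: ${{ secrets.SSH_USERNAME }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            cd ~/app            # assumed checkout location on the server
            git pull
            docker compose pull
            docker compose up -d
```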

Virtualization and Containers

  • Virtualization: Abstracting physical hardware into multiple virtual machines.
  • Containers: Isolated environments that run applications using the host’s OS kernel.
  • Key Difference: VMs virtualize the hardware, while containers virtualize the OS.

Virtual Machines (VMs)

  • A Virtual Machine is a software emulation of physical hardware.
  • It runs a full operating system and applications.
  • Uses a hypervisor to manage multiple VMs on a single physical server.
  • VMs are more resource-intensive due to the need for separate OS instances.

Hypervisors

  • Hypervisors manage virtual machines on a host system.
  • Types of Hypervisors:
  • Type 1 (Bare-metal): Runs directly on the physical hardware (e.g., Xen, VMware ESXi).
  • Type 2 (Hosted): Runs on a host OS as an application (e.g., VirtualBox, VMware Workstation).
  • Purpose: Efficiently allocate resources to VMs, provide isolation and security.
  • Xen is an open-source hypervisor used to run multiple virtual machines on a host.
  • Provides two modes of operation:
  • Paravirtualization (PV): Guests are aware of the hypervisor.
  • Full Virtualization (HVM): Hardware virtualization with no guest OS modification.
  • Commonly used in cloud platforms like AWS.

Docker and Containerization

  • Docker: A platform for developing, shipping, and running applications in containers.
  • Containers share the host OS kernel but run in isolated user spaces.
  • Lightweight: Uses fewer resources than VMs since they do not require a full OS per container.
  • Commonly used for microservices and CI/CD pipelines.
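For a Python service like the ones in this course, a container image might be described by a Dockerfile along these lines; the file names are assumptions:

```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```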

Terraform: Infrastructure as Code

  • Terraform: An open-source tool by HashiCorp for provisioning and managing infrastructure.
  • Uses declarative configuration files to define infrastructure as code.
  • Supports multiple cloud providers (AWS, Azure, GCP).
  • Facilitates reproducible, version-controlled infrastructure deployments.
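A minimal illustrative Terraform configuration in the declarative style described above; the provider, region, and AMI ID are placeholders:

```hcl
terraform {
  required_providers {
    aws = { source = "hashicorp/aws" }
  }
}

provider "aws" {
  region = "us-east-1"  # placeholder region
}

resource "aws_instance" "app_server" {
  ami           = "ami-0abcdef1234567890"  # placeholder AMI ID
  instance_type = "t3.micro"
  tags = {
    Name = "app-server"
  }
}
```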


Use Cases and Conclusion

  • VMs: Best for running multiple OSes on one server, isolation, and legacy applications.
  • Xen: Cloud-based solutions where flexibility between paravirtualization and full virtualization is needed.
  • Docker: Ideal for lightweight application deployments, microservices, and development environments.
  • Terraform: Used for infrastructure provisioning and managing cloud resources.
  • Kubernetes: Container orchestration for large-scale, distributed applications.