A comprehensive Django starter project featuring modern best practices, multiple integrations, and enterprise-ready architecture. Clone it, configure it, and start building your application immediately.
Can be cloned and used as a base project for any Django application.
- Deployed on AWS, then on a Hostinger VPS, and now on my own home Ubuntu Server 22.04 LTS
Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the
use of significant indentation. Python is dynamically typed and garbage-collected. It supports multiple programming
paradigms, including structured, object-oriented and functional programming.
Django is a Python-based free and open-source web framework that follows the model-template-view architectural pattern.
Django REST Framework (DRF) is a powerful and flexible toolkit for building Web APIs in Django. It provides serialization,
authentication, viewsets, routers, throttling, filtering, and pagination out of the box, making it easy to build
RESTful APIs that follow best practices.
drf-spectacular is an OpenAPI 3.0 schema generation library for Django REST Framework. It auto-generates Swagger UI
and ReDoc documentation from your API views and serializers, providing interactive API documentation for developers.
Redis is an in-memory data structure store that serves as a distributed key-value database with optional durability.
The most common Redis use cases are session cache, full-page cache, queues, leader boards and counting, publish-subscribe, and much more. In this project, Redis is used as a cache backend, Celery message broker, and Django Channels layer.
Celery is an asynchronous task queue/job queue based on distributed message passing. It is focused on real-time operation
but supports scheduling as well. In this project, Celery handles background task processing with real-time WebSocket
progress tracking via a custom progress recorder.
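The progress-recorder pattern can be sketched in plain Python. This is a simplified stand-in, not the project's actual class: the real recorder would push each update to the browser over a Channels WebSocket group rather than collecting it in a list.

```python
class ProgressRecorder:
    """Tracks completion of a long-running task as a percentage.

    In the real project each update would be published to a Channels
    group so the browser sees live progress; here we just record it.
    """

    def __init__(self, total):
        self.total = total
        self.updates = []  # list of (current, percent) pairs

    def set_progress(self, current):
        percent = round(current / self.total * 100, 1)
        self.updates.append((current, percent))
        return percent


def process_items(items, recorder):
    """Simulates a Celery task body that reports progress per item."""
    for i, _item in enumerate(items, start=1):
        # ... real work would happen here ...
        recorder.set_progress(i)
    return recorder.updates


rec = ProgressRecorder(total=4)
updates = process_items(["a", "b", "c", "d"], rec)
print(updates[-1])  # (4, 100.0)
```

In the actual task, `set_progress` would be called between work steps so the WebSocket consumer can relay the percentage to the UI.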
Flower is a real-time web-based monitoring tool for Celery. It provides detailed information about the state of workers,
tasks, and queues. It allows you to monitor task progress, view task details, and control worker pools from a web interface.
Django Channels extends Django to handle WebSockets, HTTP2, and other protocols beyond traditional HTTP. It builds on
ASGI (Asynchronous Server Gateway Interface) and enables real-time features like push notifications, live updates,
and chat systems. In this project, Channels powers real-time notification delivery and Celery task progress updates.
Daphne is an HTTP, HTTP2, and WebSocket protocol server for ASGI and ASGI-HTTP, developed as part of the Django Channels
project. It serves as the production-ready ASGI server for Django applications that need WebSocket support.
PostgreSQL is a powerful, open-source object-relational database system with a strong reputation for reliability,
feature robustness, and performance. It supports advanced data types, full-text search, and JSON storage.
Elasticsearch is a distributed, RESTful search and analytics engine built on Apache Lucene. It provides full-text
search, structured search, analytics, and logging capabilities. In this project, it powers the search app with
autocomplete suggestions, index management, cluster health monitoring, and search analytics.
Apache Kafka is an open-source distributed event streaming platform used for high-performance data pipelines,
streaming analytics, data integration, and mission-critical applications. In this project, Kafka handles real-time
event publishing and consumption with an event dashboard and analytics views.
RabbitMQ is an open-source message broker that implements the Advanced Message Queuing Protocol (AMQP). It supports
multiple messaging patterns including point-to-point, publish-subscribe, and request-reply. In this project,
RabbitMQ powers priority-based async notifications with a dashboard and message history.
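The ordering semantics of priority-based delivery can be illustrated in plain Python. RabbitMQ implements this broker-side (via the `x-max-priority` queue argument); this sketch only mirrors the behavior with `heapq` and is not the project's actual consumer code.

```python
import heapq

def deliver_in_priority_order(messages):
    """Return notification bodies highest-priority first.

    messages: list of (priority, body), where a larger priority number
    means more urgent (matching RabbitMQ's convention). heapq is a
    min-heap, so priorities are negated; the index keeps FIFO order
    for equal priorities.
    """
    heap = [(-prio, i, body) for i, (prio, body) in enumerate(messages)]
    heapq.heapify(heap)
    out = []
    while heap:
        _neg_prio, _i, body = heapq.heappop(heap)
        out.append(body)
    return out

queue = [(1, "weekly digest"), (9, "password changed"), (5, "new comment")]
print(deliver_in_priority_order(queue))
# ['password changed', 'new comment', 'weekly digest']
```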
MinIO is a high-performance, S3-compatible object storage system. It is designed for large-scale data infrastructure
and provides an API compatible with Amazon S3. In this project, MinIO serves as self-hosted object storage for
public, protected, and private file uploads.
Docker is a platform for developing, shipping, and running applications inside lightweight, portable containers.
Containers package an application with all its dependencies, ensuring it runs consistently across different environments.
Kubernetes is an open-source container orchestration platform that automates deploying, scaling, and managing
containerized applications. In this project, K8s manifests, ConfigMaps, Secrets, and service definitions are
provided for production deployment.
Jenkins is an open-source automation server that enables continuous integration and continuous delivery (CI/CD).
This project includes separate Jenkinsfiles for build and deploy pipelines.
Nginx is a high-performance HTTP server and reverse proxy. In this project, Nginx handles SSL termination,
HTTP-to-HTTPS redirection, WebSocket proxying, and reverse proxying to the application server.
Harbor is an open-source container image registry that provides role-based access control, image scanning,
and replication. It is used in this project to store Docker images built by the CI/CD pipeline.
Sentry is an application monitoring platform that provides real-time error tracking and performance monitoring.
It helps developers identify, triage, and resolve issues in production applications.
django-allauth is a comprehensive Django authentication library that handles account registration, login,
social authentication, email verification, and account management. In this project, it provides OAuth sign-in
via Google, GitHub, Facebook, Twitter/X (OAuth 2.0), and LinkedIn (OpenID Connect), with a custom adapter
for sending branded welcome emails and WebSocket notifications on social signup.
Mailjet is a cloud-based email delivery service for sending transactional and marketing emails. In this project,
it is used both as a direct REST API (via mailjet-rest) for custom email sends and as a Django email backend
(via django-anymail) for allauth-triggered emails.
Playwright is a framework for end-to-end testing of web applications. It supports Chromium, Firefox, and
WebKit browsers and provides APIs for automating browser interactions. In this project, Playwright is used
for UI integration tests alongside pytest.
Pytest is a mature full-featured Python testing tool. It supports fixtures, parameterization, markers, and
plugins. Combined with pytest-django, pytest-xdist (parallel execution), and coverage, it forms the testing
backbone of this project.
Available at: https://django-starter.arpansahu.space (account pages at /account/). Swagger UI is served at /api/docs/ and ReDoc at /api/redoc/.
Admin login details:
email: admin@arpansahu.space
password: showmecode
Installing Prerequisites
pip install -r requirements.txt
Create a .env file (and don't forget to add .env to .gitignore)
cp env.example .env
# Edit .env and add your values for all required variables
# Core
SECRET_KEY=your-secret-key
DEBUG=True
ALLOWED_HOSTS=*
DOMAIN=localhost:8016
PROTOCOL=http
# Database
DATABASE_URL=postgres://user:pass@localhost:5432/django_starter
# Redis (used for cache, Celery broker, and Channels layer)
REDIS_CLOUD_URL=redis://localhost:6379
# Email (Mailjet)
MAIL_JET_API_KEY=your-mailjet-api-key
MAIL_JET_API_SECRET=your-mailjet-api-secret
# Storage (S3/MinIO)
AWS_ACCESS_KEY_ID=your-key
AWS_SECRET_ACCESS_KEY=your-secret
AWS_STORAGE_BUCKET_NAME=your-bucket
BUCKET_TYPE=MINIO
USE_S3=True
# Social Authentication (OAuth) - all optional, only configure the providers you need
GOOGLE_CLIENT_ID=
GOOGLE_CLIENT_SECRET=
GITHUB_CLIENT_ID=
GITHUB_CLIENT_SECRET=
FACEBOOK_APP_ID=
FACEBOOK_APP_SECRET=
TWITTER_API_KEY=
TWITTER_API_SECRET=
LINKEDIN_CLIENT_ID=
LINKEDIN_CLIENT_SECRET=
# Elasticsearch (optional)
ELASTICSEARCH_HOST=https://localhost:9200
ELASTICSEARCH_USER=elastic
ELASTICSEARCH_PASSWORD=
# Kafka (optional)
KAFKA_BOOTSTRAP_SERVERS=
KAFKA_SECURITY_PROTOCOL=SASL_SSL
KAFKA_SASL_MECHANISM=PLAIN
KAFKA_SASL_USERNAME=
KAFKA_SASL_PASSWORD=
# RabbitMQ (optional)
RABBITMQ_HOST=localhost
RABBITMQ_PORT=5672
RABBITMQ_USER=guest
RABBITMQ_PASSWORD=guest
# Sentry (optional)
SENTRY_DSH_URL=
SENTRY_ENVIRONMENT=development
# Flower (Celery monitoring)
FLOWER_ADMIN_USERNAME=admin
FLOWER_ADMIN_PASS=admin
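These variables appear to be read in settings.py via a `config()` call (python-decouple style, as in the cache settings further below). The same read-with-default-and-cast pattern can be sketched with only the standard library; the `env()` helper here is hypothetical, not part of the project.

```python
import os

def env(name, default=None, cast=str):
    """Minimal stand-in for python-decouple's config():
    read an environment variable, apply a cast, fall back to a default."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    if cast is bool:  # handles 'True'/'False' strings, as in DEBUG=True
        return raw.strip().lower() in ("1", "true", "yes")
    return cast(raw)

# Simulate a couple of the variables from the .env file above.
os.environ["DEBUG"] = "True"
os.environ["RABBITMQ_PORT"] = "5672"

DEBUG = env("DEBUG", default=False, cast=bool)
RABBITMQ_PORT = env("RABBITMQ_PORT", default=5672, cast=int)
MISSING = env("SOME_UNSET_VAR", default="fallback")

print(DEBUG, RABBITMQ_PORT, MISSING)  # True 5672 fallback
```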
Making and Applying Migrations
python manage.py makemigrations
python manage.py migrate
Creating a Superuser
python manage.py createsuperuser
Configure OAuth callback URLs in each provider's developer console:
| Provider | Callback URL |
|---|---|
| Google | http://localhost:8016/accounts/google/login/callback/ |
| GitHub | http://localhost:8016/accounts/github/login/callback/ |
| Facebook | http://localhost:8016/accounts/facebook/login/callback/ |
| Twitter/X | http://localhost:8016/accounts/twitter/login/callback/ |
| LinkedIn | http://localhost:8016/accounts/linkedin_oauth2/login/callback/ |
For production, replace http://localhost:8016 with https://yourdomain.com.
See docs/OAUTH_SETUP.md for detailed setup instructions for each provider.
Installing Redis locally (Ubuntu). For other operating systems, please refer to https://redis.io/
curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/redis.list
sudo apt-get update
sudo apt-get install redis
sudo systemctl restart redis.service
To check whether it is running:
sudo systemctl status redis
Run Server (choose one)
# Development server (no WebSocket support)
python manage.py runserver 8016
# ASGI server with WebSocket support (recommended for real-time features)
daphne -b 127.0.0.1 -p 8016 django_starter.asgi:application
# Production WSGI server
gunicorn --bind 0.0.0.0:8016 django_starter.wsgi
Run Celery Worker (for background tasks)
celery -A django_starter worker -l info
Run Flower (Celery monitoring dashboard)
celery -A django_starter flower --port=8054
Verify All Services
python manage.py test_all_services
Use these CACHE settings
CACHES = {
'default': {
'BACKEND': 'django_redis.cache.RedisCache',
'LOCATION': config('REDIS_CLOUD_URL'),
'OPTIONS': {
'CLIENT_CLASS': 'django_redis.client.DefaultClient',
}
}
}
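With this backend configured, application code uses Django's standard cache API (`from django.core.cache import cache`). The cache-aside pattern it enables can be sketched backend-agnostically; `DictCache` below is a toy in-memory stand-in for Redis, not the project's code.

```python
def get_or_compute(cache, key, compute, timeout=None):
    """Cache-aside: return the cached value if present; otherwise
    compute it, store it, and return it. Mirrors Django's
    cache.get_or_set(), which the django-redis backend supports."""
    value = cache.get(key)
    if value is None:
        value = compute()
        cache.set(key, value, timeout)
    return value


class DictCache:
    """Tiny in-memory stand-in for the Redis cache backend."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def set(self, key, value, timeout=None):
        self._data[key] = value


calls = []
def expensive():
    calls.append(1)          # count how often the real work runs
    return 42

cache = DictCache()
first = get_or_compute(cache, "answer", expensive)
second = get_or_compute(cache, "answer", expensive)
print(first, second, len(calls))  # 42 42 1 -- second hit served from cache
```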
Change the static files and media files settings in settings.py. Support has also been added for Backblaze static storage, which is likewise based on the AWS S3 protocol.
if not DEBUG:
    BUCKET_TYPE = config('BUCKET_TYPE')

    if BUCKET_TYPE == 'AWS':
        AWS_S3_CUSTOM_DOMAIN = f'{AWS_STORAGE_BUCKET_NAME}.s3.amazonaws.com'
        AWS_DEFAULT_ACL = 'public-read'
        AWS_S3_OBJECT_PARAMETERS = {
            'CacheControl': 'max-age=86400'
        }
        AWS_LOCATION = 'static'
        AWS_QUERYSTRING_AUTH = False
        AWS_HEADERS = {
            'Access-Control-Allow-Origin': '*',
        }

        # s3 static settings
        AWS_STATIC_LOCATION = f'portfolio/{PROJECT_NAME}/static'
        STATIC_URL = f'https://{AWS_S3_CUSTOM_DOMAIN}/{AWS_STATIC_LOCATION}/'
        STATICFILES_STORAGE = f'{PROJECT_NAME}.storage_backends.StaticStorage'

        # s3 public media settings
        AWS_PUBLIC_MEDIA_LOCATION = f'portfolio/{PROJECT_NAME}/media'
        MEDIA_URL = f'https://{AWS_S3_CUSTOM_DOMAIN}/{AWS_PUBLIC_MEDIA_LOCATION}/'
        DEFAULT_FILE_STORAGE = f'{PROJECT_NAME}.storage_backends.PublicMediaStorage'

        # s3 private media settings
        PRIVATE_MEDIA_LOCATION = f'portfolio/{PROJECT_NAME}/private'
        PRIVATE_FILE_STORAGE = f'{PROJECT_NAME}.storage_backends.PrivateMediaStorage'

    elif BUCKET_TYPE == 'BLACKBLAZE':
        AWS_S3_REGION_NAME = 'us-east-005'
        AWS_S3_ENDPOINT = f's3.{AWS_S3_REGION_NAME}.backblazeb2.com'
        AWS_S3_ENDPOINT_URL = f'https://{AWS_S3_ENDPOINT}'
        AWS_DEFAULT_ACL = 'public-read'
        AWS_S3_OBJECT_PARAMETERS = {
            'CacheControl': 'max-age=86400',
        }
        AWS_LOCATION = 'static'
        AWS_QUERYSTRING_AUTH = False
        AWS_HEADERS = {
            'Access-Control-Allow-Origin': '*',
        }

        # s3 static settings
        AWS_STATIC_LOCATION = f'portfolio/{PROJECT_NAME}/static'
        STATIC_URL = f'https://{AWS_STORAGE_BUCKET_NAME}.{AWS_STATIC_LOCATION}/'
        STATICFILES_STORAGE = f'{PROJECT_NAME}.storage_backends.StaticStorage'

        # s3 public media settings
        AWS_PUBLIC_MEDIA_LOCATION = f'portfolio/{PROJECT_NAME}/media'
        MEDIA_URL = f'https://{AWS_STORAGE_BUCKET_NAME}.{AWS_PUBLIC_MEDIA_LOCATION}/'
        DEFAULT_FILE_STORAGE = f'{PROJECT_NAME}.storage_backends.PublicMediaStorage'

        # s3 private media settings
        PRIVATE_MEDIA_LOCATION = f'portfolio/{PROJECT_NAME}/private'
        PRIVATE_FILE_STORAGE = f'{PROJECT_NAME}.storage_backends.PrivateMediaStorage'

    elif BUCKET_TYPE == 'MINIO':
        AWS_S3_REGION_NAME = 'us-east-1'  # MinIO doesn't require this, but boto3 does
        AWS_S3_ENDPOINT_URL = 'https://minio.arpansahu.space'
        AWS_DEFAULT_ACL = 'public-read'
        AWS_S3_OBJECT_PARAMETERS = {
            'CacheControl': 'max-age=86400',
        }
        AWS_LOCATION = 'static'
        AWS_QUERYSTRING_AUTH = False
        AWS_HEADERS = {
            'Access-Control-Allow-Origin': '*',
        }

        # s3 static settings
        AWS_STATIC_LOCATION = f'portfolio/{PROJECT_NAME}/static'
        STATIC_URL = f'https://{AWS_STORAGE_BUCKET_NAME}/{AWS_STATIC_LOCATION}/'
        STATICFILES_STORAGE = f'{PROJECT_NAME}.storage_backends.StaticStorage'

        # s3 public media settings
        AWS_PUBLIC_MEDIA_LOCATION = f'portfolio/{PROJECT_NAME}/media'
        MEDIA_URL = f'https://{AWS_STORAGE_BUCKET_NAME}/{AWS_PUBLIC_MEDIA_LOCATION}/'
        DEFAULT_FILE_STORAGE = f'{PROJECT_NAME}.storage_backends.PublicMediaStorage'

        # s3 private media settings
        PRIVATE_MEDIA_LOCATION = f'portfolio/{PROJECT_NAME}/private'
        PRIVATE_FILE_STORAGE = f'{PROJECT_NAME}.storage_backends.PrivateMediaStorage'
else:
    # Static files (CSS, JavaScript, Images)
    # https://docs.djangoproject.com/en/3.2/howto/static-files/
    STATIC_URL = '/static/'
    STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
    MEDIA_URL = '/media/'
    MEDIA_ROOT = os.path.join(BASE_DIR, 'media')

STATICFILES_DIRS = [os.path.join(BASE_DIR, "static"), ]
Run the command below:
python manage.py collectstatic
and you are good to go
python manage.py test_all_services
python manage.py test_db
python manage.py test_cache
python manage.py test_email
python manage.py test_storage
python manage.py test_elasticsearch
python manage.py test_kafka
python manage.py test_rabbitmq
python manage.py test_celery
python manage.py test_flower
python manage.py test_harbor
python manage.py send_notifications
python manage.py health_check
python manage.py run_scheduled_tasks
python manage.py generate_report
python manage.py collect_metrics
python manage.py cleanup_old_data
python manage.py import_data
python manage.py export_data
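The check commands above would typically be thin Django `BaseCommand` wrappers around a loop like the sketch below. `run_checks` and the fake probes are illustrative, not the project's actual code; in a real command each probe would open a DB cursor, ping Redis, and so on.

```python
def run_checks(checks):
    """Run named health checks, collecting per-service pass/fail.

    checks: dict mapping service name -> zero-arg callable that raises
    on failure. Returns a dict of name -> (ok: bool, detail: str).
    A failing check is reported but does not abort the remaining ones.
    """
    results = {}
    for name, check in checks.items():
        try:
            check()
            results[name] = (True, "OK")
        except Exception as exc:
            results[name] = (False, str(exc))
    return results


def fake_db():  # stand-in for a real connectivity probe
    return True

def fake_kafka():  # stand-in for a probe against an unreachable broker
    raise ConnectionError("broker unreachable")

report = run_checks({"database": fake_db, "kafka": fake_kafka})
print(report["database"], report["kafka"])
# (True, 'OK') (False, 'broker unreachable')
```

Inside a management command, the loop body would live in `handle()` and write results with `self.stdout.write()` instead of `print`.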
Each repository contains an update_readme.sh script located in the readme_manager directory. This script is responsible for updating the README file in the repository by pulling in content from various sources.
The update_readme.sh script performs the following actions:
1. Fetches the requirements.txt, readme_updater.py, and baseREADME.md files from the common_readme repository.
2. Installs the dependencies listed in requirements.txt.
3. Runs the readme_updater.py script to update the README file using baseREADME.md and other specified sources.
To run the update_readme.sh script, navigate to the readme_manager directory and execute the script:
cd readme_manager && ./update_readme.sh
This will update the README.md file in the root of the repository with the latest content from the specified sources.
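A plausible sketch of the substitution step readme_updater.py performs: fetch each source, then replace placeholders in baseREADME.md with the fetched text. The `{{NAME}}` placeholder syntax here is an assumption, not confirmed from the script; adjust to whatever markers baseREADME.md actually uses.

```python
import re

def render_readme(template, include_files):
    """Replace {{PLACEHOLDER}} markers with the mapped content.

    include_files maps placeholder names to already-fetched text
    (the real script would first download the URLs / read the local
    paths from its include_files dictionary). Unknown placeholders
    are left untouched rather than erased.
    """
    def substitute(match):
        key = match.group(1)
        return include_files.get(key, match.group(0))
    return re.sub(r"\{\{([^}]+)\}\}", substitute, template)


base = "# Project\n{{INTRODUCTION}}\n{{UNKNOWN}}"
parts = {"INTRODUCTION": "A Django starter."}
print(render_readme(base, parts))
```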
If you need to make changes that are specific to the project or project-specific files, you might need to update the content of the partial README files. Here are the files that are included:
Project-Specific Files:
- env.example
- docker-compose.yml
- Dockerfile
- Jenkinsfile
Project-Specific Partial Files:
- INTRODUCTION: ../readme_manager/partials/introduction.md
- DOC_AND_STACK: ../readme_manager/partials/documentation_and_stack.md
- TECHNOLOGY QNA: ../readme_manager/partials/technology_qna.md
- DEMO: ../readme_manager/partials/demo.md
- INSTALLATION: ../readme_manager/partials/installation.md
- DJANGO_COMMANDS: ../readme_manager/partials/django_commands.md
- NGINX_SERVER: ../readme_manager/partials/nginx_server.md
These files are specific to the project and should be updated within the project repository.
There are a few files which are common to all projects. For convenience, these live in the common_readme repository so that if changes are made, they are updated in all the projects' README files.
# Define a dictionary with the placeholders and their corresponding GitHub raw URLs or local paths
include_files = {
# common files
"README of Docker Installation": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/01-docker/docker_installation.md",
"DOCKER_END": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/01-docker/docker_end.md",
"README of Nginx Setup": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/02-nginx/README.md",
"README of Jenkins Setup": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/12-jenkins/Jenkins.md",
"README of PostgreSql Server With Nginx Setup": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/03-postgres/README.md",
"README of PGAdmin4 Server With Nginx Setup": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/06-pgadmin/README.md",
"README of Portainer Server With Nginx Setup": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/05-portainer/README.md",
"README of Redis Server Setup": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/04-redis/README.md",
"README of Redis Commander Setup": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/07-redis_commander/README.md",
"README of Minio Server Setup": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/08-minio/README.md",
"README of RabbitMQ Server Setup": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/09-rabbitmq/README.md",
"README of Kafka Server Setup": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/10-kafka/Kafka.md",
"README of AKHQ UI Setup": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/10-kafka/AKHQ.md",
"README of Intro": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/Intro.md",
"INSTALLATION ORDER": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/INSTALLATION_ORDER.md",
"HOME SERVER SETUP": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/home_server/README.md",
"SSH KEY SETUP": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/home_server/steps/00-ssh-key-setup.md",
"HARDWARE PREPARATION": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/home_server/steps/01-hardware-preparation.md",
"UBUNTU INSTALLATION": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/home_server/steps/02-ubuntu-installation.md",
"INITIAL CONFIGURATION": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/home_server/steps/03-initial-configuration.md",
"NETWORK CONFIGURATION": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/home_server/steps/04-network-configuration.md",
"UPS CONFIGURATION": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/home_server/steps/05-ups-configuration.md",
"BACKUP INTERNET": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/home_server/steps/06-backup-internet.md",
"MONITORING SETUP": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/home_server/steps/07-monitoring-setup.md",
"AUTOMATED BACKUPS": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/home_server/steps/08-automated-backups.md",
"REMOTE ACCESS SETUP": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/home_server/steps/09-remote-access.md",
"CORE SERVICES INSTALLATION": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/home_server/steps/10-core-services.md",
"SSH WEB TERMINAL SETUP": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/ssh-web-terminal/README.md",
"ROUTER ADMIN AIRTEL SETUP": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/airtel/README.md",
"README of Readme Manager": "https://raw.githubusercontent.com/arpansahu/common_readme/main/Readme%20manager/readme_manager.md",
"AWS DEPLOYMENT INTRODUCTION": "https://raw.githubusercontent.com/arpansahu/common_readme/main/Introduction/aws_desployment_introduction.md",
"STATIC_FILES": "https://raw.githubusercontent.com/arpansahu/common_readme/main/Introduction/static_files_settings.md",
"SENTRY": "https://raw.githubusercontent.com/arpansahu/common_readme/main/Introduction/sentry.md",
"CHANNELS": "https://raw.githubusercontent.com/arpansahu/common_readme/main/Introduction/channels.md",
"CACHE": "https://raw.githubusercontent.com/arpansahu/common_readme/main/Introduction/cache.md",
"README of Harbor" : "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/11-harbor/harbor.md",
"INCLUDE FILES": "https://raw.githubusercontent.com/arpansahu/common_readme/main/include_files.py",
"MONITORING": "https://raw.githubusercontent.com/arpansahu/arpansahu-one-scripts/main/README.md",
# kubernetes k3s (current production setup: k3s + Portainer, Nginx-managed HTTPS)
"KUBE DEPLOYMENT": "https://raw.githubusercontent.com/arpansahu/common_readme/main/AWS%20Deployment/kubernetes_k3s/README.md",
# project files
"env.example": "../env.example",
"docker-compose.yml": "../docker-compose.yml",
"Dockerfile": "../Dockerfile",
"Jenkinsfile-deploy": "../Jenkinsfile-deploy",
"Jenkinsfile-build": "../Jenkinsfile-build",
"DEPLOYMENT YAML": "../deployment.yaml",
"SERVICE YAML": "../service.yaml",
# project partials files
"INTRODUCTION": "../readme_manager/partials/introduction.md",
"INTRODUCTION MAIN": "../readme_manager/partials/introduction_main.md",
"DOC_AND_STACK": "../readme_manager/partials/documentation_and_stack.md",
"TECHNOLOGY QNA": "../readme_manager/partials/technology_qna.md",
"DEMO": "../readme_manager/partials/demo.md",
"INSTALLATION": "../readme_manager/partials/installation.md",
"DJANGO_COMMANDS": "../readme_manager/partials/django_commands.md",
"NGINX_SERVER": "../readme_manager/partials/nginx_server.md",
"SERVICES": "../readme_manager/partials/services.md",
"JENKINS PROJECT NAME": "../readme_manager/partials/jenkins_project_name.md",
"JENKINS BUILD PROJECT NAME": "../readme_manager/partials/jenkins_build_project_name.md",
"STATIC PROJECT NAME": "../readme_manager/partials/static_project_name.md",
"PROJECT_NAME_DASH" : "../readme_manager/partials/project_name_with_dash.md",
"PROJECT_DOCKER_PORT": "../readme_manager/partials/project_docker_port.md",
"PROJECT_NODE_PORT": "../readme_manager/partials/project_node_port.md",
"DOMAIN_NAME": "../readme_manager/partials/project_domain_name.md"
}
Also, remember that if you want to include new files, you need to change the baseREADME file and the include_files dictionary in the common_readme repository itself.
This project and all related services have evolved through multiple deployment strategies, each with unique advantages. This documentation covers all three approaches to provide flexibility based on your needs.
Phase 1: Heroku (Legacy)
- Initial hosting on Heroku
- Simple deployment but expensive at scale
- Limited control over infrastructure
Phase 2: EC2 + Home Server Hybrid (2022-2023)
- EC2 for portfolio (arpansahu.space) with Nginx
- Home Server for all other projects
- Nginx on EC2 forwarded traffic to Home Server
- Cost-effective but faced reliability challenges
Phase 3: Single EC2 Server (Aug 2023)
- Consolidated all projects to single EC2 instance
- Started with t2.medium (~$40/month)
- Optimized to t2.small (~$15/month)
- Better reliability, higher costs
Phase 4: Hostinger VPS (Jan 2024)
- Migrated to Hostinger VPS for cost optimization
- Better pricing than EC2
- Good balance of cost and reliability
Phase 5: Home Server (Current - 2026)
- Back to Home Server with improved setup
- Leveraging lessons learned from previous attempts
- Modern infrastructure with Kubernetes, proper monitoring
- Significant cost savings with better reliability measures
This documentation supports all three deployment strategies:
AWS EC2
Advantages:
- High reliability (99.99% uptime SLA)
- Global infrastructure and CDN integration
- Scalable on demand
- Professional-grade monitoring and support
- No dependency on home internet/power
Disadvantages:
- Higher cost (~$15-40/month depending on instance)
- Ongoing monthly expenses
- Limited by instance size without additional cost
Best For:
- Production applications requiring maximum uptime
- Applications needing global reach
- When budget allows for convenience
- Business-critical services
Hostinger VPS
Advantages:
- Cost-effective (~$8-12/month)
- Good performance for price
- Managed infrastructure options
- Reliable uptime
- Easy scaling
Disadvantages:
- Still recurring monthly cost
- Less control than EC2
- Limited to Hostinger's infrastructure
Best For:
- Budget-conscious deployments
- Personal projects requiring good uptime
- When you want managed services at lower cost
- Small to medium applications
Home Server
Advantages:
- Zero recurring costs (only electricity)
- Full hardware control and unlimited resources
- Privacy and data sovereignty
- Learning opportunity for infrastructure management
- Can repurpose old laptops/desktops
- Ideal for development and testing
Disadvantages (and Mitigations):
- ISP downtime → Use UPS + mobile hotspot backup
- Power cuts → UPS with sufficient backup time
- Weather issues → Redundant internet connection
- Hardware failure → Regular backups, spare parts
- Remote troubleshooting → Proper monitoring, remote access tools
- Dynamic IP → Dynamic DNS services (afraid.org, No-IP)
Best For:
- Personal projects and portfolios
- Development and testing environments
- Learning DevOps and system administration
- When you have reliable power and internet
- Cost-sensitive deployments
Current Setup (February 2026):
Internet
│
└── arpansahu.space (Home Server with Static IP)
    │
    ├── Nginx 1.18.0 (systemd) - TLS Termination & Reverse Proxy
    │
    ├── System Services (systemd) - Core Infrastructure
    │   ├── PostgreSQL 14.20 - Primary database
    │   ├── Redis 6.0.16 - Cache and sessions
    │   ├── RabbitMQ - Message broker for Celery
    │   ├── Kafka 3.9.0 - Event streaming (KRaft mode)
    │   ├── MinIO - S3-compatible object storage
    │   ├── Jenkins 2.541.1 - CI/CD automation
    │   ├── ElasticSearch - Search and logging
    │   └── K3s - Kubernetes for app orchestration
    │
    ├── Docker Containers (Management UIs only)
    │   ├── Portainer - Docker & K3s management
    │   ├── PgAdmin - PostgreSQL admin interface
    │   ├── Redis Commander - Redis admin interface
    │   ├── Harbor - Private Docker registry
    │   ├── AKHQ - Kafka management UI
    │   ├── Kibana - ElasticSearch UI
    │   └── PMM - PostgreSQL monitoring
    │
    └── K3s Deployments (Django Applications)
        ├── arpansahu.space
        ├── borcelle.arpansahu.space
        ├── chew.arpansahu.space
        └── django-starter.arpansahu.space
Why Hybrid Architecture?
This evolved architecture uses the best deployment method for each service type:
- System services (systemd) for core infrastructure: lower resource usage
- Docker for management UIs: quick rollback if updates fail
- K3s for applications
Performance Impact:
- Core services run 10-15% faster vs Docker
- Better memory utilization (no container overhead for heavy services)
- Easier to tune PostgreSQL, Redis performance parameters
- Direct disk I/O for databases and object storage
Lessons learned from the 2022-2023 experience have been addressed:
Reliability Enhancements:
1. UPS with 2-4 hour backup capacity
2. Redundant internet (primary broadband + 4G backup)
3. Hardware RAID for data redundancy
4. Automated monitoring and alerting
5. Remote management tools (SSH, VPN)
6. Automated backup to cloud storage
Monitoring Stack:
- Uptime monitoring (UptimeRobot, Healthchecks.io)
- System monitoring (Prometheus + Grafana)
- Log aggregation (Loki)
- Alert notifications (Email, Telegram)
Infrastructure:
- Kubernetes (k3s) for orchestration
- Docker for containerization
- PM2 for process management
- Nginx for reverse proxy and HTTPS
- Automated deployments via Jenkins
| Feature | EC2 | Hostinger VPS | Home Server |
|---|---|---|---|
| Monthly Cost | $15-40 | $8-12 | ~$5 (electricity) |
| Uptime SLA | 99.99% | 99.9% | 95-98% (with improvements) |
| Setup Time | Medium | Easy | Complex |
| Scalability | Excellent | Good | Limited by hardware |
| Control | High | Medium | Full |
| Learning Value | Medium | Low | Very High |
| Remote Access | Built-in | Built-in | Requires setup |
| Backup | Easy | Easy | Manual setup needed |
| Privacy | Low | Medium | Complete |
For Production/Business:
- Use EC2 or Hostinger VPS
- Follow all documentation except home server specific sections
- Implement proper backup and disaster recovery
For Personal Projects:
- Home Server is ideal
- Follow complete documentation including home server setup
- Implement monitoring and backup strategies
For Learning:
- Home Server provides maximum learning opportunity
- Experiment with all services and configurations
- Break things and fix them safely
All deployment options use the same software stack:
Core Services:
- Docker Engine with docker-compose-plugin
- Nginx with wildcard SSL (acme.sh)
- Kubernetes (k3s) without Traefik
- Portainer for container management
Application Services:
- PostgreSQL 16 with SCRAM-SHA-256
- Redis for caching
- RabbitMQ for message queuing
- Kafka with KRaft mode for event streaming
- MinIO for object storage
- PgAdmin for database administration
- AKHQ for Kafka management
DevOps Tools:
- Jenkins for CI/CD
- Git for version control
- PM2 for process management
Monitoring (Home Server):
- System metrics and health checks
- Automated alerting
- Log aggregation
This repository provides step-by-step guides for:
For EC2/VPS Deployment:
1. Provision Ubuntu 22.04 server
2. Follow
Installation Order Guide
3. Install Docker and Docker Compose
4. Set up Nginx with HTTPS
5. Install required services in sequence
For Home Server Deployment:
1. Follow
Home Server Setup Guide
2. Install Ubuntu Server 22.04
3. Configure UPS and backup internet
4. Follow
Installation Order Guide
5. Set up monitoring and alerting
All projects are dockerized and run on predefined ports specified in Dockerfile and docker-compose.yml.
Historical Setup (2022-2023):
Single Server Setup (2023-2024):
Current Home Server Setup (2026):
- Updated architecture with Kubernetes
- Improved reliability and monitoring
- All services behind Nginx with HTTPS
- Dynamic DNS for domain management
As of January 2026, I'm running a home server setup with:
- Repurposed laptop as primary server
- Ubuntu 22.04 LTS Server
- 16GB RAM, 500GB SSD
- UPS backup power
- Dual internet connections (broadband + 4G)
- All services accessible via arpansahu.space
- Automated backups to cloud storage
Live projects: https://arpansahu.space/projects
Choose your deployment strategy and follow the relevant guides:
- EC2/VPS: Skip the home server setup and start with Docker installation
- Home Server: Start with the Home Server Setup Guide
All guides are production-tested and follow the same format for consistency.
Note: for complete setup guides, see:
- Home Server: Home Server Setup Guide
- Installation Order: Installation Order Guide
- SSH Web Terminal: SSH Web Terminal Setup
- Airtel Router Access: Airtel Router Admin Setup
Reference: https://docs.docker.com/engine/install/ubuntu/
Current Server Versions:
- Docker: 29.2.0 (February 2026)
- Docker Compose: v5.0.2 (plugin, not standalone)
sudo apt-get update
sudo apt-get install -y \
ca-certificates \
curl \
gnupg \
lsb-release
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# Important: avoid GPG permission issues
sudo chmod a+r /etc/apt/keyrings/docker.gpg
Why this matters: earlier READMEs often skipped `chmod a+r`, which now causes GPG errors on newer Ubuntu versions.
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" \
| sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
If you still see GPG errors:
sudo chmod a+r /etc/apt/keyrings/docker.gpg
sudo apt-get update
sudo apt-get install -y \
docker-ce \
docker-ce-cli \
containerd.io \
docker-compose-plugin
Change vs old README:
- `docker-compose-plugin` replaces the old `docker-compose` binary
- Use `docker compose` (space) instead of `docker-compose` (hyphen)
sudo systemctl start docker
sudo systemctl enable docker
sudo docker run hello-world
If you see "Hello from Docker!", Docker is installed correctly.
Verify versions:
docker --version
# Expected: Docker version 29.x or later
docker compose version
# Expected: Docker Compose version v5.x or later
Important: note that it is `docker compose` (with a space), NOT `docker-compose` (with a hyphen). The old `docker-compose` standalone binary is deprecated and not installed.
sudo usermod -aG docker $USER
newgrp docker
Verify:
docker ps
| Old Setup (Pre-2024) | Current Setup (2026) |
|---|---|
| `docker-compose` (hyphen) | `docker compose` (space) - plugin |
| Docker v24.x | Docker v29.2.0 |
| Compose v2.23.x | Compose v5.0.2 |
| No key permission fix | Explicit `chmod a+r docker.gpg` |
| Older install style | Keyring-based (required now) |
| Manual Compose install | Bundled via plugin |
Critical: all docker-compose.yml files work with `docker compose` (space). Simply replace:
# Old way (deprecated):
docker-compose up -d
# New way (current):
docker compose up -d
# Start services
docker compose up -d
# Stop services
docker compose down
# View logs
docker compose logs -f
# Restart services
docker compose restart
# Pull latest images
docker compose pull
# Check status
docker compose ps
This means `docker-compose-plugin` is not installed. Install it:
sudo apt-get install docker-compose-plugin
All old docker-compose YAML files are compatible with the `docker compose` plugin. No changes to the YAML files are needed; just change the command.
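If you maintain scripts that must run on both old hosts (standalone binary) and new hosts (plugin), a small detection helper avoids hardcoding either form. A minimal sketch (the `compose_cmd` function name is illustrative, not part of Docker):

```shell
# Pick whichever Compose invocation this host supports, preferring the
# plugin form; prints "none" when neither is available.
compose_cmd() {
  if docker compose version >/dev/null 2>&1; then
    echo "docker compose"
  elif command -v docker-compose >/dev/null 2>&1; then
    echo "docker-compose"
  else
    echo "none"
  fi
}
COMPOSE="$(compose_cmd)"
echo "Using: $COMPOSE"
# Then invoke as: $COMPOSE up -d
```

This lets one deploy script work unchanged across the pre-2024 and current setups.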
After Docker installation, you can install:
- Portainer - Docker management UI
- PostgreSQL - Database server
- Redis - Cache server
- MinIO - Object storage
- Harbor - Container registry

See INSTALLATION_ORDER.md for the recommended sequence.
Now, in your Git repository, create a file named Dockerfile (no extension) and add the following lines to it:
Harbor is an open-source container image registry that secures images with role-based access control, scans images for vulnerabilities, and signs images as trusted. It extends Docker Distribution by adding enterprise features like security, identity management, and image replication. This guide provides a complete, production-ready setup with Nginx reverse proxy.
Before installing Harbor, ensure you have:
Internet (HTTPS)
    ↓
Nginx (Port 443) - TLS Termination
    ↓
harbor.arpansahu.space
    ↓
Harbor Internal Nginx (localhost:8080)
    ↓
    ├── Harbor Core
    ├── Harbor Registry
    ├── Harbor Portal (Web UI)
    ├── Trivy (Vulnerability Scanner)
    ├── Notary (Image Signing)
    └── ChartMuseum (Helm Charts)
Key Principles:
- Harbor runs on localhost only
- System Nginx handles all external TLS
- Harbor has its own internal Nginx
- All data persisted in Docker volumes
- Automatic restart via systemd
Advantages:
- Role-based access control (RBAC)
- Vulnerability scanning with Trivy
- Image signing and trust (Notary)
- Helm chart repository
- Image replication
- Garbage collection
- Web UI for management
- Docker Hub proxy cache
Use Cases:
- Private Docker registry for organization
- Secure image storage
- Vulnerability assessment
- Compliance and auditing
- Multi-project isolation
- Image lifecycle management
cd /opt
sudo wget https://github.com/goharbor/harbor/releases/download/v2.11.0/harbor-offline-installer-v2.11.0.tgz
Check for latest version at: https://github.com/goharbor/harbor/releases
sudo tar -xzvf harbor-offline-installer-v2.11.0.tgz
cd harbor
ls -la
Expected files:
- harbor.yml.tmpl
- install.sh
- prepare
- common.sh
- harbor.*.tar.gz (images)
sudo cp harbor.yml.tmpl harbor.yml
sudo nano harbor.yml
Configure essential settings
Find and modify these lines:
# Hostname for Harbor
hostname: harbor.arpansahu.space
# HTTP settings (used for internal communication)
http:
port: 8080
# HTTPS settings (disabled - Nginx handles this)
# Comment out or remove the https section completely
# https:
# port: 443
# certificate: /path/to/cert
# private_key: /path/to/key
# Harbor admin password
harbor_admin_password: YourStrongPasswordHere
# Database settings (PostgreSQL)
database:
password: ChangeDatabasePassword
max_idle_conns: 100
max_open_conns: 900
# Data volume location
data_volume: /data
# Trivy (vulnerability scanner)
trivy:
ignore_unfixed: false
skip_update: false
offline_scan: false
insecure: false
# Job service
jobservice:
max_job_workers: 10
# Notification webhook job
notification:
webhook_job_max_retry: 3
# Log settings
log:
level: info
local:
rotate_count: 50
rotate_size: 200M
location: /var/log/harbor
Important changes:
- Set `hostname` to your domain
- Set `http.port` to 8080 (internal)
- Comment out entire `https` section
- Change `harbor_admin_password`
- Change `database.password`
- Keep `data_volume: /data` for persistence
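Since a still-active `https` section is an easy mistake to make here, a quick pre-install sanity check can catch it. This is a sketch (the `check_harbor_yml` function is illustrative; it is demonstrated on a temp file, so point it at /opt/harbor/harbor.yml on the server):

```shell
# Fail if harbor.yml still has an active https: section - system Nginx
# terminates TLS in this setup, so Harbor's https must be commented out.
check_harbor_yml() {
  if grep -qE '^https:' "$1"; then
    echo "ERROR: https section still enabled in $1"
    return 1
  fi
  echo "OK: TLS left to system Nginx"
}

# Demo on a temp file shaped like the edited harbor.yml:
TMP="$(mktemp)"
printf 'hostname: harbor.arpansahu.space\nhttp:\n  port: 8080\n# https:\n' > "$TMP"
check_harbor_yml "$TMP"
# → OK: TLS left to system Nginx
```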
Save and exit. In nano: `Ctrl + O`, `Enter`, `Ctrl + X`.
sudo ./install.sh --with-notary --with-trivy --with-chartmuseum
This will:
- Load Harbor Docker images
- Generate docker-compose.yml
- Create necessary directories
- Start all Harbor services
Installation takes 5-10 minutes depending on system.
sudo docker compose ps
Expected services (all should be "Up"):
- harbor-core
- harbor-db (PostgreSQL)
- harbor-jobservice
- harbor-log
- harbor-portal (Web UI)
- nginx (Harbor's internal)
- redis
- registry
- registryctl
- trivy-adapter
- notary-server
- notary-signer
- chartmuseum
sudo docker compose logs -f
Press `Ctrl + C` to exit logs.
sudo nano /etc/nginx/sites-available/services
# Harbor Registry - HTTP to HTTPS redirect
server {
listen 80;
listen [::]:80;
server_name harbor.arpansahu.space;
return 301 https://$host$request_uri;
}
# Harbor Registry - HTTPS
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name harbor.arpansahu.space;
ssl_certificate /etc/nginx/ssl/arpansahu.space/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/arpansahu.space/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
location / {
# Allow large image uploads (2GB recommended, 0 for unlimited)
# Note: Set to at least 2G for typical Docker images
client_max_body_size 2G;
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
# WebSocket support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# Timeouts for large image pushes
proxy_connect_timeout 300;
proxy_send_timeout 300;
proxy_read_timeout 300;
}
}
sudo nginx -t
sudo systemctl reload nginx
Harbor needs to start automatically after reboot. Docker Compose alone doesn't provide this.
sudo nano /etc/systemd/system/harbor.service
[Unit]
Description=Harbor Container Registry
After=docker.service
Requires=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/harbor
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down
Restart=on-failure
RestartSec=10
[Install]
WantedBy=multi-user.target
sudo systemctl daemon-reload
sudo systemctl enable harbor
sudo systemctl status harbor
Expected: Loaded and active
# Allow HTTP/HTTPS (if not already allowed)
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
# Block direct access to Harbor port
sudo ufw deny 8080/tcp
# Reload firewall
sudo ufw reload
Configure router port forwarding
Access router admin: https://airtel.arpansahu.space (or http://192.168.1.1:81)
Add port forwarding rules:
| Service | External Port | Internal IP | Internal Port | Protocol |
|---|---|---|---|---|
| Harbor HTTP | 80 | 192.168.1.200 | 80 | TCP |
| Harbor HTTPS | 443 | 192.168.1.200 | 443 | TCP |
Note: Do NOT forward port 8080 (Harbor internal port).
sudo docker compose ps
All should show "Up" status.
curl -I http://127.0.0.1:8080
Expected: HTTP 200 or 301
curl -I https://harbor.arpansahu.space
Expected: HTTP 200
Access Harbor Web UI

Go to: https://harbor.arpansahu.space

1. Log in with the admin credentials (username: admin)
2. Change the admin password
3. Create a project: library (default) or a custom name
4. Create a robot account for CI/CD (e.g. ci-bot)
docker login harbor.arpansahu.space
Enter:
- Username: `admin` (or your username)
- Password: (your Harbor password)
Expected: Login Succeeded
docker login harbor.arpansahu.space -u 'robot$ci-bot' -p YOUR_ROBOT_TOKEN

Note: the username is single-quoted so the shell does not expand `$ci` as a variable.
docker tag nginx:latest harbor.arpansahu.space/library/nginx:latest
Format: `harbor.domain.com/project/image:tag`
docker push harbor.arpansahu.space/library/nginx:latest
Verify in Harbor UI
docker pull harbor.arpansahu.space/library/nginx:latest
services:
web:
image: harbor.arpansahu.space/library/nginx:latest
Retention policies automatically delete old images to save space.
Navigate to project
Add retention rule
Click: Add Rule
Configure:
- Repositories: matching `**` (all repositories)
- By artifact count: retain the most recently pulled 3 artifacts
- Tags: matching `**` (all tags)
- Untagged artifacts: checked (delete untagged)
This keeps last 3 pulled images and deletes others.
Schedule retention policy
Click: Add Retention Rule → Schedule
Configure schedule:
- Type: Daily / Weekly / Monthly
- Time: 02:00 AM (off-peak)
- Cron: `0 2 * * *` (2 AM daily)
Click: Save
Test retention policy
Click: Dry Run
This shows what would be deleted without actually deleting.
Harbor uses Trivy to scan images for vulnerabilities.
Configure automatic scanning
Manual scan existing image
View scan results
Set CVE allowlist (optional)
sudo systemctl status harbor
sudo systemctl stop harbor
or
cd /opt/harbor
sudo docker compose down
sudo systemctl start harbor
or
cd /opt/harbor
sudo docker compose up -d
sudo systemctl restart harbor
cd /opt/harbor
sudo docker compose logs -f
sudo docker compose logs -f harbor-core
# Stop Harbor
sudo systemctl stop harbor
# Backup data directory
sudo tar -czf harbor-data-backup-$(date +%Y%m%d).tar.gz /data
# Backup configuration
sudo cp /opt/harbor/harbor.yml /backup/harbor-config-$(date +%Y%m%d).yml
# Backup database
sudo docker exec harbor-db pg_dumpall -U postgres > harbor-db-backup-$(date +%Y%m%d).sql
# Start Harbor
sudo systemctl start harbor
# Stop Harbor
sudo systemctl stop harbor
# Restore data directory
sudo tar -xzf harbor-data-backup-YYYYMMDD.tar.gz -C /
# Restore configuration
sudo cp /backup/harbor-config-YYYYMMDD.yml /opt/harbor/harbor.yml
# Restore database
sudo docker exec -i harbor-db psql -U postgres < harbor-db-backup-YYYYMMDD.sql
# Start Harbor
sudo systemctl start harbor
Harbor containers not starting
Cause: Port conflict or insufficient resources
Fix:
# Check if port 8080 is in use
sudo ss -tulnp | grep 8080
# Check Docker logs
cd /opt/harbor
sudo docker compose logs
# Check system resources
free -h
df -h
Cannot login to Harbor
Cause: Wrong credentials or database issue
Fix:
cd /opt/harbor
sudo docker compose exec harbor-core harbor-core password-reset
Image push fails
Cause: Storage full or permission issues
Fix:
# Check disk space
df -h /data
# Check Harbor logs
sudo docker compose logs -f registry
# Check data directory permissions
sudo ls -la /data
SSL certificate errors
Cause: Nginx certificate misconfigured
Fix:
# Verify certificate
openssl x509 -in /etc/nginx/ssl/arpansahu.space/fullchain.pem -noout -dates
# Check Nginx configuration
sudo nginx -t
# Reload Nginx
sudo systemctl reload nginx
Vulnerability scanning not working
Cause: Trivy adapter not running or internet connectivity
Fix:
# Check Trivy adapter
sudo docker compose ps trivy-adapter
# Check Trivy logs
sudo docker compose logs trivy-adapter
# Update Trivy database manually
sudo docker compose exec trivy-adapter /home/scanner/trivy --download-db-only
Use strong passwords
Enable HTTPS only
Implement RBAC
Enable vulnerability scanning
Configure image retention
Regular backups
# Automate with cron
sudo crontab -e
Add:
0 2 * * * /usr/local/bin/backup-harbor.sh
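The cron entry references a backup-harbor.sh script whose contents are not shown; its backup steps are the ones from the Backup section above. One piece worth sketching is the pruning logic that keeps disk usage bounded. A minimal sketch (demonstrated on temp files; the `KEEP` count and file names are illustrative assumptions):

```shell
# Keep only the newest KEEP archives; delete the rest. Date-stamped
# names sort lexicographically, so sort -r puts the newest first.
KEEP=7
BACKUP_DIR="$(mktemp -d)"

# Simulate nine daily archives (in backup-harbor.sh these would be the
# real harbor-data-YYYYMMDD.tar.gz files).
for i in 1 2 3 4 5 6 7 8 9; do
  touch "$BACKUP_DIR/harbor-data-2026010$i.tar.gz"
done

# Drop everything past the KEEP newest.
ls -1 "$BACKUP_DIR"/harbor-data-*.tar.gz | sort -r | tail -n +$((KEEP + 1)) | xargs -r rm -f

ls "$BACKUP_DIR" | wc -l
# → 7
```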
# Regular log review
sudo docker compose logs --since 24h | grep ERROR
Configure garbage collection
Optimize database
# Run vacuum on PostgreSQL
sudo docker compose exec harbor-db vacuumdb -U postgres -d registry
Configure resource limits
Edit docker-compose.yml (auto-generated):
services:
registry:
deploy:
resources:
limits:
memory: 2G
reservations:
memory: 512M
Enable Redis cache
Harbor uses Redis by default for caching.
Increase Redis memory if needed.
curl -k https://harbor.arpansahu.space/api/v2.0/health
sudo docker stats
du -sh /data/*
sudo journalctl -u harbor -f
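The health endpoint above returns JSON listing each component's status. A small wrapper can turn it into a cron-friendly check. This is a sketch: the sample payload is only illustrative of the `/api/v2.0/health` response shape, `python3` is assumed for JSON parsing, and the `check_health` function name is not part of Harbor:

```shell
# Exit nonzero if any Harbor component reports a non-healthy status.
# In production, pipe the real endpoint into the same filter:
#   curl -sk https://harbor.arpansahu.space/api/v2.0/health | check_health
check_health() {
  python3 -c '
import json, sys
data = json.load(sys.stdin)
bad = [c["name"] for c in data.get("components", []) if c.get("status") != "healthy"]
print("unhealthy: " + ", ".join(bad) if bad else "all healthy")
sys.exit(1 if bad else 0)'
}

# Demo with a sample payload:
echo '{"status":"healthy","components":[{"name":"core","status":"healthy"},{"name":"registry","status":"healthy"}]}' | check_health
# → all healthy
```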
Backup current installation
Follow backup procedure above.
Download new Harbor version
cd /opt
sudo wget https://github.com/goharbor/harbor/releases/download/vX.Y.Z/harbor-offline-installer-vX.Y.Z.tgz
sudo systemctl stop harbor
sudo mv harbor harbor-old
sudo tar -xzvf harbor-offline-installer-vX.Y.Z.tgz
sudo cp harbor-old/harbor.yml harbor/harbor.yml
cd /opt/harbor
sudo ./install.sh --with-notary --with-trivy --with-chartmuseum
sudo systemctl start harbor
Run these commands to verify Harbor is working:
# Check all containers
sudo docker compose ps
# Check systemd service
sudo systemctl status harbor
# Check local access
curl -I http://127.0.0.1:8080
# Check HTTPS access
curl -I https://harbor.arpansahu.space
# Check Nginx config
sudo nginx -t
# Check firewall
sudo ufw status | grep -E '(80|443)'
# Test Docker login
docker login harbor.arpansahu.space
Then test in browser:
- Access: https://harbor.arpansahu.space
- Login with admin credentials
- Create test project
- Push test image
- Scan image for vulnerabilities
- Verify retention policy configured
After following this guide, you will have:
| Component | Value |
|---|---|
| Harbor URL | https://harbor.arpansahu.space |
| Internal Port | 8080 (localhost only) |
| Admin User | admin |
| Default Project | library |
| Data Directory | /data |
| Config File | /opt/harbor/harbor.yml |
| Service File | /etc/systemd/system/harbor.service |
Internet (HTTPS)
    ↓
Nginx (TLS Termination)
    [Wildcard Certificate: *.arpansahu.space]
    ↓
harbor.arpansahu.space (Port 443 → 8080)
    ↓
Harbor Stack (Docker Compose)
    ├── Harbor Core (API + Logic)
    ├── Harbor Portal (Web UI)
    ├── Registry (Image Storage)
    ├── PostgreSQL (Metadata)
    ├── Redis (Cache)
    ├── Trivy (Vulnerability Scanner)
    ├── Notary (Image Signing)
    └── ChartMuseum (Helm Charts)
Symptom: Docker push fails with `413 Request Entity Too Large` when pushing large images.

Cause: the Nginx `client_max_body_size` limit is too small (the default is 1MB).
Solution:
sudo nano /etc/nginx/sites-available/services
location / {
client_max_body_size 2G; # Adjust as needed
proxy_pass http://127.0.0.1:8080;
# ... rest of config
}
sudo nginx -t
sudo systemctl reload nginx
Note: Harbor's internal nginx is already set to `client_max_body_size 0;` (unlimited) in its /etc/nginx/nginx.conf, so you only need to fix the external/system nginx configuration at /etc/nginx/sites-available/services.
Verify Harbor's internal nginx (optional):
docker exec nginx cat /etc/nginx/nginx.conf | grep client_max_body_size
# Should show: client_max_body_size 0;
Check these:
# 1. Is Harbor running?
sudo systemctl status harbor
docker ps | grep harbor
# 2. Is nginx running?
sudo systemctl status nginx
# 3. Check logs
sudo journalctl -u harbor -n 50
docker logs nginx
# Reset admin password
cd /opt/harbor
sudo docker compose stop
sudo ./prepare
sudo docker compose up -d
# Check disk usage
df -h /data
# Run garbage collection
docker exec harbor-core harbor-gc
# Or via UI: Administration → Garbage Collection → Run Now
Check nginx configuration for these settings:
proxy_buffering off;
proxy_request_buffering off;
proxy_connect_timeout 300;
proxy_send_timeout 300;
proxy_read_timeout 300;
After setting up Harbor:
My Harbor instance: https://harbor.arpansahu.space
For CI/CD integration, see Jenkins documentation.
FROM python:3.10.7
WORKDIR /app
# Copy only requirements.txt first to leverage Docker cache
COPY requirements.txt .
# Install dependencies
RUN pip3 install --no-cache-dir -r requirements.txt
# Install supervisord
RUN apt-get update && apt-get install -y supervisor
# Copy the rest of the application
COPY . .
# Copy supervisord configuration file
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# Expose necessary ports
EXPOSE 8016 8054
# Start supervisord to manage the processes
# Note: For Kubernetes deployments, migrations are handled by Jenkins (see Jenkinsfile-deploy)
# For Docker Compose deployments, migrations run here on container startup
CMD python manage.py migrate --noinput && \
echo "Running collectstatic..." && \
python manage.py collectstatic --noinput --verbosity 2 && \
echo "Collectstatic completed successfully" && \
supervisord -c /etc/supervisor/conf.d/supervisord.conf
Create a file named docker-compose.yml and add the following lines to it:
services:
web:
build: # This section will be used when running locally
context: .
dockerfile: Dockerfile
image: harbor.arpansahu.space/library/django_starter:latest
env_file: ./.env
container_name: django_starter
# volumes:
# - .:/app # Only for local development, commented out for production deployment
ports:
- "8016:8016"
- "8054:8054"
restart: unless-stopped
A Dockerfile is a simple text file containing the commands a user could call to assemble an image, whereas Docker Compose is a tool for defining and running multi-container Docker applications.
With Compose, you define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment, and bring up the whole app with a single command: docker compose up. Compose uses your Dockerfile if you add a build section to the project's docker-compose.yml. A typical workflow is to write a suitable Dockerfile for each image you need, then use Compose to build and run the images together.
Running Docker

docker compose up --build --detach

The --detach flag keeps the containers running in the background even after the terminal is closed; without it, the process stays attached to the terminal and you can watch the logs.
The --build flag forces the image to be rebuilt every time before the container starts.
Lightweight Kubernetes cluster using K3s with Portainer Agent for centralized management through your existing Portainer instance.
# 1. Copy files to server
scp -r kubernetes_k3s/ user@server:"AWS Deployment/"
# 2. SSH to server
ssh user@server
cd "AWS Deployment/kubernetes_k3s"
# 3. Create .env from example
cp .env.example .env
nano .env # Edit if needed
# 4. Install K3s
chmod +x install.sh
sudo ./install.sh
# 5. Deploy Portainer Agent
export KUBECONFIG=/home/$USER/.kube/config
kubectl apply -n portainer -f https://downloads.portainer.io/ce2-19/portainer-agent-k8s-nodeport.yaml
# 6. Get agent port
kubectl get svc -n portainer portainer-agent
# 7. Connect to Portainer
# Login to: https://portainer.arpansahu.space
# Go to: Environments → Add Environment → Agent
# Enter: <server-ip>:<nodeport>
.env.example:
K3S_VERSION=stable
K3S_CLUSTER_NAME=arpansahu-k3s
PORTAINER_AGENT_NAMESPACE=portainer
PORTAINER_AGENT_PORT=9001
PORTAINER_URL=https://portainer.arpansahu.space
K3S_DATA_DIR=/var/lib/rancher/k3s
K3S_DISABLE_TRAEFIK=true
The install.sh script first installs kubectl if not already present:
- Downloads the latest stable kubectl binary
- Installs it to /usr/local/bin/kubectl
- Skips this step if kubectl already exists

The install.sh script then:
1. Installs K3s (lightweight Kubernetes)
2. Waits for the cluster to be ready
3. Sets up kubeconfig for the non-root user (~/.kube/config)
4. Creates the portainer namespace
Deploy the agent manually after K3s installation:
# Set kubeconfig
export KUBECONFIG=/home/$USER/.kube/config
# Deploy agent
kubectl apply -n portainer -f https://downloads.portainer.io/ce2-19/portainer-agent-k8s-nodeport.yaml
# Verify deployment
kubectl get pods -n portainer
kubectl get svc -n portainer
# Get server IP
hostname -I | awk '{print $1}'
# Get NodePort
kubectl get svc -n portainer portainer-agent -o jsonpath='{.spec.ports[0].nodePort}'
# Example endpoint: 192.168.1.200:30778
In Portainer, set the environment name (e.g. K3s Cluster) and the agent address, e.g. 192.168.1.200:30778 (use your own IP and port).
# Check agent status
kubectl get pods -n portainer
# View agent logs
kubectl logs -n portainer -l app=portainer-agent
# Test connectivity
curl http://localhost:<nodeport>
# Create deployment
kubectl create deployment nginx --image=nginx:alpine
# Expose as service
kubectl expose deployment nginx --port=80 --type=NodePort
# Check resources
kubectl get all
kubectl get pods
kubectl get services
# Get service URL
kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'
# Access: http://<server-ip>:<nodeport>
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
spec:
replicas: 2
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-app
image: nginx:alpine
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: my-app
spec:
type: NodePort
selector:
app: my-app
ports:
- port: 80
targetPort: 80
nodePort: 30080
Apply:
kubectl apply -f deployment.yaml
# Cluster information
kubectl cluster-info
kubectl get nodes
# View resources
kubectl get all -A
kubectl get pods -A
kubectl get services -A
kubectl get namespaces
# Describe resources
kubectl describe pod <pod-name>
kubectl describe svc <service-name>
# Logs
kubectl logs <pod-name>
kubectl logs -f <pod-name> # Follow logs
kubectl logs <pod-name> --previous # Previous container logs
# Execute commands
kubectl exec -it <pod-name> -- /bin/sh
kubectl exec <pod-name> -- ls /app
# Port forwarding
kubectl port-forward pod/<pod-name> 8080:80
kubectl port-forward svc/<service-name> 8080:80
# Scale deployment
kubectl scale deployment <name> --replicas=3
# Update image
kubectl set image deployment/<name> container-name=new-image:tag
# Restart deployment
kubectl rollout restart deployment/<name>
# Rollout history
kubectl rollout history deployment/<name>
# Rollback
kubectl rollout undo deployment/<name>
# Delete resources
kubectl delete deployment <name>
kubectl delete service <name>
kubectl delete -f deployment.yaml
# List namespaces
kubectl get namespaces
# Create namespace
kubectl create namespace my-namespace
# Switch context to namespace
kubectl config set-context --current --namespace=my-namespace
# Delete namespace
kubectl delete namespace my-namespace
#!/bin/bash
# backup-k3s.sh
BACKUP_DIR="/backup/k3s/$(date +%Y%m%d_%H%M%S)"
mkdir -p "$BACKUP_DIR"
# Backup K3s data directory
sudo tar czf "$BACKUP_DIR/k3s-data.tar.gz" /var/lib/rancher/k3s
# Backup all Kubernetes resources
kubectl get all -A -o yaml > "$BACKUP_DIR/all-resources.yaml"
# Backup persistent volumes
kubectl get pv,pvc -A -o yaml > "$BACKUP_DIR/volumes.yaml"
# Backup namespaces and configs
kubectl get namespaces -o yaml > "$BACKUP_DIR/namespaces.yaml"
kubectl get configmaps -A -o yaml > "$BACKUP_DIR/configmaps.yaml"
kubectl get secrets -A -o yaml > "$BACKUP_DIR/secrets.yaml"
echo "Backup completed: $BACKUP_DIR"
#!/bin/bash
# restore-k3s.sh
BACKUP_DIR="/backup/k3s/20260201_100000"
# Stop K3s
sudo systemctl stop k3s
# Restore K3s data
sudo tar xzf "$BACKUP_DIR/k3s-data.tar.gz" -C /
# Start K3s
sudo systemctl start k3s
sleep 30
# Wait for cluster to be ready
until kubectl get nodes | grep -q "Ready"; do
echo "Waiting for cluster..."
sleep 5
done
# Restore resources
kubectl apply -f "$BACKUP_DIR/all-resources.yaml"
echo "Restore completed"
# Check K3s status
sudo systemctl status k3s
# View K3s logs
sudo journalctl -u k3s -n 100 --no-pager
sudo journalctl -u k3s -f # Follow logs
# Restart K3s
sudo systemctl restart k3s
# Check K3s version
k3s --version
# Check ports
sudo netstat -tlnp | grep -E '6443|10250'
# Check agent pod status
kubectl get pods -n portainer
# View agent logs
kubectl logs -n portainer -l app=portainer-agent
kubectl logs -n portainer -l app=portainer-agent -f # Follow
# Check agent service
kubectl get svc -n portainer
# Describe agent pod
kubectl describe pod -n portainer -l app=portainer-agent
# Test agent port
kubectl get svc -n portainer portainer-agent -o jsonpath='{.spec.ports[0].nodePort}'
curl http://localhost:<nodeport>
# Restart agent
kubectl rollout restart deployment -n portainer portainer-agent
# Check pod status
kubectl get pods -n <namespace>
# Describe pod (shows events)
kubectl describe pod <pod-name> -n <namespace>
# View pod logs
kubectl logs <pod-name> -n <namespace>
# Check events
kubectl get events -A --sort-by='.lastTimestamp'
# Check node resources
kubectl top nodes
kubectl describe nodes
# Check CoreDNS pods
kubectl get pods -n kube-system -l k8s-app=kube-dns
# Test DNS resolution
kubectl run -it --rm debug --image=busybox --restart=Never -- nslookup kubernetes.default
# Check network pods
kubectl get pods -n kube-system
# Restart CoreDNS
kubectl rollout restart deployment -n kube-system coredns
# Check persistent volumes
kubectl get pv
kubectl get pvc -A
# Describe PVC
kubectl describe pvc <pvc-name> -n <namespace>
# Check disk space
df -h
du -sh /var/lib/rancher/k3s/*
# From Portainer server, test connection
telnet <k3s-server-ip> <nodeport>
curl http://<k3s-server-ip>:<nodeport>
# Check firewall
sudo ufw status
sudo ufw allow <nodeport>/tcp
# Check if agent is listening
sudo netstat -tlnp | grep <nodeport>
# Check resource usage
kubectl top nodes
kubectl top pods -A
# Check system resources
free -h
df -h
vmstat 5
# Check K3s resource limits
sudo cat /etc/systemd/system/k3s.service
# Complete uninstall
sudo /usr/local/bin/k3s-uninstall.sh
# Verify removal
which k3s
which kubectl
ls /var/lib/rancher/k3s
Ensure ~/.kube/config has proper permissions (600).
For issues:
1. Check the Troubleshooting section
2. View K3s logs: sudo journalctl -u k3s -f
3. View agent logs: kubectl logs -n portainer -l app=portainer-agent
4. K3s GitHub Issues
5. Portainer Community Forums
K3s requires SSL certificates for TLS Ingress and secure pod communication (Java apps, Kafka, etc.). Certificates are automatically managed - see the SSL Automation Documentation.

Automated Certificate Management:
- K3s secrets updated after SSL renewal (arpansahu-tls, kafka-ssl-keystore)
- Keystores stored in /var/lib/rancher/k3s/ssl/keystores/
- Uploaded to MinIO for Django projects
- See the Django Integration Guide
Manual testing:
# On server
cd ~/k3s_scripts
./1_renew_k3s_ssl_keystores.sh # Update K3s secrets
./2_upload_keystores_to_minio.sh # Upload to MinIO
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app-ingress
spec:
tls:
- hosts:
- app.arpansahu.space
secretName: arpansahu-tls # Auto-managed secret
rules:
- host: app.arpansahu.space
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: app-service
port:
number: 80
apiVersion: apps/v1
kind: Deployment
metadata:
name: kafka
spec:
template:
spec:
containers:
- name: kafka
image: confluentinc/cp-kafka:7.8.0
env:
- name: KAFKA_SSL_KEYSTORE_LOCATION
value: /etc/kafka/secrets/kafka.keystore.jks
- name: KAFKA_SSL_KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
name: kafka-ssl-keystore # Auto-managed secret
key: keystore-password
volumeMounts:
- name: kafka-ssl
mountPath: /etc/kafka/secrets
readOnly: true
volumes:
- name: kafka-ssl
secret:
secretName: kafka-ssl-keystore
# List secrets
kubectl get secrets
# Check certificate expiry
kubectl get secret arpansahu-tls -o jsonpath='{.data.tls\.crt}' | \
base64 -d | openssl x509 -noout -dates
# View keystore secret
kubectl describe secret kafka-ssl-keystore
Pods not using new certificates:
- Restart the deployment: kubectl rollout restart deployment/your-app
- Kubernetes does not auto-reload secrets; a manual restart is required

Certificate verification failed:
- Check the secret exists: kubectl get secret arpansahu-tls
- Verify the expiry date (see Monitoring above)
- Force renewal: see SSL Automation
For complete automation details, troubleshooting, and manual procedures: SSL Automation Documentation
Nginx is a high-performance web server and reverse proxy used to route HTTPS traffic to all services.
/etc/nginx/sites-available/
/etc/nginx/sites-enabled/
/etc/nginx/ssl/arpansahu.space/
/var/log/nginx/
cd "AWS Deployment/nginx"
chmod +x install.sh
./install.sh
```bash file=install.sh
```

### SSL Certificate Installation

```bash file=install-ssl.sh
```
Prerequisites for SSL:
1. Namecheap account with API access enabled
2. Server IP whitelisted in Namecheap API settings
3. Environment variables set:
export NAMECHEAP_USERNAME="your_username"
export NAMECHEAP_API_KEY="your_api_key"
export NAMECHEAP_SOURCEIP="your_server_ip"
./install-ssl.sh
sudo apt update
sudo apt install -y nginx
sudo systemctl start nginx
sudo systemctl enable nginx
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw reload
Add A records to your DNS provider:
Type: A Record
Name: @
Value: YOUR_SERVER_IP
Type: A Record
Name: *
Value: YOUR_SERVER_IP
This allows all subdomains (*.arpansahu.space) to point to your server.
sudo nano /etc/nginx/sites-available/services
Add server blocks for each service (see individual service nginx configs).
sudo ln -sf /etc/nginx/sites-available/services /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
curl https://get.acme.sh | sh
source ~/.bashrc
acme.sh --set-default-ca --server letsencrypt
export NAMECHEAP_USERNAME="your_username"
export NAMECHEAP_API_KEY="your_api_key"
export NAMECHEAP_SOURCEIP="your_server_ip"
acme.sh --issue \
--dns dns_namecheap \
-d arpansahu.space \
-d "*.arpansahu.space" \
--server letsencrypt
sudo mkdir -p /etc/nginx/ssl/arpansahu.space
acme.sh --install-cert \
-d arpansahu.space \
-d "*.arpansahu.space" \
--key-file /etc/nginx/ssl/arpansahu.space/privkey.pem \
--fullchain-file /etc/nginx/ssl/arpansahu.space/fullchain.pem \
--reloadcmd "systemctl reload nginx"
crontab -e
Add:
0 0 * * * ~/.acme.sh/acme.sh --cron --home ~/.acme.sh > /dev/null
Each service has its own nginx config with this pattern:
# HTTP to HTTPS redirect
server {
listen 80;
listen [::]:80;
server_name service.arpansahu.space;
return 301 https://$host$request_uri;
}
# HTTPS server block
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name service.arpansahu.space;
# SSL Configuration
ssl_certificate /etc/nginx/ssl/arpansahu.space/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/arpansahu.space/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
# Proxy to backend service
location / {
proxy_pass http://127.0.0.1:PORT;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
}
}
| Service | Domain | Backend Port |
|---|---|---|
| Harbor | harbor.arpansahu.space | 8080 |
| RabbitMQ | rabbitmq.arpansahu.space | 15672 |
| PgAdmin | pgadmin.arpansahu.space | 5050 |
| SSH Terminal | ssh.arpansahu.space | 8084 |
| Jenkins | jenkins.arpansahu.space | 8080 |
| Portainer | portainer.arpansahu.space | 9443 |
| Redis (stream) | redis.arpansahu.space | 6380 (TCP) |
Test configuration:
sudo nginx -t
Reload (no downtime):
sudo systemctl reload nginx
Restart:
sudo systemctl restart nginx
View status:
sudo systemctl status nginx
View logs:
# Access logs
sudo tail -f /var/log/nginx/access.log
# Error logs
sudo tail -f /var/log/nginx/error.log
# Service-specific
sudo tail -f /var/log/nginx/services.access.log
Check active connections:
sudo ss -tuln | grep -E ':80|:443'
List enabled sites:
ls -la /etc/nginx/sites-enabled/
Redis requires TCP stream instead of HTTP proxy:
stream {
upstream redis_backend {
server 127.0.0.1:6380;
}
server {
listen 6379 ssl;
proxy_pass redis_backend;
proxy_connect_timeout 1s;
ssl_certificate /etc/nginx/ssl/arpansahu.space/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/arpansahu.space/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
}
}
This goes in `/etc/nginx/nginx.conf` at the root level (outside the `http` block).
502 Bad Gateway:
# Check backend service is running
sudo ss -tuln | grep PORT
# Check nginx can connect
curl http://127.0.0.1:PORT
# Check logs
sudo tail -f /var/log/nginx/error.log
Certificate errors:
# Check certificate files exist
ls -la /etc/nginx/ssl/arpansahu.space/
# Check certificate validity
openssl x509 -in /etc/nginx/ssl/arpansahu.space/fullchain.pem -text -noout
# Check acme.sh status
acme.sh --list
Configuration not loading:
# Test syntax
sudo nginx -t
# Check enabled sites
ls -la /etc/nginx/sites-enabled/
# Reload nginx
sudo systemctl reload nginx
Port already in use:
# Find what's using port 80/443
sudo ss -tuln | grep -E ':80|:443'
sudo lsof -i :80
server_tokens off;
listen 443 ssl http2;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
limit_req zone=general burst=20 nodelay;
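The two directives above mean: sustain 10 requests/second per client IP, tolerate short spikes of up to 20 extra requests, and (with `nodelay`) serve accepted burst requests immediately rather than queuing them. The behavior can be sketched as a leaky bucket; this is an approximation for intuition, not nginx's exact implementation:

```python
RATE = 10.0   # zone rate: requests per second
BURST = 20    # extra requests tolerated in a spike

def simulate(timestamps):
    """Return accept/reject decisions for requests at the given times (seconds)."""
    excess, last = 0.0, None
    decisions = []
    for t in timestamps:
        if last is not None:
            # the bucket drains continuously at RATE
            excess = max(0.0, excess - (t - last) * RATE)
        if excess > BURST:            # bucket full: request rejected (503)
            decisions.append(False)
        else:                         # accepted; nodelay serves it immediately
            excess += 1
            decisions.append(True)
        last = t
    return decisions

# 30 simultaneous requests: 21 pass (1 + burst of 20), 9 are rejected.
print(simulate([0.0] * 30).count(True))  # -> 21
```

Requests arriving at exactly the zone rate (one every 100 ms) never accumulate excess, so they are all accepted.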
acme.sh automatically renews certificates via cron. To manually renew:
acme.sh --renew -d arpansahu.space -d "*.arpansahu.space" --force
Check renewal log:
cat ~/.acme.sh/arpansahu.space/arpansahu.space.log
Complete SSL automation is now centralized. See SSL Automation Documentation for:
Quick verification:
# Check certificate expiry
openssl x509 -in /etc/nginx/ssl/arpansahu.space/fullchain.pem -noout -dates
# Test automation
ssh arpansahu@arpansahu.space '~/deploy_certs.sh'
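The `notAfter=` line printed by the `openssl x509 -noout -dates` check above can be parsed to alert before expiry. A small sketch (the date below is a made-up example, not a real certificate's):

```python
from datetime import datetime, timezone

def days_left(not_after_line, now):
    # openssl prints e.g. "notAfter=Mar 31 23:59:59 2026 GMT"
    value = not_after_line.split("=", 1)[1].strip()
    expiry = datetime.strptime(value, "%b %d %H:%M:%S %Y %Z").replace(tzinfo=timezone.utc)
    return (expiry - now).days

now = datetime(2026, 1, 1, tzinfo=timezone.utc)
print(days_left("notAfter=Mar 31 23:59:59 2026 GMT", now))  # -> 89
```

Wiring this into a cron job that emails when fewer than 14 days remain gives an early warning if acme.sh renewal ever silently fails.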
# Backup nginx configs
sudo tar -czf nginx-backup-$(date +%Y%m%d).tar.gz \
/etc/nginx/sites-available/ \
/etc/nginx/sites-enabled/ \
/etc/nginx/nginx.conf \
/etc/nginx/ssl/
# Backup SSL certificates
tar -czf ssl-backup-$(date +%Y%m%d).tar.gz ~/.acme.sh/
Internet (Client)
        │
        ▼
[ Nginx - Port 443 (SSL/TLS Termination) ]
        │
        ├──▶ Harbor (8888)
        ├──▶ RabbitMQ (15672)
        ├──▶ PgAdmin (5050)
        ├──▶ SSH Terminal (8084)
        ├──▶ Jenkins (8080)
        └──▶ Portainer (9443)
Key Points:
- Nginx handles all SSL/TLS
- Backend services run on localhost (secure)
- Single wildcard certificate covers all subdomains
- Automatic certificate renewal
- Zero downtime reloads
install.sh
install-ssl.sh
/etc/nginx/nginx.conf
/etc/nginx/sites-available/
/etc/nginx/ssl/arpansahu.space/
# /etc/nginx/nginx.conf
worker_processes auto;
worker_connections 1024;
# Enable gzip
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_types text/plain text/css application/json application/javascript;
# Buffer sizes
client_body_buffer_size 128k;
client_max_body_size 500M;
# Active connections
sudo ss -s
# Request rate
sudo tail -f /var/log/nginx/access.log | pv -l -i1 -r > /dev/null
# Error rate
sudo grep error /var/log/nginx/error.log | tail -20
After all these steps, your Nginx configuration file at /etc/nginx/sites-available/django_starter will look similar to this:
# ================= SERVICE PROXY TEMPLATE =================
# HTTP → HTTPS redirect
server {
listen 80;
listen [::]:80;
server_name django-starter.arpansahu.space;
return 301 https://$host$request_uri;
}
# HTTPS reverse proxy
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name django-starter.arpansahu.space;
# Wildcard SSL (acme.sh + Namecheap DNS-01)
ssl_certificate /etc/nginx/ssl/arpansahu.space/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/arpansahu.space/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
location / {
proxy_pass http://127.0.0.1:8016;
proxy_http_version 1.1;
# Required headers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
# WebSocket support (safe for all services)
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
Jenkins is an open-source automation server that enables developers to build, test, and deploy applications through continuous integration and continuous delivery (CI/CD). This guide provides a complete, production-ready setup with Java 21, Jenkins LTS, Nginx reverse proxy, and comprehensive credential management.
Before installing Jenkins, ensure you have:
Internet (HTTPS)
  │
  └─ Nginx (Port 443) - TLS Termination
       │
       └─ jenkins.arpansahu.space
            │
            └─ Jenkins (localhost:8080)
                 │
                 ├─ Jenkins Controller (Web UI + API)
                 ├─ Build Agents (local/remote)
                 ├─ Workspace (/var/lib/jenkins)
                 └─ Credentials Store
Key Principles:
- Jenkins runs on localhost only (port 8080)
- Nginx handles all TLS termination
- Credentials stored in Jenkins encrypted store
- Pipelines defined as code (Jenkinsfile)
- Docker-based builds for isolation
Advantages:
- Open-source and free
- Extensive plugin ecosystem (1800+)
- Pipeline as Code (Jenkinsfile)
- Distributed builds
- Docker integration
- GitHub/GitLab integration
- Email notifications
- Role-based access control
Use Cases:
- Automated builds on commit
- Automated testing
- Docker image building
- Deployment automation
- Scheduled jobs
- Integration with Harbor registry
- Multi-branch pipelines
Jenkins requires Java to run. We'll install OpenJDK 21 (latest LTS).
⚠️ Important: Java 17 support ends March 31, 2026. Use Java 21 for continued support.
java -version
If you see Java 17 or older, follow the upgrade steps below.
If Jenkins is already installed on Java 17:
sudo apt update
sudo apt install -y openjdk-21-jdk
sudo systemctl status jenkins
sudo systemctl stop jenkins
sudo update-alternatives --config java
Select Java 21 from the list (e.g., `/usr/lib/jvm/java-21-openjdk-amd64/bin/java`)
java -version
Should show: `openjdk version "21.0.x"`
sudo nano /etc/default/jenkins
Add or update:
JAVA_HOME="/usr/lib/jvm/java-21-openjdk-amd64"
JENKINS_JAVA_CMD="$JAVA_HOME/bin/java"
sudo systemctl start jenkins
sudo systemctl status jenkins
Verify in Jenkins UI:
Dashboard → Manage Jenkins → System Information → look for `java.version` (should be 21.x)
For new installations:
sudo apt update
sudo apt install -y openjdk-21-jdk
java -version
Expected output:
openjdk version "21.0.x" 2024-xx-xx
OpenJDK Runtime Environment (build 21.0.x+x)
OpenJDK 64-Bit Server VM (build 21.0.x+x, mixed mode, sharing)
sudo nano /etc/environment
Add:
JAVA_HOME="/usr/lib/jvm/java-21-openjdk-amd64"
Apply changes:
source /etc/environment
echo $JAVA_HOME
Jenkins Long-Term Support (LTS) releases are recommended for production environments. Current LTS: 2.528.3
# Modern keyring format (recommended)
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo gpg --dearmor -o /usr/share/keyrings/jenkins-archive-keyring.gpg
# Also add legacy key for repository compatibility
gpg --keyserver keyserver.ubuntu.com --recv-keys 7198F4B714ABFC68
gpg --export 7198F4B714ABFC68 > /tmp/jenkins-key.gpg
sudo gpg --dearmor < /tmp/jenkins-key.gpg > /usr/share/keyrings/jenkins-old-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/jenkins-old-keyring.gpg] https://pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt update
# Install latest LTS version
sudo apt install -y jenkins
# Or install specific LTS version
# sudo apt install -y jenkins=2.528.3
jenkins --version
Expected: `2.528.3` or newer LTS
sudo systemctl enable jenkins
sudo systemctl start jenkins
sudo systemctl status jenkins
Expected: Active (running)
sudo ss -tulnp | grep 8080
Expected: Jenkins listening on 127.0.0.1:8080
To upgrade an existing Jenkins installation:
jenkins --version
# Or via API:
curl -s -I https://jenkins.arpansahu.space/api/json | grep X-Jenkins
apt-cache policy jenkins | head -30
Note: LTS releases use three-part versions (e.g. 2.528.3); weekly releases use two-part versions (e.g. 2.540). Pick an LTS version for production.
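That version-format rule is easy to automate when scripting upgrades. A sketch (illustrative helper, not part of any official Jenkins tooling):

```python
# Jenkins LTS releases are three-part (e.g. 2.528.3); weekly releases are
# two-part (e.g. 2.540). Classify a version string before auto-upgrading.
def is_lts(version):
    parts = version.split(".")
    return len(parts) == 3 and all(p.isdigit() for p in parts)

print(is_lts("2.528.3"))  # -> True
print(is_lts("2.540"))    # -> False
```

A pinned-upgrade script could refuse to install any candidate where `is_lts()` is false.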
sudo tar -czf /tmp/jenkins-backup-$(date +%Y%m%d-%H%M%S).tar.gz /var/lib/jenkins/
sudo systemctl stop jenkins
sudo apt update
sudo apt install --only-upgrade jenkins -y
# Or install specific LTS version:
# sudo apt install jenkins=2.528.3 -y
sudo systemctl start jenkins
jenkins --version
sudo systemctl status jenkins
Check Jenkins UI
https://jenkins.arpansahu.space → Manage Jenkins → About Jenkins
sudo nano /etc/nginx/sites-available/services
# Jenkins CI/CD - HTTP → HTTPS
server {
listen 80;
listen [::]:80;
server_name jenkins.arpansahu.space;
return 301 https://$host$request_uri;
}
# Jenkins CI/CD - HTTPS
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name jenkins.arpansahu.space;
ssl_certificate /etc/nginx/ssl/arpansahu.space/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/arpansahu.space/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
# Jenkins-specific timeouts
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_send_timeout 300;
location / {
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
# Required for Jenkins CLI and agent connections
proxy_http_version 1.1;
proxy_request_buffering off;
}
}
sudo nginx -t
sudo systemctl reload nginx
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
Copy this password (example: a1b2c3d4e5f6...)
Access Jenkins Web UI
Go to: https://jenkins.arpansahu.space
Enter initial admin password
Paste the password from step 1.
Install suggested plugins
Create admin user
Configure:
- Username: admin
- Password: (your strong password)
- Full name: Admin User
- Email: your-email@example.com
Click: Save and Continue
Configure Jenkins URL
Jenkins URL: `https://jenkins.arpansahu.space`
Click: Save and Finish
Start using Jenkins
Click: Start using Jenkins
Jenkins stores credentials securely for use in pipelines. We'll configure the essential credentials used by the pipelines below.
Navigate to credentials
Dashboard → Manage Jenkins → Credentials → System → Global credentials → Add Credentials
Configure GitHub credentials:
- Username: arpansahu (your GitHub username)
- Password: ghp_xxxxxxxxxxxx (GitHub Personal Access Token)
- ID: github-auth
- Description: Github Auth

Click: Create
Note: Generate GitHub PAT at https://github.com/settings/tokens with scopes: repo, admin:repo_hook
Add Harbor credentials
Dashboard → Manage Jenkins → Credentials → System → Global credentials → Add Credentials
Configure Harbor credentials:
- Username: admin (or robot account: robot$ci-bot)
- ID: harbor-credentials
- Description: harbor-credentials

Click: Create
Add Jenkins admin credentials
Dashboard → Manage Jenkins → Credentials → System → Global credentials → Add Credentials
Configure Jenkins API credentials:
- Username: admin (Jenkins admin username)
- ID: jenkins-admin-credentials
- Description: Jenkins admin credentials for API authentication and pipeline usage

Click: Create
Use case: Pipeline triggers, REST API calls, remote job execution
Add Sentry CLI token
Dashboard → Manage Jenkins → Credentials → System → Global credentials → Add Credentials
Configure Sentry credentials:
- ID: sentry-auth-token
- Description: Sentry CLI Authentication Token

Click: Create
Use case: Sentry release tracking, source map uploads, error monitoring integration
Add GitHub credentials
Dashboard → Manage Jenkins → Credentials → System → Global credentials → Add Credentials
Configure GitHub credentials:
- ID: github_auth
- Description: GitHub authentication for branch merging and repository operations

Click: Create
How to generate GitHub PAT:
1. Go to GitHub → Settings → Developer settings → Personal access tokens → Tokens (classic)
2. Generate new token with the `repo` scope (Full control of private repositories)
3. Copy the token immediately (shown only once)
Use case: Automated branch merging, repository operations, deployment workflows
Global variables are available to all Jenkins pipelines.
Navigate to system configuration
Dashboard → Manage Jenkins → System
Scroll to Global properties
Check: Environment variables
Add global variables
Click: Add (for each variable)
| Name | Value | Description |
|---|---|---|
| MAIL_JET_API_KEY | (your Mailjet API key) | Email notification service |
| MAIL_JET_API_SECRET | (your Mailjet secret) | Email notification service |
| MAIL_JET_EMAIL_ADDRESS | noreply@arpansahu.space | Sender email address |
| MY_EMAIL_ADDRESS | your-email@example.com | Notification recipient |
Save configuration
Scroll down and click: Save
Jenkins needs Docker access to build containerized applications.
sudo usermod -aG docker jenkins
sudo systemctl restart jenkins
sudo -u jenkins docker ps
Expected: Docker container list (even if empty)
Required if pipelines need to copy files from protected directories.
sudo visudo
Add Jenkins sudo permissions
Add at end of file:
# Allow Jenkins to run specific commands without password
jenkins ALL=(ALL) NOPASSWD: /bin/cp, /bin/mkdir, /bin/chown
Or for full sudo access (less secure):
jenkins ALL=(ALL) NOPASSWD: ALL
Save and exit
In nano: `Ctrl+O`, `Enter`, `Ctrl+X`
In vi: `Esc`, `:wq`, `Enter`
Verify sudo access
sudo -u jenkins sudo -l
Each project needs its own Nginx configuration for deployment.
sudo nano /etc/nginx/sites-available/my-django-app
# Django App - HTTP → HTTPS
server {
listen 80;
listen [::]:80;
server_name myapp.arpansahu.space;
return 301 https://$host$request_uri;
}
# Django App - HTTPS
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name myapp.arpansahu.space;
ssl_certificate /etc/nginx/ssl/arpansahu.space/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/arpansahu.space/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
# WebSocket support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
For Kubernetes deployment (alternative), replace the `proxy_pass` line:
proxy_pass http://<CLUSTER_IP>:30080;
sudo ln -s /etc/nginx/sites-available/my-django-app /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
Create `Jenkinsfile-build` in your project repository root.
Example Jenkinsfile-build:
pipeline {
agent { label 'local' }
environment {
HARBOR_URL = 'harbor.arpansahu.space'
HARBOR_PROJECT = 'library'
IMAGE_NAME = 'my-django-app'
IMAGE_TAG = "${env.BUILD_NUMBER}"
}
stages {
stage('Checkout') {
steps {
checkout scm
}
}
stage('Build Docker Image') {
steps {
script {
docker.build("${HARBOR_URL}/${HARBOR_PROJECT}/${IMAGE_NAME}:${IMAGE_TAG}")
}
}
}
stage('Push to Harbor') {
steps {
script {
docker.withRegistry("https://${HARBOR_URL}", 'harbor-credentials') {
docker.image("${HARBOR_URL}/${HARBOR_PROJECT}/${IMAGE_NAME}:${IMAGE_TAG}").push()
docker.image("${HARBOR_URL}/${HARBOR_PROJECT}/${IMAGE_NAME}:${IMAGE_TAG}").push('latest')
}
}
}
}
stage('Trigger Deploy') {
steps {
build job: 'my-django-app-deploy', wait: false
}
}
}
post {
success {
emailext(
subject: "Build Success: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
body: "Build completed successfully.",
to: "${env.MY_EMAIL_ADDRESS}"
)
}
failure {
emailext(
subject: "Build Failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
body: "Build failed. Check Jenkins console output.",
to: "${env.MY_EMAIL_ADDRESS}"
)
}
}
}
Create `Jenkinsfile-deploy` in your project repository root.
Example Jenkinsfile-deploy:
pipeline {
agent { label 'local' }
environment {
HARBOR_URL = 'harbor.arpansahu.space'
HARBOR_PROJECT = 'library'
IMAGE_NAME = 'my-django-app'
CONTAINER_NAME = 'my-django-app'
CONTAINER_PORT = '8000'
}
stages {
stage('Stop Old Container') {
steps {
script {
sh """
docker stop ${CONTAINER_NAME} || true
docker rm ${CONTAINER_NAME} || true
"""
}
}
}
stage('Pull Latest Image') {
steps {
script {
docker.withRegistry("https://${HARBOR_URL}", 'harbor-credentials') {
docker.image("${HARBOR_URL}/${HARBOR_PROJECT}/${IMAGE_NAME}:latest").pull()
}
}
}
}
stage('Deploy Container') {
steps {
script {
sh """
docker run -d \
--name ${CONTAINER_NAME} \
--restart unless-stopped \
-p ${CONTAINER_PORT}:8000 \
--env-file /var/lib/jenkins/.env/${IMAGE_NAME} \
${HARBOR_URL}/${HARBOR_PROJECT}/${IMAGE_NAME}:latest
"""
}
}
}
stage('Health Check') {
steps {
script {
sleep(time: 10, unit: 'SECONDS')
sh "curl -f http://localhost:${CONTAINER_PORT}/health || exit 1"
}
}
}
}
post {
success {
emailext(
subject: "Deploy Success: ${env.JOB_NAME}",
body: "Deployment completed successfully.",
to: "${env.MY_EMAIL_ADDRESS}"
)
}
failure {
emailext(
subject: "Deploy Failed: ${env.JOB_NAME}",
body: "Deployment failed. Check Jenkins console output.",
to: "${env.MY_EMAIL_ADDRESS}"
)
}
}
}
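The Health Check stage above sleeps a fixed 10 seconds and then probes once; if the container needs longer to start, the deploy fails spuriously. A retry loop is more forgiving. A sketch, with the probe injected so the logic runs anywhere (the `/health` endpoint and port are this project's convention):

```python
import time

def wait_healthy(probe, attempts=5, delay=2.0):
    """Call probe() up to `attempts` times, sleeping `delay` seconds between tries.

    In the pipeline, probe would be something like:
    lambda: subprocess.run(["curl", "-f", "http://localhost:8000/health"]).returncode == 0
    """
    for _ in range(attempts):
        if probe():
            return True
        time.sleep(delay)
    return False

# Simulate a container that only becomes healthy on the third probe.
calls = {"n": 0}
def fake_probe():
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_healthy(fake_probe, attempts=5, delay=0))  # -> True
```

In the Jenkinsfile the same idea is a `retry`/`sleep` loop around the curl call instead of a single fixed sleep.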
Create new pipeline
Dashboard → New Item
Configure pipeline
my-django-app-build
Configure pipeline settings
Configure Pipeline definition
https://github.com/arpansahu/my-django-app.git
github-auth
*/build
Jenkinsfile-build
Save pipeline
Click: Save
Create new pipeline
Dashboard → New Item
Configure pipeline
my-django-app-deploy
Configure pipeline settings
Configure Pipeline definition
https://github.com/arpansahu/my-django-app.git
github-auth
*/main
Jenkinsfile-deploy
Save pipeline
Click: Save
Store sensitive environment variables outside the repository.
sudo mkdir -p /var/lib/jenkins/.env
sudo chown jenkins:jenkins /var/lib/jenkins/.env
sudo nano /var/lib/jenkins/.env/my-django-app
# Django settings
SECRET_KEY=your-secret-key-here
DEBUG=False
ALLOWED_HOSTS=myapp.arpansahu.space
# Database
DATABASE_URL=postgresql://user:pass@db:5432/myapp
# Redis
REDIS_URL=redis://redis:6379/0
# Email
EMAIL_BACKEND=django.core.mail.backends.smtp.EmailBackend
EMAIL_HOST=smtp.mailjet.com
EMAIL_PORT=587
EMAIL_USE_TLS=True
EMAIL_HOST_USER=your-mailjet-api-key
EMAIL_HOST_PASSWORD=your-mailjet-secret
# Sentry
SENTRY_DSN=https://xxx@sentry.io/xxx
sudo chown jenkins:jenkins /var/lib/jenkins/.env/my-django-app
sudo chmod 600 /var/lib/jenkins/.env/my-django-app
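Docker's `--env-file` (used in the deploy pipeline above) accepts plain `KEY=VALUE` lines; blank lines and lines starting with `#` are ignored, and values are taken literally. A minimal parser of that format, useful for validating an env file before a deploy (illustrative, not Docker's own parser):

```python
def parse_env_file(text):
    """Parse KEY=VALUE lines as docker --env-file roughly does (no quote handling)."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # comments and blanks are skipped
        key, _, value = line.partition("=")
        env[key] = value
    return env

sample = """\
# Django settings
SECRET_KEY=your-secret-key-here
DEBUG=False
ALLOWED_HOSTS=myapp.arpansahu.space
"""
print(parse_env_file(sample)["DEBUG"])  # -> False
```

A pre-deploy check could assert that required keys such as `SECRET_KEY` and `ALLOWED_HOSTS` are present before starting the container.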
Install Email Extension Plugin
Dashboard → Manage Jenkins → Plugins → Available plugins
Search: Email Extension Plugin
Click: Install
Configure SMTP settings
Dashboard → Manage Jenkins → System → Extended E-mail Notification
Configure:
- SMTP server: in-v3.mailjet.com
- SMTP port: 587
- Use SMTP Authentication: ✓ Checked
- User Name: ${MAIL_JET_API_KEY}
- Password: ${MAIL_JET_API_SECRET}
- Use TLS: ✓ Checked
- Default user e-mail suffix: @arpansahu.space
Test email configuration
Click: Test configuration by sending test e-mail
Enter: `${MY_EMAIL_ADDRESS}`
Expected: Email received
Save configuration
Click: Save
sudo systemctl status jenkins
sudo systemctl stop jenkins
sudo systemctl start jenkins
sudo systemctl restart jenkins
sudo journalctl -u jenkins -f
sudo tail -f /var/log/jenkins/jenkins.log
# Stop Jenkins
sudo systemctl stop jenkins
# Backup Jenkins home
sudo tar -czf jenkins-backup-$(date +%Y%m%d).tar.gz /var/lib/jenkins
# Start Jenkins
sudo systemctl start jenkins
sudo tar -czf jenkins-config-backup-$(date +%Y%m%d).tar.gz \
/var/lib/jenkins/config.xml \
/var/lib/jenkins/jobs/ \
/var/lib/jenkins/users/ \
/var/lib/jenkins/credentials.xml \
/var/lib/jenkins/secrets/
# Stop Jenkins
sudo systemctl stop jenkins
# Restore backup
sudo tar -xzf jenkins-backup-YYYYMMDD.tar.gz -C /
# Set ownership
sudo chown -R jenkins:jenkins /var/lib/jenkins
# Start Jenkins
sudo systemctl start jenkins
Jenkins not starting
Cause: Java not found or port conflict
Fix:
# Check Java installation
java -version
# Check if port 8080 is in use
sudo ss -tulnp | grep 8080
# Check Jenkins logs
sudo journalctl -u jenkins -n 50
Cannot push to Harbor from Jenkins
Cause: Docker credentials or network issue
Fix:
# Test Docker login as Jenkins user
sudo -u jenkins docker login harbor.arpansahu.space
# Check Jenkins can reach Harbor
sudo -u jenkins curl -I https://harbor.arpansahu.space
Pipeline fails with permission denied
Cause: Jenkins doesn't have Docker access
Fix:
# Add Jenkins to Docker group
sudo usermod -aG docker jenkins
# Restart Jenkins
sudo systemctl restart jenkins
# Verify
sudo -u jenkins docker ps
Email notifications not working
Cause: SMTP configuration incorrect
Fix:
GitHub webhook not triggering builds
Cause: Webhook not configured or firewall blocking
Fix:
# Verify Jenkins is accessible from internet
curl -I https://jenkins.arpansahu.space
# Configure GitHub webhook
# Repository → Settings → Webhooks → Add webhook
# Payload URL: https://jenkins.arpansahu.space/github-webhook/
# Content type: application/json
# Events: Just the push event
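If you set a shared secret on the webhook, GitHub signs each payload with an `X-Hub-Signature-256` header (HMAC-SHA256 of the body); the Jenkins GitHub plugin validates this when the secret is configured. The check itself is simple; a sketch of how such a signature is verified:

```python
import hashlib
import hmac

def verify_signature(secret, payload, header):
    """Check a GitHub X-Hub-Signature-256 header against the raw payload bytes."""
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels
    return hmac.compare_digest(expected, header)

secret = b"webhook-secret"
payload = b'{"ref":"refs/heads/main"}'
good = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
print(verify_signature(secret, payload, good))          # -> True
print(verify_signature(secret, payload, "sha256=bad"))  # -> False
```

Rejecting unsigned or badly signed payloads prevents anyone who discovers the webhook URL from triggering builds.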
Use HTTPS only
Strong authentication
# Enable security realm
Dashboard → Manage Jenkins → Security → Security Realm
Select: Jenkins' own user database
Enable CSRF protection
Dashboard → Manage Jenkins → Security → CSRF Protection
Check: Enable CSRF Protection
Limit build agent connections
Dashboard → Manage Jenkins → Security → Agents
Set: Fixed port (50000) or disable
Use credentials store
Regular updates
# Check for Jenkins updates
Dashboard → Manage Jenkins → System Information
# Update Jenkins
sudo apt update
sudo apt upgrade jenkins
# Automate with cron
sudo crontab -e
Add:
0 2 * * * /usr/local/bin/backup-jenkins.sh
sudo nano /etc/default/jenkins
Add/modify:
JAVA_ARGS="-Xmx2048m -Xms1024m"
Restart Jenkins:
sudo systemctl restart jenkins
Clean old builds
Configure in project:
- Discard old builds
- Keep max 10 builds
- Keep builds for 7 days
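The two limits compose as an intersection: a build survives only if it is among the newest ten and younger than the age cutoff. A sketch of that policy (illustrative, not Jenkins' actual implementation):

```python
from datetime import datetime, timedelta

def builds_to_keep(builds, max_count=10, max_age_days=7, now=None):
    """builds: list of (build_number, finished_at); keep those passing BOTH limits."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=max_age_days)
    recent = [b for b in builds if b[1] >= cutoff]       # drop builds older than cutoff
    recent.sort(key=lambda b: b[0], reverse=True)        # newest build numbers first
    return recent[:max_count]                            # then cap the count

now = datetime(2026, 1, 15)
# build 14 finished 1 day ago, build 1 finished 14 days ago
builds = [(n, now - timedelta(days=15 - n)) for n in range(1, 15)]
print([n for n, _ in builds_to_keep(builds, now=now)])  # -> [14, 13, 12, 11, 10, 9, 8]
```

Everything outside the returned set would be discarded, which keeps `/var/lib/jenkins` from filling the disk.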
Use build agents
Distribute builds across multiple machines instead of building everything on controller.
Check Jenkins system info
Dashboard → Manage Jenkins → System Information
Monitor disk usage
du -sh /var/lib/jenkins/*
Monitor build queue
Dashboard → Build Queue (left sidebar)
View build history
Dashboard → Build History (left sidebar)
Run these commands to verify Jenkins is working:
# Check Jenkins service
sudo systemctl status jenkins
# Check Java version
java -version
# Check port binding
sudo ss -tulnp | grep 8080
# Check Nginx config
sudo nginx -t
# Test HTTPS access
curl -I https://jenkins.arpansahu.space
# Verify Docker access
sudo -u jenkins docker ps
Then test in browser:
- Access: https://jenkins.arpansahu.space
- Login with admin credentials
- Verify all 4 credentials exist
- Create test pipeline
- Run manual build
- Check email notification received
After following this guide, you will have:
| Component | Value |
|---|---|
| Jenkins URL | https://jenkins.arpansahu.space |
| Jenkins Port | 8080 (localhost only) |
| Jenkins Home | /var/lib/jenkins |
| Java Version | OpenJDK 21 |
| Admin User | admin |
| Nginx Config | /etc/nginx/sites-available/services |
Internet (HTTPS)
  │
  └─ Nginx (TLS Termination)
       │  [Wildcard Certificate: *.arpansahu.space]
       │
       └─ jenkins.arpansahu.space (Port 443 → 8080)
            │
            └─ Jenkins Controller
                 │
                 ├─ Credentials Store
                 │    ├─ github-auth
                 │    ├─ harbor-credentials
                 │    ├─ jenkins-admin-credentials
                 │    └─ sentry-auth-token
                 │
                 ├─ Build Pipelines
                 │    ├─ Jenkinsfile-build (Docker build + push)
                 │    └─ Jenkinsfile-deploy (Docker deploy)
                 │
                 └─ Integration
                      ├─ GitHub (webhooks)
                      ├─ Harbor (registry)
                      ├─ Docker (builds)
                      ├─ Mailjet (notifications)
                      └─ Sentry (error tracking)
After setting up Jenkins:
My Jenkins instance: https://jenkins.arpansahu.space
For Harbor integration, see harbor.md documentation.
PostgreSQL is a powerful, open-source relational database system. This setup installs PostgreSQL as a native system service with remote access enabled.
| Test File | Where to Run | Connection Type | Purpose |
|---|---|---|---|
| test_postgres_localhost.py | On Server | localhost:5432 | Test PostgreSQL on server without TLS |
| test_postgres_mac.sh | From Mac | 192.168.1.200:5432 | Test PostgreSQL from Mac (direct IP, no TLS) |
| test_postgres_domain_tls.sh | From Mac | postgres.arpansahu.space:9552 | Test PostgreSQL from Mac with TLS via domain |
Quick Test Commands:
# On Server (localhost)
python3 test_postgres_localhost.py
# From Mac (direct IP, no TLS)
PG_PASSWORD=your_password ./test_postgres_mac.sh
# From Mac (domain with TLS)
PG_PASSWORD=your_password ./test_postgres_domain_tls.sh
CLI Testing (psql):
# On Server (localhost) - As postgres superuser
sudo -u postgres psql -c "SELECT version();"
sudo -u postgres psql -c "\l" # List databases
# On Server (localhost) - With password auth
psql -h localhost -U postgres -d postgres -c "SELECT current_database(), current_user, version();"
# From Mac (direct IP, no TLS)
export PGPASSWORD=your_password
psql -h 192.168.1.200 -p 5432 -U postgres -d postgres -c "SELECT version();"
# From Mac (domain with TLS)
export PGPASSWORD=your_password
psql "host=postgres.arpansahu.space port=9552 user=postgres dbname=postgres sslmode=require" -c "SELECT version();"
# Interactive connection with TLS
psql "host=postgres.arpansahu.space port=9552 user=postgres dbname=postgres sslmode=require"
First, create the environment configuration file that will store your PostgreSQL password.
Create `.env.example` (template file):
# PostgreSQL Configuration
POSTGRES_PASSWORD=your_secure_password_here
Create your actual `.env` file:
cd "AWS Deployment/Postgres"
cp .env.example .env
nano .env
Your `.env` file should look like this (with your actual password):
# PostgreSQL Configuration
POSTGRES_PASSWORD=your_actual_strong_password
⚠️ Important:
- Always use a strong password in production!
- Never commit your `.env` file to version control
- Keep the `.env.example` file as a template
The `install.sh` script installs PostgreSQL and configures it for remote access.
Content of `install.sh`:
#!/bin/bash
set -e
echo "=== PostgreSQL Installation Script ==="
# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
# Load environment variables
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
if [ -f "$SCRIPT_DIR/.env" ]; then
export $(grep -v '^#' "$SCRIPT_DIR/.env" | xargs)
echo -e "${GREEN}Loaded configuration from .env file${NC}"
else
echo -e "${YELLOW}Warning: .env file not found. Using defaults.${NC}"
echo -e "${YELLOW}Please copy .env.example to .env and configure it.${NC}"
fi
# Configuration with defaults
POSTGRES_PASSWORD="${POSTGRES_PASSWORD:-changeme}"
echo -e "${YELLOW}Step 1: Installing PostgreSQL${NC}"
sudo apt update
sudo apt install -y postgresql postgresql-contrib
echo -e "${YELLOW}Step 2: Starting PostgreSQL${NC}"
sudo systemctl start postgresql
sudo systemctl enable postgresql
echo -e "${YELLOW}Step 3: Setting postgres user password${NC}"
sudo -u postgres psql -c "ALTER USER postgres WITH PASSWORD '$POSTGRES_PASSWORD';"
echo -e "${YELLOW}Step 4: Configuring PostgreSQL for remote connections${NC}"
# Backup original config
sudo cp /etc/postgresql/*/main/postgresql.conf /etc/postgresql/*/main/postgresql.conf.bak
sudo cp /etc/postgresql/*/main/pg_hba.conf /etc/postgresql/*/main/pg_hba.conf.bak
# Allow remote connections
PG_VERSION=$(ls /etc/postgresql/)
echo "listen_addresses = '*'" | sudo tee -a /etc/postgresql/$PG_VERSION/main/postgresql.conf
# Allow password authentication
echo "host all all 0.0.0.0/0 md5" | sudo tee -a /etc/postgresql/$PG_VERSION/main/pg_hba.conf
echo "host all all ::/0 md5" | sudo tee -a /etc/postgresql/$PG_VERSION/main/pg_hba.conf
echo -e "${YELLOW}Step 5: Restarting PostgreSQL${NC}"
sudo systemctl restart postgresql
echo -e "${YELLOW}Step 6: Verifying Installation${NC}"
sudo -u postgres psql -c "SELECT version();"
echo -e "${GREEN}PostgreSQL installed successfully!${NC}"
echo -e "Connection details:"
echo -e " Host: localhost (or 192.168.1.200)"
echo -e " Port: 5432"
echo -e " User: postgres"
echo -e " Password: $POSTGRES_PASSWORD"
echo ""
echo -e "${YELLOW}Test connection:${NC}"
echo " psql -h localhost -U postgres -d postgres"
What this script does:
- Installs PostgreSQL and contrib packages
- Enables PostgreSQL service to start on boot
- Sets password for the postgres superuser
- Configures PostgreSQL to accept remote connections
- Updates authentication to use password (md5)
- Backs up original configuration files
Run the installation:
chmod +x install.sh
./install.sh
Expected output:
=== PostgreSQL Installation Script ===
Loaded configuration from .env file
Step 1: Installing PostgreSQL
Step 2: Starting PostgreSQL
Step 3: Setting postgres user password
ALTER ROLE
Step 4: Configuring PostgreSQL for remote connections
Step 5: Restarting PostgreSQL
Step 6: Verifying Installation
PostgreSQL installed successfully!
Verify that PostgreSQL service is running:
# Check service status
sudo systemctl status postgresql
# Check if port is listening
sudo ss -lntp | grep 5432
Expected output:
Active: active (exited) since ...
LISTEN 0 244 0.0.0.0:5432 0.0.0.0:*
Test PostgreSQL connection from the server:
# Connect as postgres user
sudo -u postgres psql
# Or with password authentication
psql -h localhost -U postgres -d postgres
# Enter password when prompted
# Check version
sudo -u postgres psql -c "SELECT version();"
# List databases
sudo -u postgres psql -c "\l"
# List users
sudo -u postgres psql -c "\du"
This script tests PostgreSQL connectivity from the server using psycopg2.
Create `test_postgres_server.py`:
#!/usr/bin/env python3
"""
PostgreSQL Server Connection Test
Tests PostgreSQL connectivity from the server using psycopg2
"""
import os
import sys

try:
    import psycopg2
except ImportError:
    print("✗ Error: psycopg2 not installed")
    print("Install with: pip3 install psycopg2-binary")
    sys.exit(1)

def test_postgres():
    try:
        print("=== Testing PostgreSQL Connection from Server ===\n")

        # Connection parameters (password comes from the environment, not hardcoded)
        conn_params = {
            'host': 'localhost',
            'port': 5432,
            'user': 'postgres',
            'password': os.environ.get('POSTGRES_PASSWORD', 'changeme'),
            'database': 'postgres'
        }

        print(f"Connecting to PostgreSQL at {conn_params['host']}:{conn_params['port']}...")
        conn = psycopg2.connect(**conn_params)
        print("✓ Connection successful\n")

        # Get version
        cursor = conn.cursor()
        cursor.execute("SELECT version();")
        version = cursor.fetchone()[0]
        print(f"✓ PostgreSQL version: {version.split(',')[0]}\n")

        # Test database operations
        cursor.execute("CREATE TABLE IF NOT EXISTS test_table (id SERIAL PRIMARY KEY, data TEXT);")
        print("✓ Table created: test_table\n")

        cursor.execute("INSERT INTO test_table (data) VALUES (%s) RETURNING id;", ("Hello from Server!",))
        test_id = cursor.fetchone()[0]
        conn.commit()
        print(f"✓ Record inserted with ID: {test_id}\n")

        cursor.execute("SELECT * FROM test_table WHERE id = %s;", (test_id,))
        record = cursor.fetchone()
        print(f"✓ Record retrieved: ID={record[0]}, Data={record[1]}\n")

        # Clean up
        cursor.execute("DROP TABLE test_table;")
        conn.commit()
        print("✓ Test table dropped\n")

        cursor.close()
        conn.close()

        print("✓ All tests passed!")
        print("✓ PostgreSQL is working correctly\n")
        return 0

    except psycopg2.OperationalError as e:
        print(f"✗ Connection Error: {e}")
        print("  Check if PostgreSQL is running: sudo systemctl status postgresql")
        return 1
    except psycopg2.Error as e:
        print(f"✗ Database Error: {e}")
        return 1
    except Exception as e:
        print(f"✗ Error: {e}")
        return 1

if __name__ == "__main__":
    sys.exit(test_postgres())
Run on server:
# Install psycopg2 if not already installed
pip3 install psycopg2-binary
# Run the test
POSTGRES_PASSWORD=your_password python3 test_postgres_server.py
Expected output:
=== Testing PostgreSQL Connection from Server ===
Connecting to PostgreSQL at localhost:5432...
✓ Connection successful
✓ PostgreSQL version: PostgreSQL 14.x
✓ Table created: test_table
✓ Record inserted with ID: 1
✓ Record retrieved: ID=1, Data=Hello from Server!
✓ Test table dropped
✓ All tests passed!
✓ PostgreSQL is working correctly
This script tests PostgreSQL connectivity from your Mac.
Create `test_postgres_mac.sh`:
#!/bin/bash
# PostgreSQL Mac Connection Test
# Tests PostgreSQL connectivity from your Mac
set -e
echo "=== Testing PostgreSQL Connection from Mac ==="
echo ""
# Colors
GREEN='\033[0;32m'
RED='\033[0;31m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Configuration
PG_HOST="${PG_HOST:-192.168.1.200}"
PG_PORT="${PG_PORT:-5432}"
PG_USER="${PG_USER:-postgres}"
PG_PASSWORD="${PG_PASSWORD:-changeme}"
PG_DATABASE="${PG_DATABASE:-postgres}"
# Test 1: Check if PostgreSQL is accessible
echo -e "${YELLOW}Test 1: Checking PostgreSQL accessibility...${NC}"
if command -v psql &> /dev/null; then
    export PGPASSWORD="$PG_PASSWORD"
    if psql -h "$PG_HOST" -p "$PG_PORT" -U "$PG_USER" -d "$PG_DATABASE" -c "SELECT 1;" > /dev/null 2>&1; then
        echo -e "${GREEN}✓ PostgreSQL is accessible${NC}"
    else
        echo -e "${RED}✗ Failed to connect to PostgreSQL${NC}"
        echo "  Make sure PostgreSQL is configured to allow remote connections"
        exit 1
    fi
else
    echo -e "${RED}✗ psql command not found${NC}"
    echo "  Install with: brew install postgresql"
    exit 1
fi
echo ""
# Test 2: Get PostgreSQL version
echo -e "${YELLOW}Test 2: Getting PostgreSQL version...${NC}"
VERSION=$(psql -h "$PG_HOST" -p "$PG_PORT" -U "$PG_USER" -d "$PG_DATABASE" -t -c "SELECT version();" 2>/dev/null | xargs)
echo -e "${GREEN}✓ PostgreSQL version: ${VERSION:0:50}...${NC}"
echo ""
# Test 3: List databases
echo -e "${YELLOW}Test 3: Listing databases...${NC}"
DATABASES=$(psql -h "$PG_HOST" -p "$PG_PORT" -U "$PG_USER" -d "$PG_DATABASE" -t -c "\l" 2>/dev/null | grep -c "|")
echo -e "${GREEN}✓ Found $DATABASES databases${NC}"
echo ""
# Test 4: Create test table
echo -e "${YELLOW}Test 4: Creating test table...${NC}"
psql -h "$PG_HOST" -p "$PG_PORT" -U "$PG_USER" -d "$PG_DATABASE" -c "CREATE TABLE IF NOT EXISTS mac_test_table (id SERIAL PRIMARY KEY, data TEXT, created_at TIMESTAMP DEFAULT NOW());" > /dev/null 2>&1
echo -e "${GREEN}✓ Test table created${NC}"
echo ""
# Test 5: Insert and retrieve data
echo -e "${YELLOW}Test 5: Testing data operations...${NC}"
psql -h "$PG_HOST" -p "$PG_PORT" -U "$PG_USER" -d "$PG_DATABASE" -c "INSERT INTO mac_test_table (data) VALUES ('Hello from Mac!');" > /dev/null 2>&1
RECORD=$(psql -h "$PG_HOST" -p "$PG_PORT" -U "$PG_USER" -d "$PG_DATABASE" -t -c "SELECT data FROM mac_test_table ORDER BY id DESC LIMIT 1;" 2>/dev/null | xargs)
echo -e "${GREEN}✓ Record inserted and retrieved: '$RECORD'${NC}"
echo ""
# Test 6: Clean up
echo -e "${YELLOW}Test 6: Cleaning up test table...${NC}"
psql -h "$PG_HOST" -p "$PG_PORT" -U "$PG_USER" -d "$PG_DATABASE" -c "DROP TABLE IF EXISTS mac_test_table;" > /dev/null 2>&1
echo -e "${GREEN}✓ Test table dropped${NC}"
echo ""
# Unset password
unset PGPASSWORD
echo -e "${GREEN}========================================${NC}"
echo -e "${GREEN}✓ All tests passed!${NC}"
echo -e "${GREEN}✓ PostgreSQL is working correctly${NC}"
echo -e "${GREEN}========================================${NC}"
Run from your Mac:
# Make script executable
chmod +x test_postgres_mac.sh
# Set environment variables (optional)
export PG_HOST=192.168.1.200
export PG_USER=postgres
export PG_PASSWORD=your_password
# Run the test
./test_postgres_mac.sh
Expected output:
=== Testing PostgreSQL Connection from Mac ===
Test 1: Checking PostgreSQL accessibility...
✓ PostgreSQL is accessible
Test 2: Getting PostgreSQL version...
✓ PostgreSQL version: PostgreSQL 14.x (Ubuntu 14.x-1.pgdg22.04+1) on x86...
Test 3: Listing databases...
✓ Found 3 databases
Test 4: Creating test table...
✓ Test table created
Test 5: Testing data operations...
✓ Record inserted and retrieved: 'Hello from Mac!'
Test 6: Cleaning up test table...
✓ Test table dropped
========================================
✓ All tests passed!
✓ PostgreSQL is working correctly
========================================
For secure connections from outside your local network, you can set up an nginx stream proxy with TLS encryption.
Create `nginx-stream.conf` file:
# Add this to the stream block in /etc/nginx/nginx.conf
# PostgreSQL TCP Passthrough (PostgreSQL handles SSL itself)
# Note: Unlike Redis, PostgreSQL uses binary protocol that cannot work
# through nginx SSL termination. We use TCP passthrough instead.
upstream postgres_backend {
server 127.0.0.1:5432;
}
server {
listen 9552;
proxy_pass postgres_backend;
proxy_connect_timeout 10s;
proxy_timeout 300s;
}
Important: Why TCP Passthrough Instead of SSL Termination?
PostgreSQL uses a complex binary protocol with specific handshake requirements that get corrupted when nginx terminates SSL and forwards plain TCP. This is different from Redis, which uses a simple text-based protocol that works fine with nginx SSL termination.
With TCP passthrough:
- Nginx simply forwards TCP packets without decryption (no SSL configuration on nginx)
- PostgreSQL handles SSL/TLS encryption itself (already configured in postgresql.conf with `ssl=on`)
- Connection is still encrypted end-to-end using PostgreSQL's native SSL
- Port 9552 is used instead of default 5432 for security (avoiding bot scans on standard ports)
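The connection parameters used throughout this section can also be assembled programmatically. A minimal Python sketch (the `pg_dsn` helper is ours, written for illustration; the hostname is this guide's example):

```python
# Build a libpq-style DSN for the TCP-passthrough setup described above.
# With sslmode=require, psycopg2/libpq negotiates TLS with PostgreSQL
# itself; nginx only forwards the encrypted TCP stream.
def pg_dsn(host, port, user="postgres", dbname="postgres", sslmode="require"):
    return f"host={host} port={port} user={user} dbname={dbname} sslmode={sslmode}"

print(pg_dsn("postgres.arpansahu.space", 9552))
# host=postgres.arpansahu.space port=9552 user=postgres dbname=postgres sslmode=require
```

The resulting string can be passed directly to `psql "…"` or `psycopg2.connect(dsn)`.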
⚠️ Required for external access (from outside your home network)
If you want to access PostgreSQL from outside your local network (e.g., from mobile data, other locations), you need to configure port forwarding on your router.
Steps for Airtel Router:
1. Open http://192.168.1.1 and enter admin credentials
2. Navigate to Port Forwarding: NAT → Port Forwarding tab
3. Click "Add new rule"
4. Configure the port forwarding rule: Protocol: TCP (or TCP/UDP)
5. Activate the rule
6. Verify port forwarding:
# From external network (mobile data or different location)
psql "host=postgres.arpansahu.space port=9552 user=postgres dbname=postgres sslmode=require" -c "SELECT version();"
Note: Port forwarding is NOT required if you only access PostgreSQL from devices on the same local network (192.168.1.x).
Create `add-nginx-stream.sh` file:
#!/bin/bash
set -e
echo "=== PostgreSQL Nginx Stream Configuration Script ==="
# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m'
# Check if running as root
if [ "$EUID" -ne 0 ]; then
echo -e "${RED}Please run with sudo${NC}"
exit 1
fi
echo -e "${YELLOW}Step 1: Backing up nginx.conf${NC}"
cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.backup-postgres-$(date +%Y%m%d-%H%M%S)
echo -e "${YELLOW}Step 2: Adding PostgreSQL stream configuration${NC}"
# Check if stream block already exists
if ! grep -q "stream {" /etc/nginx/nginx.conf; then
echo -e "${YELLOW}Stream block not found, adding it to nginx.conf${NC}"
cat >> /etc/nginx/nginx.conf << 'EOF'
# Stream configuration for TCP/UDP load balancing
stream {
# PostgreSQL TCP Passthrough (PostgreSQL handles SSL itself)
upstream postgres_backend {
server 127.0.0.1:5432;
}
server {
listen 9552;
proxy_pass postgres_backend;
proxy_connect_timeout 10s;
proxy_timeout 300s;
}
}
EOF
echo -e "${GREEN}Stream block with PostgreSQL configuration added${NC}"
else
# Check if PostgreSQL stream config already exists
if grep -q "# PostgreSQL" /etc/nginx/nginx.conf; then
echo -e "${YELLOW}PostgreSQL stream configuration already exists, skipping...${NC}"
else
# Get the script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Insert the configuration before the closing brace of the stream block
# Read the stream config and append it
grep -v "^# Add this to" "$SCRIPT_DIR/nginx-stream.conf" | \
sed -i '/^stream {/r /dev/stdin' /etc/nginx/nginx.conf
echo -e "${GREEN}PostgreSQL stream configuration added${NC}"
fi
fi
echo -e "${YELLOW}Step 3: Testing nginx configuration${NC}"
if nginx -t; then
echo -e "${GREEN}Nginx configuration is valid${NC}"
else
echo -e "${RED}Nginx configuration test failed!${NC}"
echo "Restoring backup..."
cp "$(ls -t /etc/nginx/nginx.conf.backup-postgres-* | head -1)" /etc/nginx/nginx.conf  # restore the most recent backup (re-running date would point at a non-existent file)
exit 1
fi
echo -e "${YELLOW}Step 4: Reloading nginx${NC}"
systemctl reload nginx
echo -e "${YELLOW}Step 5: Checking if port 9552 is listening${NC}"
sleep 2
if ss -lntp | grep -q ":9552"; then
echo -e "${GREEN}✓ Nginx is listening on port 9552${NC}"
else
echo -e "${RED}✗ Port 9552 is not listening${NC}"
exit 1
fi
echo -e "${GREEN}========================================${NC}"
echo -e "${GREEN}PostgreSQL Nginx Stream Setup Complete!${NC}"
echo -e "${GREEN}========================================${NC}"
echo ""
echo "PostgreSQL is now accessible via:"
echo " - Local: localhost:5432"
echo " - TLS (via nginx): postgres.arpansahu.space:9552"
echo ""
echo "Test connection:"
echo " psql 'host=postgres.arpansahu.space port=9552 user=postgres dbname=postgres sslmode=require'"
Run the setup:
chmod +x add-nginx-stream.sh
sudo ./add-nginx-stream.sh
Create `test_postgres_mac_tls.sh` file:
#!/bin/bash
# PostgreSQL Mac TLS Connection Test (via Nginx)
# Tests PostgreSQL connectivity from your Mac through nginx TLS proxy
set -e
echo "=== Testing PostgreSQL TLS Connection from Mac (via Nginx) ==="
echo ""
# Colors
GREEN='\033[0;32m'
RED='\033[0;31m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Configuration
PG_HOST="${PG_HOST:-postgres.arpansahu.space}"
PG_PORT="${PG_PORT:-9552}"
PG_USER="${PG_USER:-postgres}"
PG_PASSWORD="${PG_PASSWORD:-changeme}"
PG_DATABASE="${PG_DATABASE:-postgres}"
# Test 1: Check if PostgreSQL is accessible via TLS
echo -e "${YELLOW}Test 1: Checking PostgreSQL TLS accessibility...${NC}"
if command -v psql &> /dev/null; then
export PGPASSWORD="$PG_PASSWORD"
if psql "host=$PG_HOST port=$PG_PORT user=$PG_USER dbname=$PG_DATABASE sslmode=require" -c "SELECT 1;" > /dev/null 2>&1; then
echo -e "${GREEN}✓ PostgreSQL is accessible via TLS (nginx proxy)${NC}"
else
echo -e "${RED}✗ Failed to connect to PostgreSQL via TLS${NC}"
echo " Make sure nginx stream is configured and port 9552 is open"
exit 1
fi
else
echo -e "${RED}✗ psql command not found${NC}"
echo " Install with: brew install postgresql"
exit 1
fi
echo ""
# Test 2: Verify TLS connection is being used
echo -e "${YELLOW}Test 2: Verifying TLS connection...${NC}"
CONNECTION_INFO=$(psql "host=$PG_HOST port=$PG_PORT user=$PG_USER dbname=$PG_DATABASE sslmode=require" -t -c "SELECT 'Connected via TLS on port $PG_PORT';" 2>/dev/null | xargs)
echo -e "${GREEN}✓ $CONNECTION_INFO${NC}"
echo ""
# Test 3: Get PostgreSQL version via TLS
echo -e "${YELLOW}Test 3: Getting PostgreSQL version via TLS...${NC}"
VERSION=$(psql "host=$PG_HOST port=$PG_PORT user=$PG_USER dbname=$PG_DATABASE sslmode=require" -t -c "SELECT version();" 2>/dev/null | xargs)
echo -e "${GREEN}✓ PostgreSQL version: ${VERSION:0:50}...${NC}"
echo ""
# Test 4: List databases via TLS
echo -e "${YELLOW}Test 4: Listing databases via TLS...${NC}"
DATABASES=$(psql "host=$PG_HOST port=$PG_PORT user=$PG_USER dbname=$PG_DATABASE sslmode=require" -t -c "\l" 2>/dev/null | grep -c "|")
echo -e "${GREEN}✓ Found $DATABASES databases${NC}"
echo ""
# Test 5: Create test table via TLS
echo -e "${YELLOW}Test 5: Creating test table via TLS...${NC}"
psql "host=$PG_HOST port=$PG_PORT user=$PG_USER dbname=$PG_DATABASE sslmode=require" -c "CREATE TABLE IF NOT EXISTS mac_tls_test_table (id SERIAL PRIMARY KEY, data TEXT, created_at TIMESTAMP DEFAULT NOW());" > /dev/null 2>&1
echo -e "${GREEN}✓ Test table created via TLS${NC}"
echo ""
# Test 6: Insert and retrieve data via TLS
echo -e "${YELLOW}Test 6: Testing data operations via TLS...${NC}"
psql "host=$PG_HOST port=$PG_PORT user=$PG_USER dbname=$PG_DATABASE sslmode=require" -c "INSERT INTO mac_tls_test_table (data) VALUES ('Hello from Mac via TLS!');" > /dev/null 2>&1
RECORD=$(psql "host=$PG_HOST port=$PG_PORT user=$PG_USER dbname=$PG_DATABASE sslmode=require" -t -c "SELECT data FROM mac_tls_test_table ORDER BY id DESC LIMIT 1;" 2>/dev/null | xargs)
echo -e "${GREEN}✓ Record inserted and retrieved via TLS: '$RECORD'${NC}"
echo ""
# Test 7: Clean up
echo -e "${YELLOW}Test 7: Cleaning up test table...${NC}"
psql "host=$PG_HOST port=$PG_PORT user=$PG_USER dbname=$PG_DATABASE sslmode=require" -c "DROP TABLE IF EXISTS mac_tls_test_table;" > /dev/null 2>&1
echo -e "${GREEN}✓ Test table dropped${NC}"
echo ""
# Unset password
unset PGPASSWORD
echo -e "${GREEN}========================================${NC}"
echo -e "${GREEN}✓ All TLS tests passed!${NC}"
echo -e "${GREEN}✓ PostgreSQL is working correctly via Nginx TLS proxy${NC}"
echo -e "${GREEN}========================================${NC}"
echo ""
echo "Connection used:"
echo " Host: $PG_HOST"
echo " Port: $PG_PORT (TLS via nginx)"
echo " SSL Mode: require"
Run from your Mac:
# Make script executable
chmod +x test_postgres_mac_tls.sh
# Set password and run
PG_PASSWORD=your_password ./test_postgres_mac_tls.sh
Expected output:
=== Testing PostgreSQL TLS Connection from Mac (via Nginx) ===
Test 1: Checking PostgreSQL TLS accessibility...
✓ PostgreSQL is accessible via TLS (nginx proxy)
Test 2: Verifying TLS connection...
✓ Connected via TLS on port 9552
Test 3: Getting PostgreSQL version via TLS...
✓ PostgreSQL version: PostgreSQL 14.x (Ubuntu 14.x-1.pgdg22.04+1) on x86...
Test 4: Listing databases via TLS...
✓ Found 3 databases
Test 5: Creating test table via TLS...
✓ Test table created via TLS
Test 6: Testing data operations via TLS...
✓ Record inserted and retrieved via TLS: 'Hello from Mac via TLS!'
Test 7: Cleaning up test table...
✓ Test table dropped
========================================
✓ All TLS tests passed!
✓ PostgreSQL is working correctly via Nginx TLS proxy
========================================
Connection used:
Host: postgres.arpansahu.space
Port: 9552 (TLS via nginx)
SSL Mode: require
After successful installation, your PostgreSQL setup will have:
- Host (local): 192.168.1.200 or localhost
- Port (local): 5432
- Host (TLS): postgres.arpansahu.space
- Port (TLS): 9552 (via nginx)
- User: postgres
- Password: ${POSTGRES_PASSWORD} (from your .env file)
# Method 1: Using createdb command
sudo -u postgres createdb myapp_db
# Method 2: Using SQL
sudo -u postgres psql -c "CREATE DATABASE myapp_db;"
sudo -u postgres psql << EOF
CREATE USER myapp_user WITH PASSWORD 'myapp_password';
GRANT ALL PRIVILEGES ON DATABASE myapp_db TO myapp_user;
ALTER DATABASE myapp_db OWNER TO myapp_user;
EOF
# As postgres superuser
sudo -u postgres psql -d myapp_db
# As custom user
psql -h localhost -U myapp_user -d myapp_db
# From remote machine
psql -h 192.168.1.200 -U myapp_user -d myapp_db
# Backup single database
sudo -u postgres pg_dump myapp_db > myapp_db_backup_$(date +%Y%m%d).sql
# Backup all databases
sudo -u postgres pg_dumpall > all_databases_backup_$(date +%Y%m%d).sql
# Compressed backup
sudo -u postgres pg_dump myapp_db | gzip > myapp_db_backup_$(date +%Y%m%d).sql.gz
# Restore from backup
sudo -u postgres psql myapp_db < myapp_db_backup_20240101.sql
# Restore compressed backup
gunzip < myapp_db_backup_20240101.sql.gz | sudo -u postgres psql myapp_db
# Restore all databases
sudo -u postgres psql < all_databases_backup_20240101.sql
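For scripted backups, the dated-filename convention above can be generated in Python. A sketch under our own naming (the `pg_dump_command` helper is hypothetical, written for this guide, not part of the repo):

```python
import datetime
import shlex

def pg_dump_command(db, compress=False, when=None):
    """Build the same dated backup command shown above for a single database."""
    when = when or datetime.date.today()
    stamp = when.strftime("%Y%m%d")
    base = f"sudo -u postgres pg_dump {shlex.quote(db)}"
    if compress:
        return f"{base} | gzip > {db}_backup_{stamp}.sql.gz"
    return f"{base} > {db}_backup_{stamp}.sql"

print(pg_dump_command("myapp_db", when=datetime.date(2024, 1, 1)))
# sudo -u postgres pg_dump myapp_db > myapp_db_backup_20240101.sql
```

Dropping the `when` argument uses today's date, matching the `$(date +%Y%m%d)` shell idiom above.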
Install the psycopg2 library:
pip install psycopg2-binary
Basic connection:
import psycopg2
# Connection parameters
conn = psycopg2.connect(
host='192.168.1.200',
port=5432,
database='myapp_db',
user='myapp_user',
password='myapp_password'
)
# Create cursor
cursor = conn.cursor()
# Execute query
cursor.execute("SELECT * FROM users;")
rows = cursor.fetchall()
for row in rows:
print(row)
# Close connection
cursor.close()
conn.close()
Add PostgreSQL configuration in `settings.py`:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'myapp_db',
'USER': 'myapp_user',
'PASSWORD': 'myapp_password',
'HOST': '192.168.1.200',
'PORT': '5432',
}
}
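To keep credentials out of version control, the same block can be driven by environment variables. A sketch, assuming hypothetical `POSTGRES_*` variable names (not defined elsewhere in this guide):

```python
import os

# Same DATABASES block, reading environment variables with the
# hard-coded values above as fallbacks (variable names are illustrative).
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.getenv("POSTGRES_DB", "myapp_db"),
        "USER": os.getenv("POSTGRES_USER", "myapp_user"),
        "PASSWORD": os.getenv("POSTGRES_PASSWORD", "myapp_password"),
        "HOST": os.getenv("POSTGRES_HOST", "192.168.1.200"),
        "PORT": os.getenv("POSTGRES_PORT", "5432"),
    }
}
```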
Install psycopg2:
pip install psycopg2-binary
Run migrations:
python manage.py migrate
If PostgreSQL service is not running:
# Check service status
sudo systemctl status postgresql
# Start service
sudo systemctl start postgresql
# Restart service
sudo systemctl restart postgresql
# View logs
sudo journalctl -u postgresql -n 50
If you cannot connect remotely:
# Check if PostgreSQL is listening on all interfaces
sudo ss -lntp | grep 5432
# Should show: 0.0.0.0:5432
# Check postgresql.conf
PG_VERSION=$(ls /etc/postgresql/)
sudo grep listen_addresses /etc/postgresql/$PG_VERSION/main/postgresql.conf
# Should show: listen_addresses = '*'
# Check pg_hba.conf for remote access rules
sudo cat /etc/postgresql/$PG_VERSION/main/pg_hba.conf | grep "0.0.0.0/0"
# Should show: host all all 0.0.0.0/0 md5
# Restart after changes
sudo systemctl restart postgresql
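Before digging into pg_hba.conf, it can help to confirm the port is reachable at all. A small Python probe (the `port_open` function is ours, for illustration):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From a remote machine you would probe e.g. port_open("192.168.1.200", 5432)
print(port_open("127.0.0.1", 5432))
```

A `False` here (from a host that should have access) points at `listen_addresses`, the firewall, or the service being down, rather than authentication.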
If you get "authentication failed" errors:
# Reset postgres password
sudo -u postgres psql -c "ALTER USER postgres WITH PASSWORD 'new_password';"
# Check pg_hba.conf authentication method
PG_VERSION=$(ls /etc/postgresql/)
sudo cat /etc/postgresql/$PG_VERSION/main/pg_hba.conf
# Ensure md5 authentication is enabled (not peer or ident)
"Server closed connection unexpectedly" error:
This occurs when trying to use nginx SSL termination with PostgreSQL. Unlike Redis (text protocol), PostgreSQL uses a binary protocol that cannot work through nginx SSL termination.
Solution:
Use TCP passthrough configuration (already configured in our setup):
- Nginx configuration should NOT have SSL directives (`listen 9552;` instead of `listen 9552 ssl;`)
- PostgreSQL handles SSL encryption itself (configured with `ssl=on` in postgresql.conf)
- Connection is still encrypted end-to-end, just by PostgreSQL instead of nginx
Verify configuration:
# Check nginx stream configuration
sudo grep -A 10 "postgres_backend" /etc/nginx/nginx.conf
# Should NOT show ssl_certificate or ssl_certificate_key
# Check if port 9552 is listening
sudo ss -lntp | grep 9552
# Should show nginx listening on port 9552
# Test connection from Mac
psql "host=postgres.arpansahu.space port=9552 user=postgres dbname=postgres sslmode=prefer"
Why TCP passthrough instead of SSL termination?
- Redis: Simple text-based protocol → works with nginx SSL termination
- PostgreSQL: Complex binary protocol with handshake → requires TCP passthrough
- Both methods provide end-to-end encryption, just handled at different layers
If port 5432 is already in use:
# Check what's using the port
sudo ss -lntp | grep 5432
# Kill the process if needed
sudo kill -9 <PID>
# Or change PostgreSQL port in postgresql.conf
- .env.example
- .env (create from .env.example)
- install.sh
- nginx-stream.conf
- add-nginx-stream.sh
- test_postgres_localhost.py - Run on server
- test_postgres_mac.sh - Run from Mac
- test_postgres_mac_tls.sh - Run from Mac
# Install PostgreSQL
./install.sh
# Set up nginx TLS proxy (optional)
sudo ./add-nginx-stream.sh
# Service management
sudo systemctl start postgresql
sudo systemctl stop postgresql
sudo systemctl restart postgresql
sudo systemctl status postgresql
# Connect to database
sudo -u postgres psql
psql -h localhost -U postgres -d postgres
# Connect via TLS (from remote)
psql 'host=postgres.arpansahu.space port=9552 user=postgres dbname=postgres sslmode=require'
# Create database
sudo -u postgres createdb myapp_db
# Backup database
sudo -u postgres pg_dump myapp_db > backup.sql
# Test connections
python3 test_postgres_localhost.py # On server
PG_PASSWORD=password ./test_postgres_mac.sh # From Mac (direct)
PG_PASSWORD=password ./test_postgres_mac_tls.sh # From Mac (TLS)
# View logs
sudo journalctl -u postgresql -f
/etc/postgresql/*/main/postgresql.conf
/etc/postgresql/*/main/pg_hba.conf
/var/lib/postgresql/*/main/
/var/log/postgresql/
Use PgAdmin for GUI management:
- URL: https://pgadmin.arpansahu.space
- Add server with: Host=192.168.1.200, Port=5432, User=postgres
- See the PgAdmin README for details
[Your Application] ────────────────────────────┐
        │                                      │
        │ Direct: Port 5432 (TCP)              │ TLS: Port 9552 (TCP)
        ▼                                      ▼
[PostgreSQL Server]                  [Nginx Stream Proxy]
(Native system service)              (TCP passthrough)
        │                                      │
[/var/lib/postgresql/data]                     │
        ▲                                      │
        └──────────────────────────────────────┘
                    localhost:5432
Access Methods:
- Local/Direct: Connect directly to 192.168.1.200:5432 (no encryption)
- Remote/TLS: Connect via postgres.arpansahu.space:9552 (TLS encrypted end-to-end by PostgreSQL, passed through nginx)
- No Proxy Layer: Direct database access on port 5432
- TLS Proxy: Nginx stream passes TLS connections through to PostgreSQL
Redis is a high-performance in-memory data store. This setup provides secure Redis with TLS encryption via Nginx.
| Test File | Where to Run | Connection Type | Purpose |
|---|---|---|---|
| test_redis_localhost.py | On Server | localhost:6380 | Test Redis on server without TLS |
| test_redis_domain_tls.py | From Mac | redis.arpansahu.space:9551 | Test Redis from Mac with TLS via domain |
Quick Test Commands:
# On Server (localhost)
python3 test_redis_localhost.py
# From Mac (domain with TLS)
python3 test_redis_domain_tls.py
CLI Testing (redis-cli):
# On Server (localhost) - Basic commands
redis-cli -h 127.0.0.1 -p 6380 -a ${REDIS_PASSWORD} ping
redis-cli -h 127.0.0.1 -p 6380 -a ${REDIS_PASSWORD} SET test "hello"
redis-cli -h 127.0.0.1 -p 6380 -a ${REDIS_PASSWORD} GET test
# From Mac (domain with TLS) - Requires redis-cli with TLS support
redis-cli -h redis.arpansahu.space -p 9551 --tls --insecure -a ${REDIS_PASSWORD} ping
redis-cli -h redis.arpansahu.space -p 9551 --tls --insecure -a ${REDIS_PASSWORD} INFO server
First, create the environment configuration file that will store your Redis password and port.
Create `.env.example` (template file):
# Redis Configuration
REDIS_PASSWORD=your_secure_password_here
REDIS_PORT=6380
Create your actual `.env` file:
cd "AWS Deployment/redis"
cp .env.example .env
nano .env
Your `.env` file should look like this (with your actual password):
# Redis Configuration
REDIS_PASSWORD=Kesar302redis
REDIS_PORT=6380
⚠️ Important:
- Always change the default password in production!
- Never commit your `.env` file to version control
- Keep the `.env.example` file as a template
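For reference, the `export $(grep -v '^#' .env | xargs)` line used later in install.sh corresponds to roughly this parsing logic. A simplified sketch (the `parse_env` helper is ours; it ignores quoting and multi-line values):

```python
def parse_env(text):
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

print(parse_env("# Redis Configuration\nREDIS_PASSWORD=changeme\nREDIS_PORT=6380\n"))
# {'REDIS_PASSWORD': 'changeme', 'REDIS_PORT': '6380'}
```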
Create the `install.sh` script that will automatically install Redis using the environment variables.
Create `install.sh` file:
#!/bin/bash
set -e
echo "=== Redis Installation Script ==="
# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
# Load environment variables
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
if [ -f "$SCRIPT_DIR/.env" ]; then
export $(grep -v '^#' "$SCRIPT_DIR/.env" | xargs)
echo -e "${GREEN}Loaded configuration from .env file${NC}"
else
echo -e "${YELLOW}Warning: .env file not found. Using defaults.${NC}"
echo -e "${YELLOW}Please copy .env.example to .env and configure it.${NC}"
fi
# Configuration with defaults
REDIS_PASSWORD="${REDIS_PASSWORD:-Kesar302redis}"
REDIS_PORT="${REDIS_PORT:-6380}"
echo -e "${YELLOW}Step 1: Running Redis Container${NC}"
docker run -d \
--name redis-external \
--restart unless-stopped \
-p 127.0.0.1:${REDIS_PORT}:6379 \
redis:7 \
redis-server --requirepass "$REDIS_PASSWORD"
echo -e "${YELLOW}Step 2: Waiting for Redis to start...${NC}"
sleep 3
echo -e "${YELLOW}Step 3: Verifying Installation${NC}"
docker ps | grep redis-external
echo -e "${GREEN}Redis installed successfully!${NC}"
echo -e "Container: redis-external"
echo -e "Port: 127.0.0.1:${REDIS_PORT}"
echo -e "Password: $REDIS_PASSWORD"
echo ""
echo -e "${YELLOW}Test connection:${NC}"
echo "redis-cli -h 127.0.0.1 -p ${REDIS_PORT} -a $REDIS_PASSWORD ping"
echo ""
echo -e "${YELLOW}Next steps for HTTPS access:${NC}"
echo "1. Configure Nginx stream block in /etc/nginx/nginx.conf"
echo "2. See nginx-stream.conf for configuration"
echo "3. Test and reload: sudo nginx -t && sudo systemctl reload nginx"
Make it executable and run:
chmod +x install.sh
./install.sh
Expected output:
=== Redis Installation Script ===
Loaded configuration from .env file
Step 1: Running Redis Container
Step 2: Waiting for Redis to start...
Step 3: Verifying Installation
redis-external
Redis installed successfully!
Container: redis-external
Port: 127.0.0.1:6380
Password: Kesar302redis
Redis uses a raw TCP protocol, not HTTP, so we need to configure Nginx's stream module (a Layer 4 proxy).
Create `nginx-stream.conf` file:
# Add this to the stream {} block in /etc/nginx/nginx.conf
stream {
# Redis upstream
upstream redis_upstream {
server 127.0.0.1:6380;
}
# Redis with TLS
server {
listen 9551 ssl;
proxy_pass redis_upstream;
# SSL Configuration
ssl_certificate /etc/nginx/ssl/arpansahu.space/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/arpansahu.space/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
# Connection settings
proxy_connect_timeout 5s;
proxy_timeout 300s;
proxy_buffer_size 16k;
}
}
What this configuration does:
- Listens on port 9551 with SSL/TLS encryption
- Proxies TCP connections to Redis on localhost:6380
- Uses your wildcard SSL certificate for *.arpansahu.space
- Allows external TLS connections to redis.arpansahu.space:9551
You have two options to apply the stream configuration:
Create `add-nginx-stream.sh` script:
#!/bin/bash
set -e
echo "=== Adding Redis Stream Block to Nginx ==="
# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
# Get script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
echo -e "${YELLOW}Step 1: Backing up nginx.conf${NC}"
sudo cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.backup-$(date +%Y%m%d-%H%M%S)
echo -e "${YELLOW}Step 2: Adding stream block from nginx-stream.conf${NC}"
# Remove the comment line and add to nginx.conf
grep -v "^# Add this to" "$SCRIPT_DIR/nginx-stream.conf" | sudo tee -a /etc/nginx/nginx.conf > /dev/null
echo -e "${YELLOW}Step 3: Testing nginx configuration${NC}"
sudo nginx -t
echo -e "${YELLOW}Step 4: Reloading nginx${NC}"
sudo systemctl reload nginx
echo -e "${YELLOW}Step 5: Verifying port 9551${NC}"
ss -lntp | grep 9551 || echo "Port not yet visible (may need a moment)"
echo -e "${GREEN}Redis TLS stream configured successfully!${NC}"
echo -e "Test with: redis-cli -h redis.arpansahu.space -p 9551 --tls --insecure -a PASSWORD ping"
Run the script:
chmod +x add-nginx-stream.sh
sudo bash add-nginx-stream.sh
Expected output:
=== Adding Redis Stream Block to Nginx ===
Step 1: Backing up nginx.conf
Step 2: Adding stream block from nginx-stream.conf
Step 3: Testing nginx configuration
nginx: configuration file /etc/nginx/nginx.conf test is successful
Step 4: Reloading nginx
Step 5: Verifying port 9551
LISTEN 0 511 0.0.0.0:9551 0.0.0.0:*
Redis TLS stream configured successfully!
# 1. Backup nginx.conf
sudo cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.backup
# 2. Edit nginx.conf
sudo nano /etc/nginx/nginx.conf
# 3. Add the stream block from nginx-stream.conf at the END of the file
# (outside the http block, at the same level)
# 4. Test configuration
sudo nginx -t
# 5. Reload nginx
sudo systemctl reload nginx
# 6. Verify port is listening
sudo ss -lntp | grep 9551
After installation and Nginx configuration, test Redis connectivity using multiple methods:
Test Redis directly inside the container:
# Simple ping test
docker exec redis-external redis-cli -a ${REDIS_PASSWORD} ping
# Set and get a value
docker exec redis-external redis-cli -a ${REDIS_PASSWORD} SET mykey "Hello Redis"
docker exec redis-external redis-cli -a ${REDIS_PASSWORD} GET mykey
Expected output:
PONG
OK
"Hello Redis"
Test Redis from the server itself using redis-cli:
# Install redis-tools if not already installed
sudo apt install -y redis-tools
# Test connection
redis-cli -h 127.0.0.1 -p ${REDIS_PORT} -a ${REDIS_PASSWORD} ping
# Set and get a value
redis-cli -h 127.0.0.1 -p ${REDIS_PORT} -a ${REDIS_PASSWORD} SET testkey "Hello from Server"
redis-cli -h 127.0.0.1 -p ${REDIS_PORT} -a ${REDIS_PASSWORD} GET testkey
Expected output:
PONG
OK
"Hello from Server"
Test Redis from your local machine using TLS through Nginx:
# Install redis if not already installed
brew install redis
# Test connection via TLS (through nginx stream on port 9551)
redis-cli -h redis.arpansahu.space -p 9551 --tls --insecure -a ${REDIS_PASSWORD} ping
# Set and get a value
redis-cli -h redis.arpansahu.space -p 9551 --tls --insecure -a ${REDIS_PASSWORD} SET mackey "Hello from Mac"
redis-cli -h redis.arpansahu.space -p 9551 --tls --insecure -a ${REDIS_PASSWORD} GET mackey
Expected output:
PONG
OK
"Hello from Mac"
Note: The `--insecure` flag skips certificate verification. For production, you should verify certificates properly.
After successful installation, your Redis setup will have:
- Container: redis-external
- Local address: 127.0.0.1:${REDIS_PORT} (localhost only)
- TLS address: redis.arpansahu.space:9551 (accessible externally)
- Password: ${REDIS_PASSWORD} (from your .env file)
- Data directory: ~/redis/data (if you add volumes)
Install the Redis Python client:
pip install redis
Local connection (from server):
import redis
# Local connection
r = redis.Redis(
host='127.0.0.1',
port=${REDIS_PORT},
password='${REDIS_PASSWORD}',
decode_responses=True
)
# Test connection
r.set('test', 'hello')
print(r.get('test')) # Output: hello
TLS connection (from anywhere):
import redis
# TLS connection
r = redis.Redis(
host='redis.arpansahu.space',
port=9551,
password='${REDIS_PASSWORD}',
ssl=True,
ssl_cert_reqs='required',
decode_responses=True
)
# Test connection
r.set('test', 'hello')
print(r.get('test')) # Output: hello
⚠️ Required for external access (from outside your home network)
If you want to access Redis from outside your local network (e.g., from mobile data, other locations), you need to configure port forwarding on your router.
Steps for Airtel Router:
1. Open http://192.168.1.1 and enter admin credentials
2. Navigate to Port Forwarding: NAT → Port Forwarding tab
3. Click "Add new rule"
4. Configure the port forwarding rule: Protocol: TCP (or TCP/UDP)
5. Activate the rule
6. Verify port forwarding:
# From external network (mobile data or different location)
redis-cli -h redis.arpansahu.space -p 9551 --tls -a your_password PING
Note: Port forwarding is NOT required if you only access Redis from devices on the same local network (192.168.1.x).
Add Redis as Django cache backend in `settings.py`:
CACHES = {
'default': {
'BACKEND': 'django_redis.cache.RedisCache',
'LOCATION': 'rediss://redis.arpansahu.space:9551/0',
'OPTIONS': {
'CLIENT_CLASS': 'django_redis.client.DefaultClient',
'PASSWORD': '${REDIS_PASSWORD}',
}
}
}
Install django-redis:
pip install django-redis
Use in your Django views:
from django.core.cache import cache
# Set a value
cache.set('my_key', 'my_value', timeout=300)
# Get a value
value = cache.get('my_key')
If Redis container is not running properly:
# Check container logs
docker logs redis-external
# Restart container
docker restart redis-external
# Remove and reinstall
docker stop redis-external
docker rm redis-external
./install.sh
If you cannot connect to Redis:
For local connection:
# Check if container is running
docker ps | grep redis-external
# Check if port is listening
sudo ss -lntp | grep ${REDIS_PORT}
# Test connection
redis-cli -h 127.0.0.1 -p ${REDIS_PORT} -a ${REDIS_PASSWORD} ping
For TLS connection:
# Check if nginx is listening on port 9551
sudo ss -lntp | grep 9551
# Check nginx stream configuration
sudo nginx -T | grep -A 20 "stream {"
# Test with redis-cli
redis-cli -h redis.arpansahu.space -p 9551 -a ${REDIS_PASSWORD} --tls --insecure ping
If Nginx fails to start or reload:
# Test nginx configuration
sudo nginx -t
# Check nginx error logs
sudo tail -f /var/log/nginx/error.log
# Verify stream block exists
sudo grep -A 20 "stream {" /etc/nginx/nginx.conf
# Restore from backup if needed
sudo cp /etc/nginx/nginx.conf.backup-YYYYMMDD-HHMMSS /etc/nginx/nginx.conf
sudo nginx -t
sudo systemctl reload nginx
docker logs -f redis-external
If you have persistence enabled (with volumes):
# Create backup
tar -czf redis-backup-$(date +%Y%m%d).tar.gz ~/redis/data
# List backups
ls -lh redis-backup-*.tar.gz
To update to the latest Redis version:
# Pull latest Redis image
docker pull redis:7
# Stop and remove old container
docker stop redis-external
docker rm redis-external
# Run installation again
./install.sh
- Never commit your `.env` file to version control
- .env.example
- .env (create from .env.example)
- install.sh
- nginx-stream.conf
- add-nginx-stream.sh
- test_redis_localhost.py - Run on server
- test_redis_domain_tls.py - Run from Mac
# Install Redis
./install.sh
# Configure Nginx stream
sudo bash add-nginx-stream.sh
# Test connections
python3 test_redis_localhost.py # On server
python3 test_redis_domain_tls.py # From Mac
# Test with redis-cli (manual)
redis-cli -h 127.0.0.1 -p ${REDIS_PORT} -a ${REDIS_PASSWORD} ping
redis-cli -h redis.arpansahu.space -p 9551 --tls --insecure -a ${REDIS_PASSWORD} ping
# View logs
docker logs -f redis-external
# Restart container
docker restart redis-external
# Check nginx stream
sudo nginx -T | grep -A 20 "stream {"
[Your Application]
        ↓
[redis.arpansahu.space:9551]  ← TLS encrypted
        ↓
[Nginx Stream Proxy]          ← SSL termination
        ↓
[127.0.0.1:6380]              ← Redis Container (localhost only)
Security layers:
1. Redis only accessible on localhost
2. Nginx provides TLS encryption for external access
3. Password authentication required for all connections
# Redis with TLS (rediss:// scheme)
REDIS_CLOUD_URL=rediss://:your_redis_password@redis.arpansahu.space:9551
import ssl
import os
REDIS_CLOUD_URL = os.getenv('REDIS_CLOUD_URL')
# Django Cache with Redis TLS
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': REDIS_CLOUD_URL,
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient',
            'CONNECTION_POOL_KWARGS': {
                'ssl_cert_reqs': ssl.CERT_REQUIRED  # Verify Let's Encrypt cert
            }
        }
    }
}
# Celery broker and result backend
CELERY_BROKER_URL = REDIS_CLOUD_URL
CELERY_RESULT_BACKEND = REDIS_CLOUD_URL
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'UTC'
# Celery SSL Configuration
CELERY_REDIS_BACKEND_USE_SSL = {
    'ssl_cert_reqs': ssl.CERT_REQUIRED  # Verify SSL certificates
}
CELERY_BROKER_USE_SSL = {
    'ssl_cert_reqs': ssl.CERT_REQUIRED  # Verify SSL certificates
}
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'your_project.settings')
app = Celery('your_project')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
@app.task(bind=True)
def debug_task(self):
    print(f'Request: {self.request!r}')
django-redis>=5.2.0
redis>=4.5.0
celery>=5.2.0
# Test Django cache
python manage.py shell
>>> from django.core.cache import cache
>>> cache.set('test_key', 'test_value', 300)
>>> cache.get('test_key')
'test_value'
# Expected Celery worker logs
[INFO/MainProcess] Connected to rediss://:**@redis.arpansahu.space:9551//
[INFO/MainProcess] celery@your_project ready.
| Issue | Cause | Solution |
|---|---|---|
| `[SSL: CERTIFICATE_VERIFY_FAILED]` | Wrong SSL verification setting | Use `ssl.CERT_REQUIRED`; Let's Encrypt certs are trusted |
| Connection timeout | Wrong port or host | Use port 9551, ensure nginx stream proxy is running |
| Authentication failed | Wrong password | Check REDIS_CLOUD_URL password matches Redis setup |
| Connection refused | Redis not running | Check: `docker ps \| grep redis` |
Important: Use `ssl.CERT_REQUIRED` (secure) instead of `ssl.CERT_NONE` (insecure). Let's Encrypt certificates at `/etc/nginx/ssl/arpansahu.space/` are automatically trusted by Python's SSL library, so there is no need to disable certificate verification.
# ✅ CORRECT - Secure with certificate verification
'ssl_cert_reqs': ssl.CERT_REQUIRED

# ❌ WRONG - Insecure, don't use in production
'ssl_cert_reqs': ssl.CERT_NONE
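For reference, Python's default SSL context already behaves like `ssl.CERT_REQUIRED`: it verifies certificates against the system CA store (which trusts Let's Encrypt) and checks hostnames. A quick stdlib check, no Redis required:

```python
import ssl

# The default context verifies peer certificates and checks hostnames,
# matching the ssl.CERT_REQUIRED setting recommended above.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```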
MinIO is a high-performance, S3-compatible object storage solution perfect for storing static files, media uploads, backups, and any blob data.
cd "AWS Deployment/Minio"
./install.sh && ./add-nginx-config.sh
This will:
- Load environment variables from `.env`
- Create data directory
- Start MinIO container
- Configure nginx for both Console and API
- Reload nginx
`minio.arpansahu.space` → 192.168.1.200 (Console)
`minioapi.arpansahu.space` → 192.168.1.200 (API)
- SSL certificates at `/etc/letsencrypt/live/arpansahu.space/`
- `.env` file (see Configuration)

Create `.env` from `.env.example`:
cp .env.example .env
Contents:
# MinIO Root Credentials
MINIO_ROOT_USER=arpansahu
MINIO_ROOT_PASSWORD=Gandu302@minio
# Port Configuration
MINIO_PORT=9000 # S3 API port (localhost only)
MINIO_CONSOLE_PORT=9002 # Console web UI port (localhost only)
# AWS/Django Access Keys (create these in MinIO console after installation)
AWS_ACCESS_KEY_ID=django_user
AWS_SECRET_ACCESS_KEY=Gandu302@djangominio
AWS_STORAGE_BUCKET_NAME=arpansahu-one-bucket
Note: The AWS credentials are for application access. Create these access keys via MinIO Console after installation.
./install.sh
This script:
1. Loads variables from `.env`
2. Creates `~/minio/data` directory
3. Removes old container (if exists)
4. Starts new MinIO container
5. Exposes Console on port 9002 and API on port 9000 (localhost only)
# Load environment variables
source .env
# Create data directory
mkdir -p ~/minio/data
# Run MinIO
docker run -d \
--name minio \
--restart unless-stopped \
-p 127.0.0.1:${MINIO_PORT}:9000 \
-p 127.0.0.1:${MINIO_CONSOLE_PORT}:9001 \
-e "MINIO_ROOT_USER=${MINIO_ROOT_USER}" \
-e "MINIO_ROOT_PASSWORD=${MINIO_ROOT_PASSWORD}" \
-v ~/minio/data:/data \
quay.io/minio/minio:latest \
server /data --console-address ":9001"
./add-nginx-config.sh
This configures:
- Console (Web UI): `minio.arpansahu.space` → localhost:9002
- S3 API: `minioapi.arpansahu.space` → localhost:9000
- `client_max_body_size 500M` for large file uploads
To access MinIO from outside your local network, log in to your router's admin panel:
- Username: `admin`
- Password: `Gandmara302@`

Configure Port Forwarding:
Navigate to: Advanced Settings → NAT → Virtual Server
Add HTTPS Rule (443):
| Field | Value |
|-------|-------|
| Service Name | MinIO HTTPS |
| External Port | 443 |
| Internal Port | 443 |
| Internal IP | 192.168.1.200 |
| Protocol | TCP |
| Status | Enabled |
Save and Apply
Note: Both `minio.arpansahu.space` and `minioapi.arpansahu.space` use port 443 (HTTPS), so only one port forwarding rule is needed.
| Component | URL | Port (localhost) |
|---|---|---|
| Console (Web UI) | https://minio.arpansahu.space | 9002 |
| S3 API | https://minioapi.arpansahu.space | 9000 |
Root Credentials:
- Username: `arpansahu`
- Password: `Gandu302@minio`
Visit https://minio.arpansahu.space and login with root credentials.
For Django applications, you typically need separate buckets for different types of content:
Default bucket: `arpansahu-one-bucket`
Create separate buckets for different access patterns:
A. Static Files Bucket (Public Read)
- Name: `arpansahu-static`
- Purpose: CSS, JS, fonts, images
- Policy: Public (download-only)
- Why: Static files need to be publicly accessible by browsers

B. Media Files Bucket (Private)
- Name: `arpansahu-media`
- Purpose: User uploads, documents, avatars
- Policy: Private (access via Django only)
- Why: User content should be access-controlled

C. Backups Bucket (Private)
- Name: `arpansahu-backups`
- Purpose: Database backups, snapshots
- Policy: Private (admin access only)
- Why: Sensitive data, no public access
| Policy Type | Description | Use Case |
|---|---|---|
| Private | No anonymous access | Media uploads, user files, backups |
| Public | Full anonymous read/write | ❌ Never use - security risk |
| Download | Anonymous read-only | ✅ Static files (CSS, JS, images) |
| Upload | Anonymous write-only | Rare use case |
| Custom | JSON policy rules | Fine-grained control |
# Private (default) - no anonymous access
mc anonymous set none myminio/arpansahu-media
# Public download-only - for static files
mc anonymous set download myminio/arpansahu-static
# Check current policy
mc anonymous get myminio/arpansahu-static
For fine-grained control, create a custom policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": ["*"]},
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::arpansahu-static/*"]
    }
  ]
}
Apply via Console: Buckets → Select bucket → Access Policy → Add Custom Policy
For easier policy management, use the included scripts:
1. Create Policy File
Copy the example and customize for your bucket:
# Copy template
cp minio_bucket_policy.json.example minio_bucket_policy.json
# Edit to match your bucket name and paths
nano minio_bucket_policy.json
Example policy for multi-project setup:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": ["*"]},
      "Action": ["s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::arpansahu-one-bucket/portfolio/*/static/*",
        "arn:aws:s3:::arpansahu-one-bucket/portfolio/*/media/*"
      ]
    }
  ]
}
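If you host several buckets, the same policy can be generated programmatically instead of edited by hand. A hypothetical helper (not part of the repo's scripts) producing the policy above for any bucket name:

```python
import json

def portfolio_policy(bucket: str) -> dict:
    """Build the public-read policy for a bucket's portfolio static/media paths.
    Hypothetical helper; the repo ships a static JSON template instead."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": ["*"]},
            "Action": ["s3:GetObject"],
            "Resource": [
                f"arn:aws:s3:::{bucket}/portfolio/*/static/*",
                f"arn:aws:s3:::{bucket}/portfolio/*/media/*",
            ],
        }],
    }

print(json.dumps(portfolio_policy("arpansahu-one-bucket"), indent=2))
```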
2. Update .env File

Ensure your `.env` contains:
MINIO_ROOT_USER=arpansahu
MINIO_ROOT_PASSWORD=your_password_here
AWS_STORAGE_BUCKET_NAME=arpansahu-one-bucket
MINIO_ENDPOINT=https://minioapi.arpansahu.space
POLICY_FILE=minio_bucket_policy.json
3. Apply Policy Using Script
# Option 1: Interactive script (recommended)
chmod +x apply_minio_policy.sh
./apply_minio_policy.sh
# Option 2: Python script
pip install boto3 python-dotenv
python3 apply_policy.py
Available Methods:
| Method | Tool Required | Best For |
|---|---|---|
| MinIO Client (mc) | `brew install minio/stable/mc` | Quick setup, simple policies |
| AWS CLI | `brew install awscli` | AWS compatibility, automation |
| Python (boto3) | `pip install boto3` | Complex policies, validation |
Verify Policy Applied:
# Using mc
mc anonymous get myminio/arpansahu-one-bucket
# Using AWS CLI
aws --endpoint-url=https://minioapi.arpansahu.space \
s3api get-bucket-policy \
--bucket arpansahu-one-bucket
# Test public access
curl https://minioapi.arpansahu.space/arpansahu-one-bucket/portfolio/django_starter/static/test.txt
Security Notes:
- ✅ Scripts use environment variables (safe to commit)
- ❌ Never commit `.env` or `minio_bucket_policy.json` with real credentials
- ✅ `.gitignore` includes these files by default
- ✅ Use `.example` files as templates
For one bucket with different paths having different access:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": ["*"]},
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::arpansahu-one-bucket/static/*"]
    }
  ]
}
This policy makes:
- `static/*` → Public read (anyone can access)
- `media/*` → Private (requires authentication)
- `protected/*` → Private (requires authentication + ownership check in Django)
- Everything else → Private
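The access decision boils down to a prefix check, which can be mirrored in plain Python when unit-testing upload paths. An illustrative helper, not part of the repo:

```python
def anonymous_readable(key: str) -> bool:
    """Mirror the bucket policy above: only objects under static/ are
    anonymously readable; everything else requires authentication."""
    return key.startswith("static/")

print(anonymous_readable("static/css/site.css"))    # True
print(anonymous_readable("media/avatars/me.png"))   # False
print(anonymous_readable("protected/invoice.pdf"))  # False
```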
Folder Structure Option 1: Simple
arpansahu-one-bucket/
├── static/          # PUBLIC (anonymous read via bucket policy)
│   ├── css/
│   ├── js/
│   └── images/
├── media/           # PRIVATE (presigned URLs for authenticated users)
│   ├── avatars/
│   └── uploads/
└── protected/       # PROTECTED (presigned URLs + ownership check)
    ├── invoices/
    └── private-docs/
Folder Structure Option 2: Multi-Project with Portfolio Prefix ⭐ Recommended
For hosting multiple Django projects in one bucket:
your-bucket-name/
└── portfolio/               # PUBLIC (anonymous read via bucket policy)
    ├── django_starter/
    │   └── static/
    │       ├── css/
    │       ├── js/
    │       └── admin/
    ├── borcelle_crm/
    │   └── static/
    ├── chew_and_cheer/
    │   └── static/
    └── arpansahu_dot_me/
        └── static/
Set public access for portfolio prefix using mc:
# Install MinIO client first (if not installed)
# See "Install MinIO Client" section below
# Set up alias (one-time)
mc alias set myminio http://localhost:9000 your_minio_root_user your_minio_root_password
# Set public read access for portfolio/* prefix
mc anonymous set public myminio/your-bucket-name/portfolio/
# Verify the policy
mc anonymous get myminio/your-bucket-name/portfolio/
# Should return: Access permission for `your-bucket-name/portfolio/` is `public`
Why use portfolio/ prefix?
- ✅ All projects share one bucket (cost-effective)
- ✅ Clear organization and separation
- ✅ Single bucket policy for all static files
- ✅ Easy to add new projects
- ✅ Consistent URL structure: `minioapi.arpansahu.space/your-bucket-name/portfolio/project-name/static/...`
Create an access key in the MinIO Console for application use:
- Access Key: `django_user`
- Secret Key: `Gandu302@djangominio`
- Policy: `readwrite` (or custom policy)
Security Note: Never use root credentials in applications. Always create separate access keys with minimal required permissions.
pip install django-storages boto3
# settings.py
# MinIO/S3 Configuration
AWS_S3_ENDPOINT_URL = "https://minioapi.arpansahu.space"
AWS_S3_VERIFY = True
AWS_ACCESS_KEY_ID = "django_user"
AWS_SECRET_ACCESS_KEY = "Gandu302@djangominio"
AWS_STORAGE_BUCKET_NAME = "arpansahu-one-bucket" # Single bucket for everything
AWS_S3_ADDRESSING_STYLE = "path"
AWS_DEFAULT_ACL = None
AWS_S3_OBJECT_PARAMETERS = {
    'CacheControl': 'max-age=86400',
}

# Storage Backends (Django 4.2+)
STORAGES = {
    "default": {
        "BACKEND": "storages.backends.s3boto3.S3Boto3Storage",
    },
    "staticfiles": {
        "BACKEND": "storages.backends.s3boto3.S3StaticStorage",
    },
}
Note: With a single bucket, set the bucket policy to Private. Django will handle access control via presigned URLs.
# settings.py
# MinIO/S3 Configuration
AWS_S3_ENDPOINT_URL = "https://minioapi.arpansahu.space"
AWS_S3_VERIFY = True
AWS_ACCESS_KEY_ID = "django_user"
AWS_SECRET_ACCESS_KEY = "Gandu302@djangominio"
AWS_S3_ADDRESSING_STYLE = "path"
AWS_DEFAULT_ACL = None
# Static Files Bucket (Public Read)
AWS_STATIC_BUCKET_NAME = "arpansahu-static"
AWS_S3_CUSTOM_DOMAIN = f"{AWS_S3_ENDPOINT_URL.replace('https://', '')}/{AWS_STATIC_BUCKET_NAME}"
# Media Files Bucket (Private)
AWS_MEDIA_BUCKET_NAME = "arpansahu-media"
# Custom Storage Classes
from storages.backends.s3boto3 import S3Boto3Storage
class StaticStorage(S3Boto3Storage):
    bucket_name = AWS_STATIC_BUCKET_NAME
    default_acl = 'public-read'  # Static files are publicly accessible
    querystring_auth = False     # No signed URLs needed

class MediaStorage(S3Boto3Storage):
    bucket_name = AWS_MEDIA_BUCKET_NAME
    default_acl = 'private'      # Media files require authentication
    file_overwrite = False       # Don't overwrite files with same name
    querystring_auth = True      # Use presigned URLs for temporary access
    querystring_expire = 3600    # URLs expire in 1 hour
# Storage Backends
STORAGES = {
    "default": {
        "BACKEND": "path.to.MediaStorage",   # User uploads
    },
    "staticfiles": {
        "BACKEND": "path.to.StaticStorage",  # CSS, JS, images
    },
}
Bucket Policies for Option 2:
- `arpansahu-static`: Set to Download (public read-only)
- `arpansahu-media`: Set to Private (Django controls access)
# settings.py
# MinIO/S3 Configuration
AWS_S3_ENDPOINT_URL = "https://minioapi.arpansahu.space"
AWS_S3_VERIFY = True
AWS_ACCESS_KEY_ID = "django_user"
AWS_SECRET_ACCESS_KEY = "Gandu302@djangominio"
AWS_STORAGE_BUCKET_NAME = "arpansahu-one-bucket"
AWS_S3_ADDRESSING_STYLE = "path"
AWS_DEFAULT_ACL = None
# Custom Storage Classes for Different Paths
from storages.backends.s3boto3 import S3Boto3Storage
class StaticStorage(S3Boto3Storage):
    location = 'static'        # Files stored in /static/ prefix
    default_acl = 'public-read'
    querystring_auth = False   # No signed URLs (public access via bucket policy)

class MediaStorage(S3Boto3Storage):
    location = 'media'         # Files stored in /media/ prefix
    default_acl = 'private'
    file_overwrite = False
    querystring_auth = True    # Presigned URLs for authenticated users
    querystring_expire = 3600  # 1 hour

class ProtectedStorage(S3Boto3Storage):
    location = 'protected'     # Files stored in /protected/ prefix
    default_acl = 'private'
    file_overwrite = False
    querystring_auth = True
    querystring_expire = 300   # 5 minutes (shorter for security)

# Storage Backends
STORAGES = {
    "default": {
        "BACKEND": "path.to.MediaStorage",
    },
    "staticfiles": {
        "BACKEND": "path.to.StaticStorage",
    },
}
Bucket Policy for Option 3: Set a custom policy (see step 3 above) to make `static/*` public and the rest private.
Django Model Example with Protected Storage:
from django.contrib.auth.models import User
from django.db import models
from django.http import HttpResponseForbidden
from django.shortcuts import redirect

class Invoice(models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    pdf = models.FileField(
        upload_to='invoices/',
        storage=ProtectedStorage(),  # Protected path (class defined in settings above)
    )

def download_invoice(request, invoice_id):
    invoice = Invoice.objects.get(id=invoice_id)
    if invoice.user != request.user:
        return HttpResponseForbidden()
    # Generate presigned URL only for owner
    storage = ProtectedStorage()
    url = storage.url(invoice.pdf.name)
    return redirect(url)
# Single Bucket Setup
AWS_S3_ENDPOINT_URL="https://minioapi.arpansahu.space"
AWS_S3_VERIFY=True
AWS_ACCESS_KEY_ID="django_user"
AWS_SECRET_ACCESS_KEY="Gandu302@djangominio"
AWS_STORAGE_BUCKET_NAME="arpansahu-one-bucket"
AWS_S3_ADDRESSING_STYLE="path"
# Multiple Buckets Setup (add these)
AWS_STATIC_BUCKET_NAME="arpansahu-static"
AWS_MEDIA_BUCKET_NAME="arpansahu-media"
python manage.py collectstatic --noinput
from django.db import models
class Document(models.Model):
    title = models.CharField(max_length=200)
    # Uploads to media bucket (private)
    file = models.FileField(upload_to='documents/')
    created_at = models.DateTimeField(auto_now_add=True)

class UserProfile(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    # Uploads to media bucket with presigned URL access
    avatar = models.ImageField(upload_to='avatars/')
from django.core.files.storage import default_storage
# Generate temporary URL for private file
file_url = default_storage.url('documents/private_file.pdf')
# URL is valid for 1 hour (querystring_expire setting)
import boto3
from botocore.client import Config
# Initialize S3 client
s3 = boto3.client(
    's3',
    endpoint_url='https://minioapi.arpansahu.space',
    aws_access_key_id='django_user',
    aws_secret_access_key='Gandu302@djangominio',
    config=Config(signature_version='s3v4'),
    verify=True
)

# Upload file
s3.upload_file('local-file.txt', 'arpansahu-one-bucket', 'remote-file.txt')

# Upload with metadata
s3.upload_file(
    'image.jpg',
    'arpansahu-one-bucket',
    'images/profile.jpg',
    ExtraArgs={'ContentType': 'image/jpeg', 'ACL': 'public-read'}
)

# Download file
s3.download_file('arpansahu-one-bucket', 'remote-file.txt', 'downloaded.txt')

# List objects
response = s3.list_objects_v2(Bucket='arpansahu-one-bucket', Prefix='documents/')
for obj in response.get('Contents', []):
    print(f"{obj['Key']} - {obj['Size']} bytes")

# Delete object
s3.delete_object(Bucket='arpansahu-one-bucket', Key='remote-file.txt')

# Generate presigned URL (temporary access)
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'arpansahu-one-bucket', 'Key': 'private-file.pdf'},
    ExpiresIn=3600  # 1 hour
)
print(f"Temporary URL: {url}")
print(f"Temporary URL: {url}")
import os
def upload_directory(local_dir, bucket, s3_prefix=''):
    for root, dirs, files in os.walk(local_dir):
        for file in files:
            local_path = os.path.join(root, file)
            relative_path = os.path.relpath(local_path, local_dir)
            s3_path = os.path.join(s3_prefix, relative_path).replace('\\', '/')
            print(f"Uploading {local_path} to {s3_path}")
            s3.upload_file(local_path, bucket, s3_path)
# Usage
upload_directory('/path/to/local/folder', 'arpansahu-one-bucket', 'backups/')
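The key-mapping logic inside `upload_directory()` can be pulled out and checked without touching S3. An illustrative extraction (not part of the repo):

```python
import os

def s3_key_for(local_dir: str, local_path: str, s3_prefix: str = "") -> str:
    """Compute the object key upload_directory() would use, without uploading."""
    relative = os.path.relpath(local_path, local_dir)
    return os.path.join(s3_prefix, relative).replace("\\", "/")

print(s3_key_for("/backup/src", "/backup/src/db/dump.sql", "backups/"))
# backups/db/dump.sql
```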
MinIO Client provides a modern alternative to UNIX commands like ls, cat, cp, mirror, diff.
# Linux/Mac
wget https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc
sudo mv mc /usr/local/bin/
# Mac (Homebrew)
brew install minio/stable/mc
# Add MinIO server as alias
mc alias set myminio https://minioapi.arpansahu.space django_user Gandu302@djangominio
# Test connection
mc admin info myminio
# List buckets
mc ls myminio
# List objects in bucket
mc ls myminio/arpansahu-one-bucket
# Copy file to bucket
mc cp local-file.txt myminio/arpansahu-one-bucket/
# Copy entire directory
mc cp --recursive local-folder/ myminio/arpansahu-one-bucket/folder/
# Mirror directory (sync)
mc mirror local-folder/ myminio/arpansahu-one-bucket/folder/
# Download file
mc cp myminio/arpansahu-one-bucket/file.txt ./downloaded.txt
# Remove file
mc rm myminio/arpansahu-one-bucket/file.txt
# Remove directory recursively
mc rm --recursive --force myminio/arpansahu-one-bucket/folder/
# Get file stats
mc stat myminio/arpansahu-one-bucket/file.txt
# Watch for events
mc watch myminio/arpansahu-one-bucket
# Create bucket
mc mb myminio/new-bucket
# Remove bucket
mc rb myminio/old-bucket
# Set bucket policy (public read)
mc anonymous set public myminio/arpansahu-one-bucket
# Set bucket policy (download only)
mc anonymous set download myminio/arpansahu-one-bucket
# Get bucket policy
mc anonymous get myminio/arpansahu-one-bucket
# View logs
docker logs -f minio
# Restart container
docker restart minio
# Stop container
docker stop minio
# Start container
docker start minio
# Remove container
docker rm -f minio
# Pull latest image
docker pull quay.io/minio/minio:latest
# Stop and remove old container
docker stop minio && docker rm minio
# Reinstall
cd "AWS Deployment/Minio"
./install.sh
# Backup all data
tar -czf minio-backup-$(date +%Y%m%d).tar.gz ~/minio/data
# Backup specific bucket (using mc)
mc mirror myminio/arpansahu-one-bucket ~/backups/bucket-backup/
# Restore from backup
mc mirror ~/backups/bucket-backup/ myminio/arpansahu-one-bucket/
# Server disk space
df -h ~/minio/data
# Bucket sizes (via mc)
mc du myminio/arpansahu-one-bucket
Symptoms: Console shows repeated errors:
WebSocket connection to 'wss://minio.arpansahu.space/ws/objectManager' failed
Cause: Missing WebSocket upgrade headers in nginx configuration.
Fix:
cd "AWS Deployment/Minio"
sudo ./fix-websocket.sh
This script adds required headers:
- `proxy_http_version 1.1`
- `proxy_set_header Upgrade $http_upgrade`
- `proxy_set_header Connection "upgrade"`
After running, hard refresh browser (Ctrl+Shift+R or Cmd+Shift+R).
Manual Verification:
grep -A 5 "server_name minio.arpansahu.space" /etc/nginx/sites-enabled/services | grep Upgrade
Should show:
proxy_set_header Upgrade $http_upgrade;
# Check if container is running
docker ps | grep minio
# Check logs
docker logs minio
# Restart container
docker restart minio
# Test nginx configuration
sudo nginx -t
# Reload nginx
sudo systemctl reload nginx
# Check DNS resolution
nslookup minio.arpansahu.space
# Check nginx client_max_body_size
sudo grep -r "client_max_body_size" /etc/nginx/
# Should be 500M in MinIO configs
# If not, update and reload:
sudo vi /etc/nginx/sites-enabled/minio-console
sudo vi /etc/nginx/sites-enabled/minio-api
sudo systemctl reload nginx
# Check disk space
df -h ~/minio/data
# Verify ports are listening
sudo ss -lntp | grep -E '9000|9002'
# Test local access
curl http://localhost:9002 # Console
curl http://localhost:9000/minio/health/live # API health
# Check firewall
sudo ufw status
# Test from Mac/external
curl -I https://minio.arpansahu.space
curl -I https://minioapi.arpansahu.space
# Verify certificates exist
ls -la /etc/letsencrypt/live/arpansahu.space/
# Test SSL
openssl s_client -connect minio.arpansahu.space:443 -servername minio.arpansahu.space
# Check nginx SSL configuration
sudo nginx -T | grep -A 10 "minio.arpansahu.space"
# Test boto3 connection
import boto3
s3 = boto3.client(
    's3',
    endpoint_url='https://minioapi.arpansahu.space',
    aws_access_key_id='django_user',
    aws_secret_access_key='Gandu302@djangominio'
)
print(s3.list_buckets())
mc ls myminio
✅ DO:
- Use Private policy for user uploads, sensitive data, backups
- Use Download (public read) policy ONLY for static assets (CSS, JS, images)
- Create separate buckets for different access levels
- Use presigned URLs for temporary access to private files
- Enable bucket versioning for important data

❌ DON'T:
- Never use Public (full read/write) policy - major security risk
- Don't store sensitive data in public buckets
- Don't share root credentials with applications
- Don't set static and media files in same bucket with public policy
| Content Type | Bucket Policy | Why |
|---|---|---|
| Static Files (CSS, JS, fonts, images) | Download (public read) | Need to be loaded by browsers without authentication |
| Media Uploads (user avatars, documents) | Private | Access controlled by Django, use presigned URLs |
| Database Backups | Private | Sensitive data, admin-only access |
| Public Assets (blog images, public downloads) | Download (public read) | Intentionally public content |
| Form Uploads (before processing) | Private | Temporary storage, should be access-controlled |
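The table above can be expressed as a simple lookup that defaults to the safest choice. The category names here are assumptions for illustration:

```python
# Mapping of content categories (assumed names) to recommended bucket policy.
RECOMMENDED_POLICY = {
    "static": "download",        # CSS, JS, fonts, images
    "media_uploads": "private",  # avatars, documents
    "db_backups": "private",
    "public_assets": "download", # blog images, public downloads
    "form_uploads": "private",   # temporary storage before processing
}

def policy_for(category: str) -> str:
    # Default to private: the safest choice for anything unlisted
    return RECOMMENDED_POLICY.get(category, "private")

print(policy_for("static"))      # download
print(policy_for("db_backups"))  # private
print(policy_for("unknown"))     # private
```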
Create access keys with minimal required permissions:
For Django Application:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::arpansahu-media/*",
        "arn:aws:s3:::arpansahu-static/*",
        "arn:aws:s3:::arpansahu-media",
        "arn:aws:s3:::arpansahu-static"
      ]
    }
  ]
}
For Read-Only Access:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::arpansahu-media/*",
        "arn:aws:s3:::arpansahu-media"
      ]
    }
  ]
}
This example shows a complete Django setup with MinIO + Redis (TLS) + PostgreSQL + Celery, all using domain names and secure SSL/TLS connections.
# Django
DEBUG=False
SECRET_KEY="your-secret-key"
ALLOWED_HOSTS=django-starter.arpansahu.space
# PostgreSQL (TLS stream proxy on port 9552)
DATABASE_URL=postgresql://postgres:your_postgres_password@postgres.arpansahu.space:9552/your_database_name
# Redis (TLS stream proxy on port 9551)
REDIS_CLOUD_URL=rediss://:your_redis_password@redis.arpansahu.space:9551
# MinIO S3 (API endpoint with HTTPS)
AWS_S3_ENDPOINT_URL=https://minioapi.arpansahu.space
AWS_S3_VERIFY=True
AWS_ACCESS_KEY_ID=your_minio_access_key
AWS_SECRET_ACCESS_KEY=your_minio_secret_key
AWS_STORAGE_BUCKET_NAME=your-bucket-name
AWS_S3_ADDRESSING_STYLE=path
import os
import ssl
from pathlib import Path
# Load environment variables
DEBUG = os.getenv('DEBUG', 'False') == 'True'
SECRET_KEY = os.getenv('SECRET_KEY')
ALLOWED_HOSTS = os.getenv('ALLOWED_HOSTS', '').split(',')
# Database Configuration
import dj_database_url
DATABASES = {
    'default': dj_database_url.config(
        default=os.getenv('DATABASE_URL'),
        conn_max_age=600,
        conn_health_checks=True,
    )
}

# Redis Configuration
REDIS_CLOUD_URL = os.getenv('REDIS_CLOUD_URL')

# Cache with Redis SSL
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': REDIS_CLOUD_URL,
        'OPTIONS': {
            'CLIENT_CLASS': 'django_redis.client.DefaultClient',
            'CONNECTION_POOL_KWARGS': {
                'ssl_cert_reqs': ssl.CERT_REQUIRED  # Verify Let's Encrypt cert
            }
        }
    }
}
# Celery Configuration with Redis SSL
CELERY_BROKER_URL = REDIS_CLOUD_URL
CELERY_RESULT_BACKEND = REDIS_CLOUD_URL
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'UTC'
# Celery SSL Configuration (Let's Encrypt certificates are trusted)
CELERY_REDIS_BACKEND_USE_SSL = {
    'ssl_cert_reqs': ssl.CERT_REQUIRED
}
CELERY_BROKER_USE_SSL = {
    'ssl_cert_reqs': ssl.CERT_REQUIRED
}
# MinIO/S3 Configuration for Multi-Project Setup
AWS_S3_ENDPOINT_URL = os.getenv('AWS_S3_ENDPOINT_URL')
AWS_S3_VERIFY = os.getenv('AWS_S3_VERIFY', 'True') == 'True'
AWS_ACCESS_KEY_ID = os.getenv('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.getenv('AWS_SECRET_ACCESS_KEY')
AWS_STORAGE_BUCKET_NAME = os.getenv('AWS_STORAGE_BUCKET_NAME')
AWS_S3_ADDRESSING_STYLE = 'path'
AWS_DEFAULT_ACL = None
# Custom domain for static files (portfolio prefix for multi-project)
PROJECT_NAME = 'django_starter' # Change per project
AWS_LOCATION = f'portfolio/{PROJECT_NAME}' # Project-specific prefix
AWS_S3_CUSTOM_DOMAIN = f'{AWS_S3_ENDPOINT_URL.replace("https://", "")}/{AWS_STORAGE_BUCKET_NAME}'
# Static files configuration
STATIC_URL = f'https://{AWS_S3_CUSTOM_DOMAIN}/{AWS_LOCATION}/static/'
STATIC_ROOT = BASE_DIR / 'staticfiles'
# Storage backends (Django 4.2+)
STORAGES = {
    "default": {
        "BACKEND": "storages.backends.s3boto3.S3Boto3Storage",
    },
    "staticfiles": {
        "BACKEND": "storages.backends.s3boto3.S3StaticStorage",
    },
}

# Additional S3 settings
AWS_S3_OBJECT_PARAMETERS = {
    'CacheControl': 'max-age=86400',  # Cache for 1 day
}
AWS_QUERYSTRING_AUTH = False # No signed URLs for static files
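To see how these settings combine into the final static URL, here is the same string construction as plain Python (values copied from the example configuration, no Django required):

```python
# Plain-Python trace of how the settings above build STATIC_URL.
AWS_S3_ENDPOINT_URL = "https://minioapi.arpansahu.space"
AWS_STORAGE_BUCKET_NAME = "arpansahu-one-bucket"
PROJECT_NAME = "django_starter"

AWS_LOCATION = f"portfolio/{PROJECT_NAME}"  # project-specific prefix
AWS_S3_CUSTOM_DOMAIN = f"{AWS_S3_ENDPOINT_URL.replace('https://', '')}/{AWS_STORAGE_BUCKET_NAME}"
STATIC_URL = f"https://{AWS_S3_CUSTOM_DOMAIN}/{AWS_LOCATION}/static/"

print(STATIC_URL)
# https://minioapi.arpansahu.space/arpansahu-one-bucket/portfolio/django_starter/static/
```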
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'django_starter.settings')
app = Celery('django_starter')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
@app.task(bind=True)
def debug_task(self):
    print(f'Request: {self.request!r}')
Django>=4.2
psycopg2-binary
dj-database-url
django-redis
redis
celery
django-storages
boto3
# Collect static files to MinIO
python manage.py collectstatic --noinput
# Run migrations
python manage.py migrate
# Start Django (production)
gunicorn django_starter.wsgi:application --bind 0.0.0.0:8016
# Start Celery worker
celery -A django_starter worker --loglevel=info
# Start Celery beat (if needed)
celery -A django_starter beat --loglevel=info
All these services must be configured on your server:
See individual service documentation:
- PostgreSQL Setup
- Redis Setup
- MinIO Setup
# Test Redis connection
redis-cli -h redis.arpansahu.space -p 9551 -a your_redis_password --tls ping
# Should return: PONG
# Test MinIO access
curl -I https://minioapi.arpansahu.space/your-bucket-name/portfolio/your_project/static/admin/css/base.css
# Should return: HTTP/2 200
# Test static file serving
curl https://django-starter.arpansahu.space
# CSS/JS should load from minioapi.arpansahu.space
[2026-02-03 14:42:59,941: INFO/MainProcess] Connected to rediss://:**@redis.arpansahu.space:9551//
[2026-02-03 14:42:59,953: INFO/MainProcess] celery@django_starter ready.
| Issue | Cause | Solution |
|---|---|---|
| 400 Bad Request from boto3 | Using Console URL instead of API | Use `minioapi.arpansahu.space`, not `minio.arpansahu.space` |
| CSS not loading | Files not uploaded to MinIO | Run `collectstatic`, check MinIO bucket |
| Redis SSL connection failed | Certificate verification issue | Use `ssl.CERT_REQUIRED` (Let's Encrypt is trusted) |
| Celery can't connect to Redis | Wrong URL or missing TLS proxy | Use `rediss://` scheme, ensure nginx stream proxy on 9551 |
| collectstatic slow/fails | Network issues or wrong credentials | Check AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY |
After installation, verify everything works:
# 1. Check container status
docker ps | grep minio
# 2. Check local access
curl http://localhost:9002
curl http://localhost:9000/minio/health/live
# 3. Check HTTPS access
curl -I https://minio.arpansahu.space
curl -I https://minioapi.arpansahu.space
# 4. Login to console
# Visit: https://minio.arpansahu.space
# Login with: arpansahu / Gandu302@minio
# 5. Test with mc client
mc alias set test https://minioapi.arpansahu.space django_user Gandu302@djangominio
mc ls test
All checks should pass ✅
All deployment files are in:
AWS Deployment/Minio/
| File | Purpose |
|---|---|
| `.env.example` | Template for environment variables |
| `.env` | Actual credentials (not in git) |
| `install.sh` | Main installation script |
| `add-nginx-config.sh` | Adds nginx reverse proxy config |
| `fix-websocket.sh` | Fixes WebSocket connection issues |
| `nginx-console.conf` | Standalone Console nginx config |
| `nginx-api.conf` | Standalone API nginx config |
| `apply_minio_policy.sh` | Automated bucket policy application script |
| `apply_policy.py` | Python script for bucket policy management |
| `minio_bucket_policy.json.example` | Template for bucket policy |
| `minio_bucket_policy.json` | Actual bucket policy (not in git) |
| `README.md` | This documentation |
On Server:
- Data: `~/minio/data/`
- Nginx config: `/etc/nginx/sites-available/services` (merged config)
- Logs: `docker logs minio`
[README of Elasticsearch Setup]
[README of Apache Kafka Setup]
[README of RabbitMQ Setup]
Jenkins is an open-source automation server that enables developers to build, test, and deploy applications through continuous integration and continuous delivery (CI/CD). This guide provides a complete, production-ready setup with Java 21, Jenkins LTS, Nginx reverse proxy, and comprehensive credential management.
Before installing Jenkins, ensure you have:
Internet (HTTPS)
    ↓
Nginx (Port 443) - TLS Termination
    ↓
jenkins.arpansahu.space
    ↓
Jenkins (localhost:8080)
    ├── Jenkins Controller (Web UI + API)
    ├── Build Agents (local/remote)
    ├── Workspace (/var/lib/jenkins)
    └── Credentials Store
Key Principles:
- Jenkins runs on localhost only (port 8080)
- Nginx handles all TLS termination
- Credentials stored in Jenkins encrypted store
- Pipelines defined as code (Jenkinsfile)
- Docker-based builds for isolation
Advantages:
- Open-source and free
- Extensive plugin ecosystem (1800+)
- Pipeline as Code (Jenkinsfile)
- Distributed builds
- Docker integration
- GitHub/GitLab integration
- Email notifications
- Role-based access control
Use Cases:
- Automated builds on commit
- Automated testing
- Docker image building
- Deployment automation
- Scheduled jobs
- Integration with Harbor registry
- Multi-branch pipelines
Jenkins requires Java to run. We'll install OpenJDK 21 (latest LTS).
⚠️ Important: Java 17 support ends March 31, 2026. Use Java 21 for continued support.
java -version
If you see Java 17 or older, follow the upgrade steps below.
If Jenkins is already installed on Java 17:
sudo apt update
sudo apt install -y openjdk-21-jdk
sudo systemctl status jenkins
sudo systemctl stop jenkins
sudo update-alternatives --config java
Select Java 21 from the list (e.g., `/usr/lib/jvm/java-21-openjdk-amd64/bin/java`)
java -version
Should show: `openjdk version "21.0.x"`
sudo nano /etc/default/jenkins
Add or update:
JAVA_HOME="/usr/lib/jvm/java-21-openjdk-amd64"
JENKINS_JAVA_CMD="$JAVA_HOME/bin/java"
sudo systemctl start jenkins
sudo systemctl status jenkins
Verify in Jenkins UI
Dashboard → Manage Jenkins → System Information → look for `java.version` (should be 21.x)
For new installations:
sudo apt update
sudo apt install -y openjdk-21-jdk
java -version
Expected output:
openjdk version "21.0.x" 2024-xx-xx
OpenJDK Runtime Environment (build 21.0.x+x)
OpenJDK 64-Bit Server VM (build 21.0.x+x, mixed mode, sharing)
sudo nano /etc/environment
Add:
JAVA_HOME="/usr/lib/jvm/java-21-openjdk-amd64"
Apply changes:
source /etc/environment
echo $JAVA_HOME
Jenkins Long-Term Support (LTS) releases are recommended for production environments. Current LTS: 2.528.3
# Modern keyring format (recommended)
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo gpg --dearmor -o /usr/share/keyrings/jenkins-archive-keyring.gpg
# Also add legacy key for repository compatibility
gpg --keyserver keyserver.ubuntu.com --recv-keys 7198F4B714ABFC68
gpg --export 7198F4B714ABFC68 > /tmp/jenkins-key.gpg
sudo gpg --dearmor < /tmp/jenkins-key.gpg > /usr/share/keyrings/jenkins-old-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/jenkins-old-keyring.gpg] https://pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt update
# Install latest LTS version
sudo apt install -y jenkins
# Or install specific LTS version
# sudo apt install -y jenkins=2.528.3
jenkins --version
Expected: `2.528.3` or newer LTS
sudo systemctl enable jenkins
sudo systemctl start jenkins
sudo systemctl status jenkins
Expected: Active (running)
sudo ss -tulnp | grep 8080
Expected: Jenkins listening on 127.0.0.1:8080
To upgrade an existing Jenkins installation:
jenkins --version
# Or via API:
curl -s -I https://jenkins.arpansahu.space/api/json | grep X-Jenkins
apt-cache policy jenkins | head -30
Note: LTS releases use three-part versions (e.g., 2.528.3); weekly releases use two-part versions (e.g., 2.530). Stick to LTS for production.
sudo tar -czf /tmp/jenkins-backup-$(date +%Y%m%d-%H%M%S).tar.gz /var/lib/jenkins/
sudo systemctl stop jenkins
sudo apt update
sudo apt install --only-upgrade jenkins -y
# Or install specific LTS version:
# sudo apt install jenkins=2.528.3 -y
sudo systemctl start jenkins
jenkins --version
sudo systemctl status jenkins
Check Jenkins UI
https://jenkins.arpansahu.space → Manage Jenkins → About Jenkins
sudo nano /etc/nginx/sites-available/services
# Jenkins CI/CD - HTTP → HTTPS
server {
listen 80;
listen [::]:80;
server_name jenkins.arpansahu.space;
return 301 https://$host$request_uri;
}
# Jenkins CI/CD - HTTPS
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name jenkins.arpansahu.space;
ssl_certificate /etc/nginx/ssl/arpansahu.space/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/arpansahu.space/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
# Jenkins-specific timeouts
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_send_timeout 300;
location / {
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
# Required for Jenkins CLI and agent connections
proxy_http_version 1.1;
proxy_request_buffering off;
}
}
sudo nginx -t
sudo systemctl reload nginx
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
Copy this password (example: a1b2c3d4e5f6...)
Access Jenkins Web UI
Go to: https://jenkins.arpansahu.space
Enter initial admin password
Paste the password from step 1.
Install suggested plugins
Create admin user
Configure:
- Username: admin
- Password: (your strong password)
- Full name: Admin User
- Email: your-email@example.com
Click: Save and Continue
Configure Jenkins URL
Jenkins URL: https://jenkins.arpansahu.space
Click: Save and Finish
Start using Jenkins
Click: Start using Jenkins
Jenkins stores credentials securely for use in pipelines. We'll configure the essential credentials below.
Navigate to credentials
Dashboard → Manage Jenkins → Credentials → System → Global credentials → Add Credentials
Configure GitHub credentials:
- Username: arpansahu (your GitHub username)
- Password: ghp_xxxxxxxxxxxx (GitHub Personal Access Token)
- ID: github-auth
- Description: Github Auth
Click: Create
Note: Generate GitHub PAT at https://github.com/settings/tokens with scopes: repo, admin:repo_hook
Add Harbor credentials
Dashboard → Manage Jenkins → Credentials → System → Global credentials → Add Credentials
Configure Harbor credentials:
- Username: admin (or robot account: robot$ci-bot)
- Password: (your Harbor password or robot token)
- ID: harbor-credentials
- Description: harbor-credentials
Click: Create
Add Jenkins admin credentials
Dashboard → Manage Jenkins → Credentials → System → Global credentials → Add Credentials
Configure Jenkins API credentials:
- Username: admin (Jenkins admin username)
- Password: (your Jenkins admin password or API token)
- ID: jenkins-admin-credentials
- Description: Jenkins admin credentials for API authentication and pipeline usage
Click: Create
Use case: Pipeline triggers, REST API calls, remote job execution
Add Sentry CLI token
Dashboard → Manage Jenkins → Credentials → System → Global credentials → Add Credentials
Configure Sentry credentials (kind: Secret text):
- Secret: (your Sentry auth token)
- ID: sentry-auth-token
- Description: Sentry CLI Authentication Token
Click: Create
Use case: Sentry release tracking, source map uploads, error monitoring integration
Add GitHub credentials (second entry)
Dashboard → Manage Jenkins → Credentials → System → Global credentials → Add Credentials
Configure GitHub credentials:
- ID: github_auth
- Description: GitHub authentication for branch merging and repository operations
Click: Create
How to generate GitHub PAT:
1. Go to GitHub → Settings → Developer settings → Personal access tokens → Tokens (classic)
2. Generate a new token with the repo permission (Full control of private repositories)
3. Copy the token immediately (shown only once)
Use case: Automated branch merging, repository operations, deployment workflows
Global variables are available to all Jenkins pipelines.
Navigate to system configuration
Dashboard → Manage Jenkins → System
Scroll to Global properties
Check: Environment variables
Add global variables
Click: Add (for each variable)
| Name | Value | Description |
|---|---|---|
| MAIL_JET_API_KEY | (your Mailjet API key) | Email notification service |
| MAIL_JET_API_SECRET | (your Mailjet secret) | Email notification service |
| MAIL_JET_EMAIL_ADDRESS | noreply@arpansahu.space | Sender email address |
| MY_EMAIL_ADDRESS | your-email@example.com | Notification recipient |
Save configuration
Scroll down and click: Save
Jenkins needs Docker access to build containerized applications.
sudo usermod -aG docker jenkins
sudo systemctl restart jenkins
sudo -u jenkins docker ps
Expected: Docker container list (even if empty)
Required if pipelines need to copy files from protected directories.
sudo visudo
Add Jenkins sudo permissions
Add at end of file:
# Allow Jenkins to run specific commands without password
jenkins ALL=(ALL) NOPASSWD: /bin/cp, /bin/mkdir, /bin/chown
Or for full sudo access (less secure):
jenkins ALL=(ALL) NOPASSWD: ALL
Save and exit
In nano: Ctrl + O, Enter, Ctrl + X
In vi: Esc, :wq, Enter
Verify sudo access
sudo -u jenkins sudo -l
Each project needs its own Nginx configuration for deployment.
sudo nano /etc/nginx/sites-available/my-django-app
# Django App - HTTP → HTTPS
server {
listen 80;
listen [::]:80;
server_name myapp.arpansahu.space;
return 301 https://$host$request_uri;
}
# Django App - HTTPS
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name myapp.arpansahu.space;
ssl_certificate /etc/nginx/ssl/arpansahu.space/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/arpansahu.space/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
# WebSocket support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
For Kubernetes deployment (alternative)
Replace the proxy_pass line with:
proxy_pass http://<CLUSTER_IP>:30080;
sudo ln -s /etc/nginx/sites-available/my-django-app /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
Create Jenkinsfile-build in your project repository root.
Example Jenkinsfile-build:
pipeline {
agent { label 'local' }
environment {
HARBOR_URL = 'harbor.arpansahu.space'
HARBOR_PROJECT = 'library'
IMAGE_NAME = 'my-django-app'
IMAGE_TAG = "${env.BUILD_NUMBER}"
}
stages {
stage('Checkout') {
steps {
checkout scm
}
}
stage('Build Docker Image') {
steps {
script {
docker.build("${HARBOR_URL}/${HARBOR_PROJECT}/${IMAGE_NAME}:${IMAGE_TAG}")
}
}
}
stage('Push to Harbor') {
steps {
script {
docker.withRegistry("https://${HARBOR_URL}", 'harbor-credentials') {
docker.image("${HARBOR_URL}/${HARBOR_PROJECT}/${IMAGE_NAME}:${IMAGE_TAG}").push()
docker.image("${HARBOR_URL}/${HARBOR_PROJECT}/${IMAGE_NAME}:${IMAGE_TAG}").push('latest')
}
}
}
}
stage('Trigger Deploy') {
steps {
build job: 'my-django-app-deploy', wait: false
}
}
}
post {
success {
emailext(
subject: "Build Success: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
body: "Build completed successfully.",
to: "${env.MY_EMAIL_ADDRESS}"
)
}
failure {
emailext(
subject: "Build Failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
body: "Build failed. Check Jenkins console output.",
to: "${env.MY_EMAIL_ADDRESS}"
)
}
}
}
Create Jenkinsfile-deploy in your project repository root.
Example Jenkinsfile-deploy:
pipeline {
agent { label 'local' }
environment {
HARBOR_URL = 'harbor.arpansahu.space'
HARBOR_PROJECT = 'library'
IMAGE_NAME = 'my-django-app'
CONTAINER_NAME = 'my-django-app'
CONTAINER_PORT = '8000'
}
stages {
stage('Stop Old Container') {
steps {
script {
sh """
docker stop ${CONTAINER_NAME} || true
docker rm ${CONTAINER_NAME} || true
"""
}
}
}
stage('Pull Latest Image') {
steps {
script {
docker.withRegistry("https://${HARBOR_URL}", 'harbor-credentials') {
docker.image("${HARBOR_URL}/${HARBOR_PROJECT}/${IMAGE_NAME}:latest").pull()
}
}
}
}
stage('Deploy Container') {
steps {
script {
sh """
docker run -d \
--name ${CONTAINER_NAME} \
--restart unless-stopped \
-p ${CONTAINER_PORT}:8000 \
--env-file /var/lib/jenkins/.env/${IMAGE_NAME} \
${HARBOR_URL}/${HARBOR_PROJECT}/${IMAGE_NAME}:latest
"""
}
}
}
stage('Health Check') {
steps {
script {
sleep(time: 10, unit: 'SECONDS')
sh "curl -f http://localhost:${CONTAINER_PORT}/health || exit 1"
}
}
}
}
post {
success {
emailext(
subject: "Deploy Success: ${env.JOB_NAME}",
body: "Deployment completed successfully.",
to: "${env.MY_EMAIL_ADDRESS}"
)
}
failure {
emailext(
subject: "Deploy Failed: ${env.JOB_NAME}",
body: "Deployment failed. Check Jenkins console output.",
to: "${env.MY_EMAIL_ADDRESS}"
)
}
}
}
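The Health Check stage above sleeps a fixed 10 seconds and curls once. A retry loop is more forgiving when container start-up time varies; here is a minimal sketch (the script name, the /health endpoint, and the defaults are assumptions, not part of the guide's pipelines):

```shell
#!/usr/bin/env bash
# healthcheck.sh -- poll an HTTP endpoint until it answers or we give up.
set -u

wait_for_url() {
    local url="$1" max_attempts="${2:-12}" delay="${3:-5}" attempt
    for attempt in $(seq 1 "$max_attempts"); do
        # -f makes curl fail on HTTP errors (4xx/5xx); -sS keeps output quiet
        if curl -fsS -o /dev/null "$url"; then
            echo "healthy after ${attempt} attempt(s)"
            return 0
        fi
        sleep "$delay"
    done
    echo "unhealthy: ${url} did not respond after ${max_attempts} attempts" >&2
    return 1
}

# Script entry point: ./healthcheck.sh <url> [max_attempts] [delay_seconds]
if [ -n "${1:-}" ]; then
    wait_for_url "$@"
fi
```

The pipeline's Health Check stage could then run `sh "./healthcheck.sh http://localhost:${CONTAINER_PORT}/health"` instead of the fixed sleep.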
Create new pipeline
Dashboard → New Item
Configure pipeline
- Name: my-django-app-build
- Type: Pipeline
Configure Pipeline definition (Pipeline script from SCM):
- Repository URL: https://github.com/arpansahu/my-django-app.git
- Credentials: github-auth
- Branch Specifier: */build
- Script Path: Jenkinsfile-build
Save pipeline
Click: Save
Create new pipeline
Dashboard → New Item
Configure pipeline
- Name: my-django-app-deploy
- Type: Pipeline
Configure Pipeline definition (Pipeline script from SCM):
- Repository URL: https://github.com/arpansahu/my-django-app.git
- Credentials: github-auth
- Branch Specifier: */main
- Script Path: Jenkinsfile-deploy
Save pipeline
Click: Save
Store sensitive environment variables outside the repository.
sudo mkdir -p /var/lib/jenkins/.env
sudo chown jenkins:jenkins /var/lib/jenkins/.env
sudo nano /var/lib/jenkins/.env/my-django-app
# Django settings
SECRET_KEY=your-secret-key-here
DEBUG=False
ALLOWED_HOSTS=myapp.arpansahu.space
# Database
DATABASE_URL=postgresql://user:pass@db:5432/myapp
# Redis
REDIS_URL=redis://redis:6379/0
# Email
EMAIL_BACKEND=django.core.mail.backends.smtp.EmailBackend
EMAIL_HOST=smtp.mailjet.com
EMAIL_PORT=587
EMAIL_USE_TLS=True
EMAIL_HOST_USER=your-mailjet-api-key
EMAIL_HOST_PASSWORD=your-mailjet-secret
# Sentry
SENTRY_DSN=https://xxx@sentry.io/xxx
sudo chown jenkins:jenkins /var/lib/jenkins/.env/my-django-app
sudo chmod 600 /var/lib/jenkins/.env/my-django-app
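Before a deploy runs, it can be worth asserting that the env file exists and is not group- or world-readable. This pre-flight check is a hypothetical addition (check_env_file is not part of the guide's pipelines):

```shell
#!/usr/bin/env bash
# Pre-flight check: the env file must exist and must have restrictive permissions.
set -u

check_env_file() {
    local f="$1" mode
    if [ ! -f "$f" ]; then
        echo "missing env file: $f" >&2
        return 1
    fi
    mode=$(stat -c '%a' "$f")   # numeric permissions, e.g. 600
    case "$mode" in
        600|400) echo "ok: $f (mode $mode)" ;;
        *) echo "insecure permissions on $f (mode $mode); run: chmod 600 $f" >&2; return 1 ;;
    esac
}
```

For example: `check_env_file /var/lib/jenkins/.env/my-django-app` at the top of the deploy stage.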
Install Email Extension Plugin
Dashboard → Manage Jenkins → Plugins → Available plugins
Search: Email Extension Plugin
Click: Install
Configure SMTP settings
Dashboard → Manage Jenkins → System → Extended E-mail Notification
Configure:
- SMTP server: in-v3.mailjet.com
- SMTP port: 587
- Use SMTP Authentication: ✓ Checked
- User Name: ${MAIL_JET_API_KEY}
- Password: ${MAIL_JET_API_SECRET}
- Use TLS: ✓ Checked
- Default user e-mail suffix: @arpansahu.space
Test email configuration
Click: Test configuration by sending test e-mail
Enter: ${MY_EMAIL_ADDRESS}
Expected: Email received
Save configuration
Click: Save
sudo systemctl status jenkins
sudo systemctl stop jenkins
sudo systemctl start jenkins
sudo systemctl restart jenkins
sudo journalctl -u jenkins -f
sudo tail -f /var/log/jenkins/jenkins.log
# Stop Jenkins
sudo systemctl stop jenkins
# Backup Jenkins home
sudo tar -czf jenkins-backup-$(date +%Y%m%d).tar.gz /var/lib/jenkins
# Start Jenkins
sudo systemctl start jenkins
sudo tar -czf jenkins-config-backup-$(date +%Y%m%d).tar.gz \
/var/lib/jenkins/config.xml \
/var/lib/jenkins/jobs/ \
/var/lib/jenkins/users/ \
/var/lib/jenkins/credentials.xml \
/var/lib/jenkins/secrets/
# Stop Jenkins
sudo systemctl stop jenkins
# Restore backup
sudo tar -xzf jenkins-backup-YYYYMMDD.tar.gz -C /
# Set ownership
sudo chown -R jenkins:jenkins /var/lib/jenkins
# Start Jenkins
sudo systemctl start jenkins
Jenkins not starting
Cause: Java not found or port conflict
Fix:
# Check Java installation
java -version
# Check if port 8080 is in use
sudo ss -tulnp | grep 8080
# Check Jenkins logs
sudo journalctl -u jenkins -n 50
Cannot push to Harbor from Jenkins
Cause: Docker credentials or network issue
Fix:
# Test Docker login as Jenkins user
sudo -u jenkins docker login harbor.arpansahu.space
# Check Jenkins can reach Harbor
sudo -u jenkins curl -I https://harbor.arpansahu.space
Pipeline fails with permission denied
Cause: Jenkins doesn't have Docker access
Fix:
# Add Jenkins to Docker group
sudo usermod -aG docker jenkins
# Restart Jenkins
sudo systemctl restart jenkins
# Verify
sudo -u jenkins docker ps
Email notifications not working
Cause: SMTP configuration incorrect
Fix: Re-check the Extended E-mail Notification settings (Manage Jenkins → System), confirm the Mailjet API key/secret global variables, and resend the test e-mail.
GitHub webhook not triggering builds
Cause: Webhook not configured or firewall blocking
Fix:
# Verify Jenkins is accessible from internet
curl -I https://jenkins.arpansahu.space
# Configure GitHub webhook
# Repository → Settings → Webhooks → Add webhook
# Payload URL: https://jenkins.arpansahu.space/github-webhook/
# Content type: application/json
# Events: Just the push event
Use HTTPS only
Strong authentication
# Enable security realm
Dashboard → Manage Jenkins → Security → Security Realm
Select: Jenkins' own user database
Enable CSRF protection
Dashboard → Manage Jenkins → Security → CSRF Protection
Check: Enable CSRF Protection
Limit build agent connections
Dashboard → Manage Jenkins → Security → Agents
Set: Fixed port (50000) or disable
Use credentials store
Regular updates
# Check for Jenkins updates
Dashboard → Manage Jenkins → System Information
# Update Jenkins
sudo apt update
sudo apt upgrade jenkins
# Automate with cron
sudo crontab -e
Add:
0 2 * * * /usr/local/bin/backup-jenkins.sh
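The cron entry above references a /usr/local/bin/backup-jenkins.sh that the guide doesn't show. A minimal sketch based on the manual backup commands earlier (the rotation count and backup directory are assumptions):

```shell
#!/usr/bin/env bash
# backup-jenkins.sh -- archive the Jenkins home directory with simple rotation.
set -euo pipefail

backup_jenkins() {
    local src="$1" dest="$2" keep="${3:-7}"
    mkdir -p "$dest"
    local archive="$dest/jenkins-backup-$(date +%Y%m%d-%H%M%S).tar.gz"
    # Archive the Jenkins home (stop Jenkins first for a fully consistent copy).
    tar -czf "$archive" -C "$(dirname "$src")" "$(basename "$src")"
    # Rotation: keep only the newest $keep archives.
    ls -1t "$dest"/jenkins-backup-*.tar.gz | tail -n +$((keep + 1)) | xargs -r rm -f
    echo "$archive"
}

# Cron entry point.
if [ "${1:-}" = "--run" ]; then
    backup_jenkins /var/lib/jenkins /var/backups/jenkins 7
fi
```

Run it once manually with `--run` and check the archive before trusting the cron job.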
sudo nano /etc/default/jenkins
Add/modify:
JAVA_ARGS="-Xmx2048m -Xms1024m"
Restart Jenkins:
sudo systemctl restart jenkins
Clean old builds
Configure in project:
- Discard old builds
- Keep max 10 builds
- Keep builds for 7 days
Use build agents
Distribute builds across multiple machines instead of building everything on the controller.
Check Jenkins system info
Dashboard → Manage Jenkins → System Information
Monitor disk usage
du -sh /var/lib/jenkins/*
Monitor build queue
Dashboard → Build Queue (left sidebar)
View build history
Dashboard → Build History (left sidebar)
Run these commands to verify Jenkins is working:
# Check Jenkins service
sudo systemctl status jenkins
# Check Java version
java -version
# Check port binding
sudo ss -tulnp | grep 8080
# Check Nginx config
sudo nginx -t
# Test HTTPS access
curl -I https://jenkins.arpansahu.space
# Verify Docker access
sudo -u jenkins docker ps
Then test in browser:
- Access: https://jenkins.arpansahu.space
- Login with admin credentials
- Verify all configured credentials exist
- Create test pipeline
- Run manual build
- Check email notification received
After following this guide, you will have:
| Component | Value |
|---|---|
| Jenkins URL | https://jenkins.arpansahu.space |
| Jenkins Port | 8080 (localhost only) |
| Jenkins Home | /var/lib/jenkins |
| Java Version | OpenJDK 21 |
| Admin User | admin |
| Nginx Config | /etc/nginx/sites-available/services |
Internet (HTTPS)
    │
    └── Nginx (TLS Termination)
         │   [Wildcard Certificate: *.arpansahu.space]
         │
         └── jenkins.arpansahu.space (Port 443 → 8080)
              │
              └── Jenkins Controller
                   │
                   ├── Credentials Store
                   │    ├── github-auth
                   │    ├── harbor-credentials
                   │    ├── jenkins-admin-credentials
                   │    └── sentry-auth-token
                   │
                   ├── Build Pipelines
                   │    ├── Jenkinsfile-build (Docker build + push)
                   │    └── Jenkinsfile-deploy (Docker deploy)
                   │
                   └── Integration
                        ├── GitHub (webhooks)
                        ├── Harbor (registry)
                        ├── Docker (builds)
                        ├── Mailjet (notifications)
                        └── Sentry (error tracking)
After setting up Jenkins:
My Jenkins instance: https://jenkins.arpansahu.space
For Harbor integration, see harbor.md documentation.
Harbor is an open-source container image registry that secures images with role-based access control, scans images for vulnerabilities, and signs images as trusted. It extends Docker Distribution by adding enterprise features like security, identity management, and image replication. This guide provides a complete, production-ready setup with Nginx reverse proxy.
Before installing Harbor, ensure you have:
Internet (HTTPS)
    │
    └── Nginx (Port 443) - TLS Termination
         │
         └── harbor.arpansahu.space
              │
              └── Harbor Internal Nginx (localhost:8080)
                   │
                   ├── Harbor Core
                   ├── Harbor Registry
                   ├── Harbor Portal (Web UI)
                   ├── Trivy (Vulnerability Scanner)
                   ├── Notary (Image Signing)
                   └── ChartMuseum (Helm Charts)
Key Principles:
- Harbor runs on localhost only
- System Nginx handles all external TLS
- Harbor has its own internal Nginx
- All data persisted in Docker volumes
- Automatic restart via systemd
Advantages:
- Role-based access control (RBAC)
- Vulnerability scanning with Trivy
- Image signing and trust (Notary)
- Helm chart repository
- Image replication
- Garbage collection
- Web UI for management
- Docker Hub proxy cache
Use Cases:
- Private Docker registry for organization
- Secure image storage
- Vulnerability assessment
- Compliance and auditing
- Multi-project isolation
- Image lifecycle management
cd /opt
sudo wget https://github.com/goharbor/harbor/releases/download/v2.11.0/harbor-offline-installer-v2.11.0.tgz
Check for latest version at: https://github.com/goharbor/harbor/releases
sudo tar -xzvf harbor-offline-installer-v2.11.0.tgz
cd harbor
ls -la
Expected files:
- harbor.yml.tmpl
- install.sh
- prepare
- common.sh
- harbor.*.tar.gz (images)
sudo cp harbor.yml.tmpl harbor.yml
sudo nano harbor.yml
Configure essential settings
Find and modify these lines:
# Hostname for Harbor
hostname: harbor.arpansahu.space
# HTTP settings (used for internal communication)
http:
port: 8080
# HTTPS settings (disabled - Nginx handles this)
# Comment out or remove the https section completely
# https:
# port: 443
# certificate: /path/to/cert
# private_key: /path/to/key
# Harbor admin password
harbor_admin_password: YourStrongPasswordHere
# Database settings (PostgreSQL)
database:
password: ChangeDatabasePassword
max_idle_conns: 100
max_open_conns: 900
# Data volume location
data_volume: /data
# Trivy (vulnerability scanner)
trivy:
ignore_unfixed: false
skip_update: false
offline_scan: false
insecure: false
# Job service
jobservice:
max_job_workers: 10
# Notification webhook job
notification:
webhook_job_max_retry: 3
# Log settings
log:
level: info
local:
rotate_count: 50
rotate_size: 200M
location: /var/log/harbor
Important changes:
- Set `hostname` to your domain
- Set `http.port` to 8080 (internal)
- Comment out entire `https` section
- Change `harbor_admin_password`
- Change `database.password`
- Keep `data_volume: /data` for persistence
Save and exit
In nano: Ctrl + O, Enter, Ctrl + X
sudo ./install.sh --with-trivy
Note: Older Harbor releases also accepted --with-notary and --with-chartmuseum, but ChartMuseum (removed in v2.8) and Notary (removed in v2.9) are no longer shipped, so only --with-trivy applies to v2.11.0.
This will:
- Load Harbor Docker images
- Generate docker-compose.yml
- Create necessary directories
- Start all Harbor services
Installation takes 5-10 minutes depending on the system.
sudo docker compose ps
Expected services (all should be "Up"):
- harbor-core
- harbor-db (PostgreSQL)
- harbor-jobservice
- harbor-log
- harbor-portal (Web UI)
- nginx (Harbor's internal)
- redis
- registry
- registryctl
- trivy-adapter
- notary-server, notary-signer, chartmuseum (older Harbor versions only)
sudo docker compose logs -f
Press `Ctrl + C` to exit logs.
sudo nano /etc/nginx/sites-available/services
# Harbor Registry - HTTP → HTTPS
server {
listen 80;
listen [::]:80;
server_name harbor.arpansahu.space;
return 301 https://$host$request_uri;
}
# Harbor Registry - HTTPS
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name harbor.arpansahu.space;
ssl_certificate /etc/nginx/ssl/arpansahu.space/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/arpansahu.space/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
location / {
# Allow large image uploads (2GB recommended, 0 for unlimited)
# Note: Set to at least 2G for typical Docker images
client_max_body_size 2G;
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
# WebSocket support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# Timeouts for large image pushes
proxy_connect_timeout 300;
proxy_send_timeout 300;
proxy_read_timeout 300;
}
}
sudo nginx -t
sudo systemctl reload nginx
Harbor needs to start automatically after a reboot; Docker Compose alone doesn't provide this.
sudo nano /etc/systemd/system/harbor.service
[Unit]
Description=Harbor Container Registry
After=docker.service
Requires=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/harbor
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down
Restart=on-failure
RestartSec=10
[Install]
WantedBy=multi-user.target
sudo systemctl daemon-reload
sudo systemctl enable harbor
sudo systemctl status harbor
Expected: Loaded and active
# Allow HTTP/HTTPS (if not already allowed)
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
# Block direct access to Harbor port
sudo ufw deny 8080/tcp
# Reload firewall
sudo ufw reload
Configure router port forwarding
Access router admin: https://airtel.arpansahu.space (or http://192.168.1.1:81)
Add port forwarding rules:
| Service | External Port | Internal IP | Internal Port | Protocol |
|---|---|---|---|---|
| Harbor HTTP | 80 | 192.168.1.200 | 80 | TCP |
| Harbor HTTPS | 443 | 192.168.1.200 | 443 | TCP |
Note: Do NOT forward port 8080 (Harbor internal port).
sudo docker compose ps
All should show "Up" status.
curl -I http://127.0.0.1:8080
Expected: HTTP 200 or 301
curl -I https://harbor.arpansahu.space
Expected: HTTP 200
Access Harbor Web UI
Go to: https://harbor.arpansahu.space
Login with admin credentials
- Username: admin
- Password: (the harbor_admin_password from harbor.yml)
Change admin password
Create project
- Name: library (default) or a custom name
Create robot account for CI/CD
- Name: ci-bot
docker login harbor.arpansahu.space
Enter:
- Username: admin (or your username)
- Password: (your Harbor password)
Expected: Login Succeeded
docker login harbor.arpansahu.space -u 'robot$ci-bot' -p YOUR_ROBOT_TOKEN
Note: quote the robot username so the shell doesn't expand $ci as a variable.
docker tag nginx:latest harbor.arpansahu.space/library/nginx:latest
Format: `harbor.domain.com/project/image:tag`
docker push harbor.arpansahu.space/library/nginx:latest
Verify in Harbor UI
docker pull harbor.arpansahu.space/library/nginx:latest
services:
web:
image: harbor.arpansahu.space/library/nginx:latest
Retention policies automatically delete old images to save space.
Navigate to project
Add retention rule
Click: Add Rule
Configure:
- Repositories: matching ** (all repositories)
- By artifact count: retain the most recently pulled 3 artifacts
- Tags: matching ** (all tags)
- Untagged artifacts: ✓ Checked (delete untagged)
This keeps the last 3 pulled images and deletes the rest.
Schedule retention policy
Click: Add Retention Rule → Schedule
Configure schedule:
- Type: Daily / Weekly / Monthly
- Time: 02:00 AM (off-peak)
- Cron: 0 2 * * * (2 AM daily)
Click: Save
Test retention policy
Click: Dry Run
This shows what would be deleted without actually deleting.
Harbor uses Trivy to scan images for vulnerabilities.
Configure automatic scanning
Manual scan existing image
View scan results
Set CVE allowlist (optional)
sudo systemctl status harbor
sudo systemctl stop harbor
or
cd /opt/harbor
sudo docker compose down
sudo systemctl start harbor
or
cd /opt/harbor
sudo docker compose up -d
sudo systemctl restart harbor
cd /opt/harbor
sudo docker compose logs -f
sudo docker compose logs -f harbor-core
# Stop Harbor
sudo systemctl stop harbor
# Backup data directory
sudo tar -czf harbor-data-backup-$(date +%Y%m%d).tar.gz /data
# Backup configuration
sudo cp /opt/harbor/harbor.yml /backup/harbor-config-$(date +%Y%m%d).yml
# Backup database
sudo docker exec harbor-db pg_dumpall -U postgres > harbor-db-backup-$(date +%Y%m%d).sql
# Start Harbor
sudo systemctl start harbor
# Stop Harbor
sudo systemctl stop harbor
# Restore data directory
sudo tar -xzf harbor-data-backup-YYYYMMDD.tar.gz -C /
# Restore configuration
sudo cp /backup/harbor-config-YYYYMMDD.yml /opt/harbor/harbor.yml
# Restore database
sudo docker exec -i harbor-db psql -U postgres < harbor-db-backup-YYYYMMDD.sql
# Start Harbor
sudo systemctl start harbor
Harbor containers not starting
Cause: Port conflict or insufficient resources
Fix:
# Check if port 8080 is in use
sudo ss -tulnp | grep 8080
# Check Docker logs
cd /opt/harbor
sudo docker compose logs
# Check system resources
free -h
df -h
Cannot login to Harbor
Cause: Wrong credentials or database issue
Fix:
cd /opt/harbor
sudo docker compose exec harbor-core harbor-core password-reset
Image push fails
Cause: Storage full or permission issues
Fix:
# Check disk space
df -h /data
# Check Harbor logs
sudo docker compose logs -f registry
# Check data directory permissions
sudo ls -la /data
SSL certificate errors
Cause: Nginx certificate misconfigured
Fix:
# Verify certificate
openssl x509 -in /etc/nginx/ssl/arpansahu.space/fullchain.pem -noout -dates
# Check Nginx configuration
sudo nginx -t
# Reload Nginx
sudo systemctl reload nginx
Vulnerability scanning not working
Cause: Trivy adapter not running or internet connectivity
Fix:
# Check Trivy adapter
sudo docker compose ps trivy-adapter
# Check Trivy logs
sudo docker compose logs trivy-adapter
# Update Trivy database manually
sudo docker compose exec trivy-adapter /home/scanner/trivy --download-db-only
Use strong passwords
Enable HTTPS only
Implement RBAC
Enable vulnerability scanning
Configure image retention
Regular backups
# Automate with cron
sudo crontab -e
Add:
0 2 * * * /usr/local/bin/backup-harbor.sh
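Likewise, the cron entry references /usr/local/bin/backup-harbor.sh, which the guide doesn't show. A sketch mirroring the manual backup steps above (the paths follow this guide; the destination directory is an assumption):

```shell
#!/usr/bin/env bash
# backup-harbor.sh -- archive Harbor's data directory and configuration.
set -euo pipefail

backup_harbor() {
    local data_dir="$1" config="$2" dest="$3"
    local stamp
    stamp=$(date +%Y%m%d)
    mkdir -p "$dest"
    # Data directory (/data in this guide).
    tar -czf "$dest/harbor-data-backup-$stamp.tar.gz" \
        -C "$(dirname "$data_dir")" "$(basename "$data_dir")"
    # Configuration (/opt/harbor/harbor.yml in this guide).
    cp "$config" "$dest/harbor-config-$stamp.yml"
    echo "$dest/harbor-data-backup-$stamp.tar.gz"
}

# Cron entry point: also dump the database, as in the manual steps.
if [ "${1:-}" = "--run" ]; then
    sudo mkdir -p /var/backups/harbor
    sudo docker exec harbor-db pg_dumpall -U postgres \
        > "/var/backups/harbor/harbor-db-backup-$(date +%Y%m%d).sql"
    backup_harbor /data /opt/harbor/harbor.yml /var/backups/harbor
fi
```

As with Jenkins, verify one manual `--run` before relying on cron.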
# Regular log review
sudo docker compose logs --since 24h | grep ERROR
Configure garbage collection
Optimize database
# Run vacuum on PostgreSQL
sudo docker compose exec harbor-db vacuumdb -U postgres -d registry
Configure resource limits
Edit docker-compose.yml (auto-generated):
services:
registry:
deploy:
resources:
limits:
memory: 2G
reservations:
memory: 512M
Enable Redis cache
Harbor uses Redis by default for caching.
Increase Redis memory if needed.
curl -k https://harbor.arpansahu.space/api/v2.0/health
sudo docker stats
du -sh /data/*
sudo journalctl -u harbor -f
Backup current installation
Follow backup procedure above.
Download new Harbor version
cd /opt
sudo wget https://github.com/goharbor/harbor/releases/download/vX.Y.Z/harbor-offline-installer-vX.Y.Z.tgz
sudo systemctl stop harbor
sudo mv harbor harbor-old
sudo tar -xzvf harbor-offline-installer-vX.Y.Z.tgz
sudo cp harbor-old/harbor.yml harbor/harbor.yml
cd /opt/harbor
sudo ./install.sh --with-trivy
sudo systemctl start harbor
Run these commands to verify Harbor is working:
# Check all containers
sudo docker compose ps
# Check systemd service
sudo systemctl status harbor
# Check local access
curl -I http://127.0.0.1:8080
# Check HTTPS access
curl -I https://harbor.arpansahu.space
# Check Nginx config
sudo nginx -t
# Check firewall
sudo ufw status | grep -E '(80|443)'
# Test Docker login
docker login harbor.arpansahu.space
Then test in browser:
- Access: https://harbor.arpansahu.space
- Login with admin credentials
- Create test project
- Push test image
- Scan image for vulnerabilities
- Verify retention policy configured
After following this guide, you will have:
| Component | Value |
|---|---|
| Harbor URL | https://harbor.arpansahu.space |
| Internal Port | 8080 (localhost only) |
| Admin User | admin |
| Default Project | library |
| Data Directory | /data |
| Config File | /opt/harbor/harbor.yml |
| Service File | /etc/systemd/system/harbor.service |
Internet (HTTPS)
    │
    └── Nginx (TLS Termination)
         │   [Wildcard Certificate: *.arpansahu.space]
         │
         └── harbor.arpansahu.space (Port 443 → 8080)
              │
              └── Harbor Stack (Docker Compose)
                   ├── Harbor Core (API + Logic)
                   ├── Harbor Portal (Web UI)
                   ├── Registry (Image Storage)
                   ├── PostgreSQL (Metadata)
                   ├── Redis (Cache)
                   ├── Trivy (Vulnerability Scanner)
                   ├── Notary (Image Signing)
                   └── ChartMuseum (Helm Charts)
Symptom: Docker push fails with 413 Request Entity Too Large when pushing large images.
Cause: The system Nginx client_max_body_size limit is too small (the default is 1MB).
Solution:
sudo nano /etc/nginx/sites-available/services
location / {
client_max_body_size 2G; # Adjust as needed
proxy_pass http://127.0.0.1:8080;
# ... rest of config
}
sudo nginx -t
sudo systemctl reload nginx
Note: Harbor's internal nginx is already set to client_max_body_size 0; (unlimited) in its /etc/nginx/nginx.conf, so you only need to fix the external/system nginx configuration at /etc/nginx/sites-available/services.
Verify Harbor's internal nginx (optional):
docker exec nginx cat /etc/nginx/nginx.conf | grep client_max_body_size
# Should show: client_max_body_size 0;
Check these:
# 1. Is Harbor running?
sudo systemctl status harbor
docker ps | grep harbor
# 2. Is nginx running?
sudo systemctl status nginx
# 3. Check logs
sudo journalctl -u harbor -n 50
docker logs nginx
# Reset admin password
cd /opt/harbor
sudo docker compose stop
sudo ./prepare
sudo docker compose up -d
# Check disk usage
df -h /data
# Run garbage collection
docker exec harbor-core harbor-gc
# Or via UI: Administration → Garbage Collection → Run Now
Check nginx configuration for these settings:
proxy_buffering off;
proxy_request_buffering off;
proxy_connect_timeout 300;
proxy_send_timeout 300;
proxy_read_timeout 300;
After setting up Harbor:
My Harbor instance: https://harbor.arpansahu.space
For CI/CD integration, see Jenkins documentation.
| Service | Purpose | Configuration |
|---|---|---|
| Mailjet | Transactional email delivery (activation, welcome, password reset, social connected) | MAIL_JET_API_KEY, MAIL_JET_API_SECRET |
| Sentry | Error tracking and performance monitoring | SENTRY_DSH_URL, SENTRY_ENVIRONMENT |
| Google OAuth | Social authentication | GOOGLE_CLIENT_ID, GOOGLE_CLIENT_SECRET |
| GitHub OAuth | Social authentication | GITHUB_CLIENT_ID, GITHUB_CLIENT_SECRET |
| Facebook OAuth | Social authentication | FACEBOOK_APP_ID, FACEBOOK_APP_SECRET |
| Twitter/X OAuth 2.0 | Social authentication | TWITTER_API_KEY, TWITTER_API_SECRET |
| LinkedIn OpenID Connect | Social authentication | LINKEDIN_CLIENT_ID, LINKEDIN_CLIENT_SECRET |
This project monitors the uptime of specified websites and sends an email alert if any website is down or returns a non-2xx status code. The project uses a shell script to set up a virtual environment, install dependencies, run the monitoring script, and then clean up the virtual environment.
git clone https://github.com/yourusername/website-uptime-monitor.git
cd website-uptime-monitor
.env File
Create a .env file in the root directory and add the following content:
MAILJET_API_KEY=your_mailjet_api_key
MAILJET_SECRET_KEY=your_mailjet_secret_key
SENDER_EMAIL=your_sender_email@example.com
RECEIVER_EMAIL=your_receiver_email@example.com
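The core of such a monitor is the up/down decision: a site counts as up only when it answers with a 2xx status code. A minimal sketch of that logic (function names are illustrative, not the actual script's contents; the real script also sends the Mailjet alert configured above):

```shell
#!/usr/bin/env bash
# Up/down decision used by an uptime monitor.
set -u

# Echo the HTTP status code for a URL; curl prints 000 when unreachable.
http_status() {
    curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$1" || true
}

# A site is "up" only for 2xx responses.
is_up() {
    case "$1" in
        2??) return 0 ;;
        *)   return 1 ;;
    esac
}

check_site() {
    local url="$1" code
    code=$(http_status "$url")
    if is_up "$code"; then
        echo "UP $url ($code)"
    else
        echo "DOWN $url ($code)"   # the real script would send the e-mail alert here
        return 1
    fi
}
```

Note that redirects (3xx) and client errors (4xx) both count as down, matching the project description above.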
To run the script manually, give permissions and execute:
chmod +x ./setup_and_run.sh
./setup_and_run.sh
chmod +x ./docker_cleanup_mail.sh
./docker_cleanup_mail.sh
To run the scripts automatically at regular intervals, set up cron jobs with `crontab -e`:

```shell
0 */5 * * * /bin/bash /root/arpansahu-one-scripts/setup_and_run.sh >> /root/logs/website_up_time.log 2>&1
0 0 * * * export MAILJET_API_KEY="MAILJET_API_KEY" && export MAILJET_SECRET_KEY="MAILJET_SECRET_KEY" && export SENDER_EMAIL="SENDER_EMAIL" && export RECEIVER_EMAIL="RECEIVER_EMAIL" && /usr/bin/docker system prune -af --volumes > /root/logs/docker_prune.log 2>&1 && /root/arpansahu-one-scripts/docker_cleanup_mail.sh
```
```groovy
pipeline {
    agent { label 'local' }

    environment {
        ENV_PROJECT_NAME = "arpansahu_one_scripts"
    }

    stages {
        stage('Initialize') {
            steps {
                script {
                    echo "Current workspace path is: ${env.WORKSPACE}"
                }
            }
        }

        stage('Checkout') {
            steps {
                checkout scm
            }
        }
    }

    post {
        success {
            script {
                // Retrieve the latest commit message
                def commitMessage = sh(script: "git log -1 --pretty=%B", returnStdout: true).trim()

                if (currentBuild.description == 'DEPLOYMENT_EXECUTED') {
                    sh """curl -s \
                        -X POST \
                        --user $MAIL_JET_API_KEY:$MAIL_JET_API_SECRET \
                        https://api.mailjet.com/v3.1/send \
                        -H "Content-Type:application/json" \
                        -d '{
                            "Messages":[
                                {
                                    "From": {
                                        "Email": "$MAIL_JET_EMAIL_ADDRESS",
                                        "Name": "ArpanSahuOne Jenkins Notification"
                                    },
                                    "To": [
                                        {
                                            "Email": "$MY_EMAIL_ADDRESS",
                                            "Name": "Development Team"
                                        }
                                    ],
                                    "Subject": "Jenkins Build Pipeline your project ${currentBuild.fullDisplayName} Ran Successfully",
                                    "TextPart": "Hola Development Team, your project ${currentBuild.fullDisplayName} is now deployed",
                                    "HTMLPart": "<h3>Hola Development Team, your project ${currentBuild.fullDisplayName} is now deployed </h3> <br> <p> Build Url: ${env.BUILD_URL} </p>"
                                }
                            ]
                        }'"""
                }

                // Trigger the common_readme job for all repositories
                build job: 'common_readme', parameters: [string(name: 'environment', value: 'prod')], wait: false
            }
        }

        failure {
            sh """curl -s \
                -X POST \
                --user $MAIL_JET_API_KEY:$MAIL_JET_API_SECRET \
                https://api.mailjet.com/v3.1/send \
                -H "Content-Type:application/json" \
                -d '{
                    "Messages":[
                        {
                            "From": {
                                "Email": "$MAIL_JET_EMAIL_ADDRESS",
                                "Name": "ArpanSahuOne Jenkins Notification"
                            },
                            "To": [
                                {
                                    "Email": "$MY_EMAIL_ADDRESS",
                                    "Name": "Developer Team"
                                }
                            ],
                            "Subject": "Jenkins Build Pipeline your project ${currentBuild.fullDisplayName} Ran Failed",
                            "TextPart": "Hola Development Team, your project ${currentBuild.fullDisplayName} deployment failed",
                            "HTMLPart": "<h3>Hola Development Team, your project ${currentBuild.fullDisplayName} is not deployed, Build Failed </h3> <br> <p> Build Url: ${env.BUILD_URL} </p>"
                        }
                    ]
                }'"""
        }
    }
}
```
Note: `agent { label 'local' }` specifies which node executes the Jenkins deployment job. The local Linux server node is labelled `local`, so projects with this label run on that machine.
Make sure to create a Pipeline project and name it whatever you want; I have named it `great_chat`.
In the picture above you can see the credentials section. Add your GitHub and Harbor credentials there, using `harbor-credentials` as the ID for the Harbor credentials. Credentials can be added from Manage Jenkins on the home page --> Manage Credentials.
```shell
sudo vi /var/lib/jenkins/workspace/arpansahu_one_script/.env
```

Your workspace name may be different. Add all the environment variables required, as listed in the README file.
Add global Jenkins variables from Dashboard --> Manage Jenkins --> Configure System, for example `MAIL_JET_API_KEY`.
Now you are good to go.
This project is licensed under the MIT License. See the LICENSE file for details.
To run this project, you will need to add the following environment variables to your `.env` file:
```shell
SECRET_KEY=
DEBUG=False
ALLOWED_HOSTS=
MAIL_JET_API_KEY=
MAIL_JET_API_SECRET=
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_STORAGE_BUCKET_NAME=
BUCKET_TYPE=
USE_S3=True
DOMAIN=
PROTOCOL=
DATABASE_URL=
REDIS_CLOUD_URL=
SENTRY_ENVIRONMENT=
SENTRY_DSH_URL=
HARBOR_USERNAME=
HARBOR_PASSWORD=
HARBOR_URL=https://harbor.example.com
MY_EMAIL_ADDRESS=
FLOWER_ADMIN_USERNAME=
FLOWER_ADMIN_PASS=
RABBITMQ_HOST=
RABBITMQ_PORT=5672
RABBITMQ_USER=
RABBITMQ_PASSWORD=
RABBITMQ_VHOST=/
RABBITMQ_MANAGEMENT_PORT=15672
KAFKA_BOOTSTRAP_SERVERS=
KAFKA_SECURITY_PROTOCOL=SASL_SSL
KAFKA_SASL_MECHANISM=PLAIN
KAFKA_SASL_USERNAME=
KAFKA_SASL_PASSWORD=
KAFKA_SSL_TRUSTSTORE_PASSWORD=
KAFKA_SSL_KEYSTORE_PASSWORD=
ELASTICSEARCH_HOST=
ELASTICSEARCH_USER=
ELASTICSEARCH_PASSWORD=
ELASTICSEARCH_INDEX_PREFIX=django_starter
GOOGLE_CLIENT_ID=
GOOGLE_CLIENT_SECRET=
GITHUB_CLIENT_ID=
GITHUB_CLIENT_SECRET=
FACEBOOK_APP_ID=
FACEBOOK_APP_SECRET=
TWITTER_API_KEY=
TWITTER_API_SECRET=
LINKEDIN_CLIENT_ID=
LINKEDIN_CLIENT_SECRET=
```
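Once the `.env` file is in place, these values are typically read at startup. The project may use a package such as django-environ or python-decouple for this; a minimal stdlib-only sketch of the same idea (helper names here are illustrative):

```python
import os

def env(name, default=None, required=False):
    """Read one setting from the environment, failing fast when a
    required value (e.g. SECRET_KEY) is missing."""
    value = os.environ.get(name, default)
    if required and value is None:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

def env_bool(name, default=False):
    """Interpret values such as DEBUG=False or USE_S3=True."""
    return os.environ.get(name, str(default)).strip().lower() in ("1", "true", "yes")

# Example usage in settings.py:
# SECRET_KEY = env("SECRET_KEY", required=True)
# DEBUG = env_bool("DEBUG")
# ALLOWED_HOSTS = env("ALLOWED_HOSTS", "").split(",")
```

Failing fast on missing required values surfaces misconfiguration at deploy time instead of as an obscure runtime error later.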