Imagine yourself, a developer, late at night, staring at your screen after finally getting a geospatial web application (built with GeoDjango) to run smoothly on your local machine. Layers load correctly, shapefiles upload without a glitch, and the maps dance beautifully in the browser. Victory? Not quite. Because the next question is inevitable.
How do we take this local GeoDjango project and make it live on the internet so that others can use it?
That’s where deployment comes in and, for many, Heroku has become the stepping stone between a local proof-of-concept and a production-ready, cloud-hosted geospatial application. In this post, we’ll walk through the deployment process of a GeoDjango application on Heroku, highlighting lessons learned and practical steps you can follow to host your own project. This article is a bit personal: it draws on my own experience deploying, debugging, and testing the newly revamped Zimbabwe Geoportal.
Why Heroku?
Heroku is a platform-as-a-service (PaaS) that abstracts away server headaches. You don’t have to manage Linux packages, Nginx, or Postgres tuning manually; instead, you get a clean interface to deploy straight from GitHub.
PS: You can also deploy using the Heroku CLI, but I prefer the GitHub route, which is what this article focuses on.
For GeoDjango developers, Heroku offers:
- Quick deployment pipelines.
- Managed Postgres with PostGIS enabled (as an add-on).
- Buildpacks to install GDAL, GEOS, and PROJ, which are critical for geospatial functions.
Step 1: Prepare Your GeoDjango Project
Before deployment, make sure your local project is structured properly. At a minimum, you’ll need the following files, named exactly as shown, with the contents listed under each. Just replace myproject with the name of your Django project.
Procfile
release: python manage.py migrate
web: gunicorn myproject.wsgi --log-file -
.python-version
3.12
PS: Use a stable Python version, ideally the one you developed against. Specify only the major and minor version (as above), not the patch release.
requirements.txt
Django>=5.0,<6
gunicorn
dj-database-url
psycopg2-binary
whitenoise
django-storages[boto3]
boto3
Plus any geospatial libraries you use: shapely, fiona, rasterio, pyproj.
Static file handling
In settings.py:
STATIC_URL = "/static/"
STATIC_ROOT = BASE_DIR / "staticfiles"
STORAGES = {
    "default": {
        "BACKEND": "django.core.files.storage.FileSystemStorage"
    },
    "staticfiles": {
        "BACKEND": "whitenoise.storage.CompressedManifestStaticFilesStorage"
    },
}
WhiteNoise serves static files efficiently inside Heroku. Note that when you define STORAGES yourself, Django does not merge in its defaults, so keep the "default" entry (we will point it at cloud storage in Step 4).
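WhiteNoise also needs to be enabled as middleware. A minimal sketch, placing it directly after Django’s SecurityMiddleware as the WhiteNoise docs recommend:

MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "whitenoise.middleware.WhiteNoiseMiddleware",
    # ... the rest of your middleware ...
]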
Step 2: Handle Geospatial Dependencies
GeoDjango relies on GDAL, GEOS, and PROJ. These libraries are not natively available on vanilla Heroku, but the community has provided a solution:
- Add the Heroku Geo Buildpack inside your Heroku app settings:
https://github.com/heroku/heroku-geo-buildpack
- Make sure it comes before the Python buildpack so that GDAL is available when Python packages compile.
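Once the buildpack is in place, you can sanity-check that Django sees the libraries. A small sketch to run in a Django shell on the dyno (for example via heroku run python manage.py shell):

# Confirm GeoDjango can load its C libraries
from django.contrib.gis.gdal import gdal_version
from django.contrib.gis.geos import geos_version

print("GDAL:", gdal_version())
print("GEOS:", geos_version())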
Step 3: Database with PostGIS
Heroku Postgres is powerful enough, but you need to enable PostGIS.
- Add the Postgres add-on in the Resources tab.
- Run this once (for example via heroku pg:psql):
CREATE EXTENSION IF NOT EXISTS postgis;
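If you’d rather not run SQL by hand, an alternative is an empty migration that creates the extension for you. A minimal sketch (the app and file name are hypothetical; make sure it runs before any migration that adds geometry columns):

# myapp/migrations/0001_enable_postgis.py
from django.contrib.postgres.operations import CreateExtension
from django.db import migrations

class Migration(migrations.Migration):

    dependencies = []

    operations = [
        CreateExtension("postgis"),
    ]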
With dj-database-url, your settings.py will pick up Heroku’s DATABASE_URL automatically. Make sure your database settings look like this:
import dj_database_url

DATABASES = {
    "default": dj_database_url.config(
        conn_max_age=600,
        ssl_require=True,
        engine="django.contrib.gis.db.backends.postgis",
    )
}
Step 4: Media Storage — Beyond Ephemeral Disks
One of the biggest lessons: Heroku’s filesystem is ephemeral. Any uploaded shapefile, GeoTIFF, or dataset disappears after a dyno restart.
The solution? Use external object storage.
Popular options:
- Cloudflare R2 (S3-compatible, zero egress fees).
- AWS S3 (well-established).
- DigitalOcean Spaces (predictable pricing).
For R2, configure django-storages in your settings.py file. The old DEFAULT_FILE_STORAGE setting was deprecated in Django 4.2 and removed in 5.1, so point the "default" entry of STORAGES at the S3 backend instead:
import os

STORAGES["default"] = {
    "BACKEND": "storages.backends.s3boto3.S3Boto3Storage"
}

AWS_STORAGE_BUCKET_NAME = "my-bucket"
AWS_S3_ENDPOINT_URL = "https://<accountid>.r2.cloudflarestorage.com"
AWS_S3_CUSTOM_DOMAIN = "pub-xxxxxx.r2.dev"
AWS_ACCESS_KEY_ID = os.environ.get("AWS_ACCESS_KEY_ID")
AWS_SECRET_ACCESS_KEY = os.environ.get("AWS_SECRET_ACCESS_KEY")
AWS_QUERYSTRING_AUTH = False   # public URLs without signed query strings
AWS_S3_FILE_OVERWRITE = False  # never silently replace an existing file
This ensures uploaded datasets remain safe and publicly accessible.
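To confirm the wiring end to end, here is a quick sketch you could run in a Django shell (assuming the settings above; the file name is arbitrary):

from django.core.files.base import ContentFile
from django.core.files.storage import default_storage

# Saves through the "default" storage backend, i.e. straight to R2
path = default_storage.save("healthcheck.txt", ContentFile(b"hello from Heroku"))
print(default_storage.url(path))  # should print a pub-xxxxxx.r2.dev URL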
Step 5: Config Vars and Security
Set environment variables in Heroku’s dashboard:
- SECRET_KEY = (long random string)
- DEBUG = False
- ALLOWED_HOSTS = yourapp.herokuapp.com, yourdomain.org
- Storage keys (AWS_*) as needed.
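In settings.py, read these values from the environment instead of hard-coding them. A minimal sketch, assuming the variable names above:

import os

SECRET_KEY = os.environ["SECRET_KEY"]  # fail loudly if it is missing
DEBUG = os.environ.get("DEBUG", "False") == "True"
ALLOWED_HOSTS = [h.strip() for h in os.environ.get("ALLOWED_HOSTS", "").split(",") if h.strip()]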
Also, add:
SECURE_SSL_REDIRECT = True
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
Step 6: Deploy via GitHub
You don’t even need the CLI. In the Heroku dashboard:
- Go to Deploy tab.
- Connect your GitHub repository (search for the repository name).
- Select branch → Deploy.
On successful build, migrations will run automatically (via the release: line in your Procfile).
Step 7: Custom Error Pages and Logging
When DEBUG=False, Django hides stack traces. A neat trick is to implement a custom 500 page with an Error ID. Users see a friendly message, and your logs capture the trace for admins to debug.
This avoids exposing sensitive details while keeping errors traceable.
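Here is a minimal sketch of that idea; the view location, URL wiring, and 500.html template are assumptions, not fixed names:

# myproject/views.py
import logging
import uuid

from django.shortcuts import render

logger = logging.getLogger(__name__)

def server_error(request):
    # Short Error ID the user can quote back to you
    error_id = uuid.uuid4().hex[:8]
    logger.error("Server error [%s] on %s", error_id, request.path)
    return render(request, "500.html", {"error_id": error_id}, status=500)

# In your root urls.py:
# handler500 = "myproject.views.server_error"

Django still logs the full traceback through its django.request logger, so you can match the Error ID a user reports against the trace in heroku logs --tail.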
Lessons Learned
- Don’t rely on Heroku’s filesystem. Always use cloud storage for datasets.
- Make file names unique (append UUIDs or timestamps) to prevent overwrites; see the sketch after this list.
- Test with DEBUG=False locally before deploying, so you see the real behavior.
- Error handling matters: a good 500 page saves headaches and keeps users confident.
- Automation helps: letting Heroku run migrate automatically means fewer forgotten steps.
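As a sketch of the unique-file-name lesson, here is an upload_to callable that randomises the name while keeping the extension (the Dataset model and datasets/ prefix are hypothetical):

import os
import uuid

from django.db import models

def unique_upload_path(instance, filename):
    # Keep the original extension, randomise the stem to avoid overwrites
    stem, ext = os.path.splitext(filename)
    return f"datasets/{stem}-{uuid.uuid4().hex[:8]}{ext}"

class Dataset(models.Model):
    name = models.CharField(max_length=200)
    file = models.FileField(upload_to=unique_upload_path)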
Deploying a GeoDjango app to Heroku isn’t just about making code run in the cloud; it’s about bridging local development with real-world usage. With the right buildpacks, database setup, and storage strategy, your geospatial applications can scale beyond your laptop and into the hands of users everywhere and anywhere.
So the next time you find yourself celebrating a working local map app, don’t stop there — package it, push it, and let the world explore it through Heroku.
Kumbirai is a GIS & MEAL specialist using geospatial analytics to advance global health and social impact. A certified Data Protection Officer (DPO), an open-data advocate and self-taught software developer, he builds web GIS tools that turn field data into decisions. He lectures in GIS/Remote Sensing and mentors emerging practitioners. Founder of a geospatial startup and nonprofit, he believes, “Real geospatial innovation happens when we empower communities with the right tools and knowledge.” Open to consulting and collaborations.
