2026, Jan 10 19:00
Troubleshooting Celery workers failing to connect to Redis in Docker: fix requirepass 'auth' errors caused by a 6379/6380 port mismatch
Celery workers failing with Redis AUTH/connection errors in Docker? Likely a port mismatch. Map 6379 correctly and set broker_url and result_backend to resolve.
Moving a Celery stack from a local Redis without auth to a Dockerized Redis secured with requirepass looks straightforward, until workers start throwing authentication and connection errors. The setup below illustrates how a subtle port mismatch can break broker and backend connectivity even when the password is correct and redis-cli confirms access.
Problem setup
The application points Celery’s broker and backend to Redis running in a container and protects it with a password. Password handling is fine, tasks are discovered, yet workers fail to connect.
import os
from celery import Celery
from dotenv import load_dotenv

load_dotenv()

# Redis password (hardcoded for demonstration only)
secret_key = "PASSWORD"
# Debug
print(f"Redis password: {secret_key}")

celery_hub = Celery(
    'connectors',
    broker=f'redis://:{secret_key}@localhost:6380/0',
    backend=f'redis://:{secret_key}@localhost:6380/0',
    include=['connectors.tasks.cricket_tasks']
)
celery_hub.config_from_object('connectors.tasks.celeryconfig')

if __name__ == '__main__':
    celery_hub.start()

The settings module loaded via config_from_object, connectors/tasks/celeryconfig.py, repeats the same URLs:

from celery.schedules import crontab
from dotenv import load_dotenv
load_dotenv()
token_for_redis = "PASSWORD"
# Debug
print(f"config password: {token_for_redis}")
broker_url = f'redis://:{token_for_redis}@localhost:6380/0'
result_backend = f'redis://:{token_for_redis}@localhost:6380/0'
accept_content = ['json']
result_accept_content = ['json']
task_serializer = 'json'
enable_utc = False
timezone = 'Asia/Kolkata'
task_time_limit = 300
task_annotations = {
    '*': {'rate_limit': '20/s'},
    'tasks.add': {'rate_limit': '10/s', 'time_limit': 60},
}

beat_schedule = {
    'run-daily-match-scheduler': {
        'task': 'connectors.tasks.cricket_tasks.run_match_scraper',
        'schedule': crontab(hour=8, minute=0),
    },
    'run-daily-table': {
        'task': 'connectors.tasks.cricket_tasks.schedule_today_table',
        'schedule': crontab(minute=30, hour=23),
    },
    'run-daily-mvp': {
        'task': 'connectors.tasks.cricket_tasks.schedule_today_mvp',
        'schedule': crontab(minute=30, hour=23),
    },
    'run-daily-scorecard': {
        'task': 'connectors.tasks.cricket_tasks.schedule_today_scorecard',
        'schedule': crontab(minute=30, hour=23),
    },
    'run-daily-btb': {
        'task': 'connectors.tasks.cricket_tasks.schedule_today_btb',
        'schedule': crontab(minute=30, hour=23),
    },
}

The docker-compose.yml that runs Redis (alongside a Qdrant service) looks like this:

services:
  qdrant:
    image: qdrant/qdrant:latest
    restart: always
    container_name: qdrant
    ports:
      - 6333:6333
      - 6334:6334
    expose:
      - 6333
      - 6334
      - 6335
    configs:
      - source: qdrant_config
        target: /qdrant/config/production.yaml
    volumes:
      - ./qdrant_data:/qdrant/storage
  redis:
    image: redis:latest
    hostname: redis
    ports:
      - "6379:6380"
    command: ["redis-server", "--requirepass", "PASSWORD"]

configs:
  qdrant_config:
    content: |
      log_level: INFO

Workers start up, print the password, and list tasks, but immediately loop on connection errors:
consumer: Cannot connect to redis://localhost:6379/0: Connection closed by server..
Direct checks inside the container succeed, including AUTH and PING, confirming the password and Redis instance are fine.
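For reference, those in-container checks can be run like this (assuming the Redis container is reachable by the name redis — substitute the actual name from docker ps if your compose project generates a different one — and that the requirepass value is PASSWORD):

```shell
# One-shot check: authenticate and ping in a single command.
docker exec -it redis redis-cli -a PASSWORD ping
# redis-cli warns about passing -a on the command line, then replies PONG

# Or interactively:
docker exec -it redis redis-cli
# 127.0.0.1:6379> AUTH PASSWORD
# OK
# 127.0.0.1:6379> PING
# PONG
```

Note that redis-cli inside the container talks to 127.0.0.1:6379 directly, so it bypasses Docker's port mapping entirely — which is exactly why these checks can succeed while Celery, connecting from the host, fails.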
Why the authentication keeps failing
The issue is not the password; it is the port. Inside the container, Redis listens on its default 6379. The compose file, however, publishes ports as "6379:6380", which in Docker's host:container notation forwards host port 6379 to container port 6380 — a port nothing is listening on. Meanwhile, the Celery configuration points at localhost:6380, a host port that is not published at all. Either way Celery never reaches Redis, and the resulting connection drops look like auth failures or server-side disconnects.
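Because the password, host, and port all travel in one URI, it helps to see exactly what Celery will extract from the broker string. A small stdlib sketch (the URL mirrors the broken setup above):

```python
from urllib.parse import urlsplit

broker_url = 'redis://:PASSWORD@localhost:6380/0'
parts = urlsplit(broker_url)

# Credentials and connection details are parsed from the same string:
print(parts.password)  # PASSWORD  - correct all along
print(parts.hostname)  # localhost
print(parts.port)      # 6380      - the real culprit: no Redis on this host port
print(parts.path)      # /0        - the Redis database number
```

When the port is wrong, authentication never even happens — the TCP connection fails first — but because both pieces live in the same URI, the error is easy to misread as a credentials problem.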
The fix
Align everything on 6379. In the compose file, publish Redis as "6379:6379" so host port 6379 forwards to the port Redis actually listens on inside the container, then point Celery's broker and backend URLs at localhost:6379.
from celery import Celery
from dotenv import load_dotenv

load_dotenv()

redis_secret = "PASSWORD"
print(f"Redis password: {redis_secret}")

celery_node = Celery(
    'connectors',
    broker=f'redis://:{redis_secret}@localhost:6379/0',
    backend=f'redis://:{redis_secret}@localhost:6379/0',
    include=['connectors.tasks.cricket_tasks']
)
celery_node.config_from_object('connectors.tasks.celeryconfig')

if __name__ == '__main__':
    celery_node.start()

If the configuration is loaded from a settings module, bring it in line as well.
from celery.schedules import crontab
from dotenv import load_dotenv
load_dotenv()
redis_key = "PASSWORD"
print(f"config password: {redis_key}")
broker_url = f'redis://:{redis_key}@localhost:6379/0'
result_backend = f'redis://:{redis_key}@localhost:6379/0'
accept_content = ['json']
result_accept_content = ['json']
task_serializer = 'json'
enable_utc = False
timezone = 'Asia/Kolkata'
task_time_limit = 300
task_annotations = {
    '*': {'rate_limit': '20/s'},
    'tasks.add': {'rate_limit': '10/s', 'time_limit': 60},
}

beat_schedule = {
    'run-daily-match-scheduler': {
        'task': 'connectors.tasks.cricket_tasks.run_match_scraper',
        'schedule': crontab(hour=8, minute=0),
    },
    'run-daily-table': {
        'task': 'connectors.tasks.cricket_tasks.schedule_today_table',
        'schedule': crontab(minute=30, hour=23),
    },
    'run-daily-mvp': {
        'task': 'connectors.tasks.cricket_tasks.schedule_today_mvp',
        'schedule': crontab(minute=30, hour=23),
    },
    'run-daily-scorecard': {
        'task': 'connectors.tasks.cricket_tasks.schedule_today_scorecard',
        'schedule': crontab(minute=30, hour=23),
    },
    'run-daily-btb': {
        'task': 'connectors.tasks.cricket_tasks.schedule_today_btb',
        'schedule': crontab(minute=30, hour=23),
    },
}

When orchestrating services, make sure Redis is available before the workers start. Declare the dependency in the worker service's compose entry:
depends_on:
  - redis
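Putting both corrections together, a compose sketch for the Redis side might look as follows. The celery_worker service is hypothetical, shown only to illustrate where depends_on belongs; adapt the build and command lines to your project.

```yaml
services:
  redis:
    image: redis:latest
    hostname: redis
    ports:
      - "6379:6379"   # host:container - both sides now match Redis's default port
    command: ["redis-server", "--requirepass", "PASSWORD"]

  celery_worker:       # hypothetical worker service, for illustration
    build: .
    command: celery -A connectors worker --loglevel=info
    depends_on:
      - redis
```

Note that depends_on only orders container startup; it does not wait for Redis to accept connections. Celery's built-in connection retries usually cover that gap, and a healthcheck with depends_on's service_healthy condition can close it entirely.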
Why this matters
Broker and backend URLs are deceptively simple strings, yet a single digit in the port turns an otherwise healthy Redis into a black box for Celery. Because both authentication and connectivity are encoded in the same URI, mistakes often surface as auth-like failures even when credentials are correct. Getting the port right removes the ambiguity and restores stable task processing.
Conclusion
Keep Celery’s broker_url and result_backend aligned with the actual host port that maps to Redis inside the container. If Redis uses the default 6379 in the container, target the host port that correctly forwards to it, and declare service order with depends_on when needed. With these adjustments, Celery workers will authenticate and connect as expected.
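As a final guard, a worker entry point can fail fast when the broker port is wrong instead of looping on connection errors. A minimal stdlib sketch, with an illustrative URL and timeout:

```python
import socket
from urllib.parse import urlsplit

def broker_reachable(url: str, timeout: float = 3.0) -> bool:
    """Attempt a raw TCP connection to the broker's host:port.

    No AUTH is performed; this only confirms something is listening,
    which is exactly the check a port mismatch fails.
    """
    parts = urlsplit(url)
    host = parts.hostname or 'localhost'
    port = parts.port or 6379  # Redis default when the URL omits a port
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: call before celery_node.start() and abort early on failure, e.g.
#   if not broker_reachable(broker_url):
#       raise SystemExit('Broker unreachable; check your port mappings.')
```

A failed raw connect points immediately at ports and mappings, cutting the misleading "auth" framing out of the debugging loop.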