2025, Nov 21 03:00

Overpass API memory errors with [date] in OSMnx: reproducible case, explanation, and the fix

Learn why Overpass API queries with the [date] parameter in OSMnx hit memory limits and how to fix them by increasing overpass_memory. Includes repro and tips.

When you add a historical snapshot to an Overpass API query via OSMnx, a seemingly harmless setting can push the server over its memory limit. If you are filtering by tags like highway=residential and the request consistently fails only when you include a [date] parameter, you are running into how Overpass evaluates dated queries, not into an OSMnx bug.

Repro case

The following snippet requests residential roads inside a WKT polygon at a specific point in time. It uses a longer timeout and verbose logging. With the [date] added to overpass_settings, the query can trigger a “run out of memory” server remark; without it, the same call completes normally.

import osmnx as ox
import shapely.wkt
import datetime

# Timestamp of the historical snapshot to request from Overpass.
snap_ts = datetime.datetime.fromisoformat("2023-12-31T10:15:23.355030")
wkt_area = "POLYGON ((12.492709372340677 41.916655635027965, 12.495766040251999 41.99143760629819, 12.76378053852936 41.984419025131984, 12.754692779733519 41.78305304410847, 12.487561402159495 41.79004473805951, 12.48453188956227 41.715248372182295, 12.21767031884794 41.72151888179064, 12.224974122071886 41.92295004575664, 12.492709372340677 41.916655635027965))"
ox.settings.requests_timeout = 200
ox.settings.use_cache = False
ox.settings.log_console = True
# The [date:"..."] header asks Overpass for the state of the data at that timestamp.
ox.settings.overpass_settings = f"[out:json][timeout:{ox.settings.requests_timeout}][date:\"{snap_ts}\"]"
poly_geom = shapely.wkt.loads(wkt_area)
kvs = {"highway": "residential"}
rows = ox.features_from_polygon(poly_geom, kvs)
print(len(rows))

If you remove the line that sets overpass_settings, the request completes, which points directly to the [date] filter as the trigger.
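As a quick control, you can run the same call with OSMnx's default Overpass settings, which contain no [date] header. The snippet below continues from the repro script above and resets overpass_settings to the default template; that template string is what recent OSMnx versions ship with, so treat it as an assumption about your installed version.

# Control case: same polygon and tags, but with the default Overpass settings
# template (no [date]). The {timeout} and {maxsize} placeholders are filled in
# by OSMnx from requests_timeout and overpass_memory at query time.
ox.settings.overpass_settings = "[out:json][timeout:{timeout}]{maxsize}"
rows_now = ox.features_from_polygon(poly_geom, kvs)  # completes normally
print(len(rows_now))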

Why this fails with [date]

The Overpass back-end changes its evaluation strategy when a date filter is present. With [date], the data has to be reconstructed from previous and current object versions to match the requested point in time; without it, only the current object versions are scanned, which keeps the data volume much smaller. That reconstruction pushes memory consumption up significantly on the standard server.

For comparison, the same dated queries have been run on another Overpass implementation that includes performance improvements: there, both completed in a second or two within a 512 MB maxsize. On the standard public server, however, dated evaluation remains far more memory-hungry.

In short, the memory exhaustion originates from the Overpass API side when [date] is present.
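If you want to confirm this independently of OSMnx, you can send an equivalent dated query straight to the public Overpass endpoint. This is a minimal sketch, not the exact request OSMnx builds: it swaps the WKT polygon for an approximate bounding box, and the endpoint URL and timestamp values are illustrative assumptions; a failing run usually reports the memory problem in the response's remark field.

import requests

# Send a dated query directly to the public Overpass endpoint, bypassing OSMnx.
# The bounding box roughly covers the WKT polygon used above (illustrative values).
OVERPASS_URL = "https://overpass-api.de/api/interpreter"
query = """
[out:json][timeout:200][date:"2023-12-31T10:15:23Z"];
way["highway"="residential"](41.715,12.217,41.992,12.764);
out geom;
"""
resp = requests.post(OVERPASS_URL, data={"data": query})
payload = resp.json()
# Runtime errors such as memory exhaustion show up in the "remark" field.
print(payload.get("remark", f"{len(payload.get('elements', []))} elements returned"))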

The fix: give the server more memory

OSMnx exposes the Overpass memory cap via ox.settings.overpass_memory, which is specified in bytes and is written into the query's [maxsize:...] setting. Increasing it allows the server to execute historical queries that would otherwise fail. Because this example overrides overpass_settings, the string keeps a {maxsize} placeholder so OSMnx can substitute the configured value. After raising the limit, the same request succeeds, although it may take noticeably longer to complete.

import osmnx as ox
import shapely.wkt
import datetime

snap_ts = datetime.datetime.fromisoformat("2023-12-31T10:15:23.355030")
wkt_area = "POLYGON ((12.492709372340677 41.916655635027965, 12.495766040251999 41.99143760629819, 12.76378053852936 41.984419025131984, 12.754692779733519 41.78305304410847, 12.487561402159495 41.79004473805951, 12.48453188956227 41.715248372182295, 12.21767031884794 41.72151888179064, 12.224974122071886 41.92295004575664, 12.492709372340677 41.916655635027965))"
ox.settings.use_cache = False
ox.settings.log_console = True
ox.settings.requests_timeout = 200
# Overpass memory allocation for this query, in bytes (about 3 GB).
ox.settings.overpass_memory = 3 * 1024**3
# Keep a {maxsize} placeholder so OSMnx can inject [maxsize:...] from overpass_memory.
ox.settings.overpass_settings = f"[out:json][timeout:{ox.settings.requests_timeout}]{{maxsize}}[date:\"{snap_ts}\"]"
poly_geom = shapely.wkt.loads(wkt_area)
kvs = {"highway": "residential"}
rows = ox.features_from_polygon(poly_geom, kvs)
print(len(rows))

This directly addresses the server-side remark “Query run out of memory using about 2048 MB of RAM.” With the higher allocation, the same historical query completes, though the runtime can be considerably longer than for the non-dated version.

Why it matters

Historical queries are a common need for reproducibility, audits, and time-aware analytics. Knowing that [date] flips Overpass into a different, more memory-hungry evaluation mode helps you interpret failures correctly. Adjusting the memory limit is the practical lever OSMnx exposes for this. Other tuning attempts, such as only raising requests_timeout or shrinking max_query_area_size, may not help here, because the bottleneck is memory rather than request duration or query splitting.
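For reference, here is how those knobs look side by side in code; the values are illustrative, and only the memory setting targets the failure discussed here.

import osmnx as ox

# Settings that do NOT fix the dated-query memory error:
ox.settings.requests_timeout = 200        # HTTP/query timeout in seconds
ox.settings.max_query_area_size = 2.5e9   # max area in m^2 before OSMnx subdivides a query

# Setting that does: per-query Overpass memory allocation, in bytes (~3 GB here).
ox.settings.overpass_memory = 3 * 1024**3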

Takeaways

If you need a snapshot-in-time result, keep the [date] parameter but raise ox.settings.overpass_memory until the server can handle your query, and expect longer runtimes. If you do not actually need historical state, omit [date] to stay within the default memory footprint. Keep ox.settings.log_console enabled while iterating so you can see Overpass’s server remarks immediately and confirm you are hitting a memory limit rather than something else. With these adjustments, the same OSMnx workflow remains intact, and you can reliably fetch features either for the present or for a specific timestamp.
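Finally, a small sketch of the logging setup that makes those server remarks easy to catch while you iterate; writing logs to disk with log_file and logs_folder is optional and shown here as one possible configuration.

import osmnx as ox

# Echo OSMnx log messages, including Overpass server remarks, to the console...
ox.settings.log_console = True
# ...and optionally also write them to a log file under ./logs for later review.
ox.settings.log_file = True
ox.settings.logs_folder = "./logs"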