While trying to run osm2ch on large datasets like europe-latest.osm.pbf, the process keeps getting killed after some time. Even on a large cloud-based machine, resource usage quickly hits its maximum and the OS eventually kills the process. Are there any performance benchmarks available? Are there any recommendations for processing large files? Is there a way to limit memory usage and work with a mix of disk, RAM and CPU instead?
Nice question there.
I think it is not possible to process really big files due to the nature of the ch library (plus it has an open issue regarding performance).
Complex graph topology could be a problem too, since a lot of shortcuts may be created (up to 2x the number of edges).
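To illustrate why shortcuts inflate the edge count: when a vertex is contracted, every (predecessor, successor) pair may receive a shortcut unless a cheaper "witness" path bypassing that vertex already exists. A minimal sketch below, with hypothetical types that are not the actual ch library API:

```go
package sketch

// Hypothetical types for illustration; not the actual ch library API.
type arc struct {
	other int64   // neighbor vertex id
	cost  float64 // travel cost of the arc
}

type graph struct {
	in  map[int64][]arc // incoming arcs per vertex
	out map[int64][]arc // outgoing arcs per vertex
}

type shortcut struct {
	from, to int64
	cost     float64
}

// hasWitness stands in for a local Dijkstra (witness search) from u to w
// that avoids v and is bounded by maxCost. Stubbed here for brevity.
func (g *graph) hasWitness(u, v, w int64, maxCost float64) bool {
	// ... bounded local Dijkstra skipping v ...
	return false
}

// contract returns the shortcuts created when v is removed from the
// search space. Worst case is len(in) * len(out) new edges per vertex,
// which is why the total edge count can grow substantially on dense graphs.
func (g *graph) contract(v int64) []shortcut {
	var res []shortcut
	for _, u := range g.in[v] {
		for _, w := range g.out[v] {
			via := u.cost + w.cost
			if !g.hasWitness(u.other, v, w.other, via) {
				// no cheaper path bypasses v: add a shortcut to preserve distances
				res = append(res, shortcut{from: u.other, to: w.other, cost: via})
			}
		}
	}
	return res
}
```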
I think the steps to improve this tool are:
Reach max performance in the ch library when it generates shortcuts:
Better heuristics
Threaded batch processing
Something else, like preallocating priority queue memory for the local Dijkstra runs (witness search), as in the first sketch after this list
Need to do some parallelism on file loading (but be careful, since the graph data structure is not thread-safe for inserting vertices and edges), see the second sketch below
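On the preallocation point: a reusable binary heap whose backing slice is allocated once and only truncated between searches avoids a fresh allocation on every contraction. A minimal sketch using container/heap, with hypothetical names rather than the ch library's internals:

```go
package sketch

import "container/heap"

type queueItem struct {
	vertex int64
	dist   float64
}

// minQueue implements heap.Interface over a plain slice so the backing
// array can be preallocated once and reused across witness searches.
type minQueue []queueItem

func (q minQueue) Len() int           { return len(q) }
func (q minQueue) Less(i, j int) bool { return q[i].dist < q[j].dist }
func (q minQueue) Swap(i, j int)      { q[i], q[j] = q[j], q[i] }

func (q *minQueue) Push(x interface{}) { *q = append(*q, x.(queueItem)) }

func (q *minQueue) Pop() interface{} {
	old := *q
	item := old[len(old)-1]
	*q = old[:len(old)-1]
	return item
}

// newReusableQueue allocates capacity once up front.
func newReusableQueue(capacityHint int) *minQueue {
	q := make(minQueue, 0, capacityHint)
	return &q
}

// reset truncates without freeing the backing array, so the next
// witness search reuses the same memory instead of reallocating.
func (q *minQueue) reset() { *q = (*q)[:0] }

// witnessSearchSkeleton shows the reuse pattern; edge relaxation is omitted.
func witnessSearchSkeleton(q *minQueue, source int64) {
	q.reset()
	heap.Push(q, queueItem{vertex: source, dist: 0})
	for q.Len() > 0 {
		_ = heap.Pop(q).(queueItem)
		// ... relax outgoing edges, heap.Push neighbors, stop at the cost bound ...
	}
}
```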
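And for parallel loading: since the graph is not safe for concurrent inserts, one workable pattern is to decode blocks of the .osm.pbf in parallel and funnel the results through a channel to a single goroutine that owns the graph. A sketch under that assumption (names are hypothetical, not the osm2ch API):

```go
package sketch

import "sync"

// rawEdge is what a worker emits after decoding one block of the PBF file.
type rawEdge struct {
	from, to int64
	cost     float64
}

// buildAdjacency decodes blocks concurrently but mutates the adjacency
// map from exactly one goroutine, so the non-thread-safe structure is
// never written in parallel. parse is the (hypothetical) block decoder.
func buildAdjacency(blocks [][]byte, parse func([]byte) []rawEdge) map[int64][]rawEdge {
	edges := make(chan rawEdge, 1024)
	var wg sync.WaitGroup

	// fan out: CPU-bound decoding runs in parallel
	for _, blk := range blocks {
		wg.Add(1)
		go func(b []byte) {
			defer wg.Done()
			for _, e := range parse(b) {
				edges <- e
			}
		}(blk)
	}

	// close the channel once every worker is done
	go func() {
		wg.Wait()
		close(edges)
	}()

	// fan in: the single consumer is the only writer to the graph
	adj := make(map[int64][]rawEdge)
	for e := range edges {
		adj[e.from] = append(adj[e.from], e)
	}
	return adj
}
```

Keeping all mutation in one goroutine sidesteps locking entirely; a mutex around each insert would also work but tends to serialize the hot path anyway.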
LdDl changed the title from [DOCUMENTATION] to [DOCUMENTATION] Library restrictions on Feb 21, 2022