How do I improve performance when importing a file geodatabase (.gdb) into PostgreSQL?

I have a file geodatabase created with ArcMap 10.1 that contains one feature class with 650 million points. The feature class contains the shape field and an identifier. The .gdb is approximately 26 GB.

I'm using ogr2ogr to import the .gdb into a PostgreSQL database. I started the process and the features are being inserted into the database; I verified this with a SELECT COUNT(*) query, and the number of rows is increasing.
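For reference, the command I'm running is roughly the following, and I'm checking progress with a simple count. The paths, layer name, table name, and connection details shown here are simplified placeholders rather than my exact values:

    ogr2ogr -progress -f "PostgreSQL" PG:"host=localhost dbname=gisdb user=postgres" points.gdb my_feature_class

    -- re-run periodically to watch the row count grow
    SELECT COUNT(*) FROM my_feature_class;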

Based on my rudimentary approach to timing (watching the PC clock and re-running the SELECT above), I estimate the import is progressing at about 7 million features per hour, or roughly 2,000 features per second. Some quick mental math puts the full import at 90+ hours, assuming the rate holds and doesn't degrade over time.
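Spelled out, the estimate is:

    650,000,000 features ÷ 7,000,000 features/hour ≈ 93 hours ≈ 3.9 days
    7,000,000 features/hour ÷ 3,600 seconds/hour ≈ 1,944 features/second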

The database is an out-of-the-box PostgreSQL 9.4.5 / PostGIS 2.2 installation on a Windows 10 PC with an SSD (400 GB free of 512 GB) and 16 GB RAM. No PostgreSQL configuration changes have been made yet, as I'm not sure what, if anything, I should do.

Is there anything I can do to increase the performance of the .gdb import? Perhaps a recommendation for settings that maximize write throughput, which I can then roll back after the imports finish? I could let this process run to completion, but there will be other imports I need to perform, and I'd rather not wait four days for each one. I have full control of this PC, so I'll cautiously say I'm open to any suggestions.
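To make the question concrete, this is the kind of temporary postgresql.conf change I have in mind based on general reading about bulk loads. The specific settings and values below are untested guesses on my part, not something I'm asserting will work:

    # candidate bulk-load settings (PostgreSQL 9.4) - untested guesses, to be reverted after the import
    shared_buffers = 4GB           # default is 128MB; give the server more of the 16GB RAM
    maintenance_work_mem = 1GB     # helps any index creation after the load
    checkpoint_segments = 64       # fewer, larger checkpoints during the load (a 9.4-era setting)
    synchronous_commit = off       # don't wait for the WAL flush on every commit
    full_page_writes = off         # risky; only if the import can simply be re-run after a crash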


