Easiest way to replicate (copy? Export and import?) a large, rarely changing PostgreSQL database
I have imported 200 GB of census data into a PostgreSQL 9.3 database on a Windows 7 box. The import process involves many files and has been complex and time-consuming. I'm using the database as a convenient container. The existing data will rarely, if ever, change, and I'll be updating the external data at most once a quarter (though I'll be adding and modifying intermediate result columns on a more frequent basis). I'll call the database on my desktop the "master." All queries come from that same machine, not from remote terminals.
I want to put copies of the data on three other machines: two laptops, one running Windows 7 and one running Windows 8, and on an Ubuntu virtual machine on the Windows 7 desktop as well. I have installed copies of PostgreSQL 9.3 on each of these machines, currently empty of data. I need to be able to do both reads and writes on the copies. It is OK, and indeed preferable, if changes in the daughter databases do not propagate backwards to the primary database on the desktop. I'd want to update the daughters from the master one to four times a year. If that wiped out intermediate results on the daughter databases, it would not bother me.
Most of the replication techniques I have read about seem concerned with transaction-by-transaction replication of a live and changing server, and with keeping a perfect history of queries and changes. That is overkill for me. Is there a way to replicate by copying files from one PostgreSQL instance to another? (If "replication" is the name of a specific form of copying, then I'm trying to ask the more generic question.) Or perhaps by restoring each (empty) instance from a backup file of the master? Or by asking PostgreSQL to create and export (ideally onto an external hard drive) some kind of PostgreSQL binary of the data that another instance of PostgreSQL can import, without having to define the tables, data types, and so forth again?
This question is motivated by a desire to work around my home wifi/LAN setup, which is slow: a tenth or less of the speed of file copies to an external hard drive. If there is a straightforward way to move the imported data from one machine to another by transferring (ideally compressed) binary files, that would work best for my situation.
While you could perhaps copy the data directory directly, as mentioned by Nick Barnes in the comments above, I would recommend using a combination of pg_dump and pg_restore, which dump to a self-contained file (or directory) that can then be dispersed to the other copies.
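For completeness, here is a minimal sketch of that file-level alternative. It assumes the source and target run the same PostgreSQL major version on the same OS and architecture (so it would not work between the Windows machines and the Ubuntu VM), and the paths shown are hypothetical:

    # Stop the master cluster cleanly so the files are in a consistent state:
    pg_ctl stop -D "C:/Program Files/PostgreSQL/9.3/data"
    # Copy the entire data directory to the external drive (Explorer,
    # robocopy, cp -r, whatever you prefer), then on the target machine
    # point a 9.3 server at the copied directory:
    pg_ctl start -D "E:/pgdata_copy"

Note that this copies the whole cluster rather than a single database, and file permissions on Windows can complicate it; the dump/restore route below is more forgiving.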
You can run pg_dump on the master to get a dump of the DB. I would recommend the options -Fd -j3
to use the directory archive format (instead of dumping plain SQL; the archive is compressed by default, so it should be smaller, and the dump will likely be faster as well) and to dump three tables at once (the -j number can be adjusted up or down depending on the disk throughput of the machine and the number of cores it has). Note that pg_dump only supports parallel dumps (-j) with the directory format; if you would rather have a single self-contained file, use the custom format -Fc and drop the -j option.
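As a concrete example (the database name census, the user postgres, and the drive letter E: are placeholders for your own values):

    # Parallel dump of the "census" database in directory format,
    # three tables at a time, written straight to the external drive:
    pg_dump -U postgres -Fd -j3 -f E:/census_dump census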
Then run dropdb on each copy, createdb to recreate an empty DB of the same name, and pg_restore against the new empty DB to restore the dump. You will want the options -d <dbname> -j3, with the dump path given as a plain argument (be careful not to pass it with -f, which pg_restore interprets as an output file), again adjusting the -j number to the abilities of the machine.
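Putting the restore side together on one of the copies, again with placeholder names:

    # Drop the stale copy, recreate it empty, then restore in parallel
    # from the dump directory on the external drive:
    dropdb -U postgres census
    createdb -U postgres census
    pg_restore -U postgres -d census -j3 E:/census_dump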
When you want to refresh the copies with new content from the master DB, simply repeat the above steps.