From: Tony Magtalas (ttk67@yahoo.com)
Date: Fri Jul 07 2006 - 14:41:21 EDT
I have a project to copy a set of directories from a 2 TB file system
into a few other, smaller file systems on the same local Sun Solaris 2.9 server. There are hundreds of subdirectories underneath, as well as symbolic links pointing everywhere.
This is a fairly secure intranet environment (ssh only), but I managed to get
rdist working.
I am thinking of using rdist (i.e. rdist -c sourcedir server:/destination)
because it can follow symbolic links and preserve dates, times, and ownership, but I was
told it is slow and could take a couple of days to copy a 2 TB file system.
My friend suggested using one of:
1. ufsdump 0f - /source | (cd /destination; ufsrestore -xf -)
2. (cd /source; tar cf - .) | (cd /destination; tar xf -)
3. rsync -au /source/ /destination/   (with -e ssh and -z if copying to a remote host)
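For what it's worth, the pipe-based options only work if each cd runs in a subshell, so the parentheses matter: without them, the cd on the right-hand side of the pipe consumes the stream and the extract never runs. A minimal sketch of option 2 (the tar pipe), demonstrated here on throwaway temp directories rather than the real 2 TB tree so it can be tried safely anywhere:

```shell
# Stand-in source and destination trees (placeholders for the real ones).
src=$(mktemp -d)
dst=$(mktemp -d)
echo "hello" > "$src/file.txt"
mkdir -p "$src/sub"
ln -s ../file.txt "$src/sub/link"   # a relative symlink, to check it survives

# The tar pipe: tar-create runs in a subshell cd'ed to the source,
# tar-extract runs in a subshell cd'ed to the destination.
# Symlinks are archived as links, and permissions/timestamps are preserved.
(cd "$src" && tar cf - .) | (cd "$dst" && tar xf -)

cat "$dst/file.txt"
ls -l "$dst/sub"
```

Note that plain tar stores the link itself rather than following it; if you actually want the link targets copied as files, that is a different behavior (rdist and rsync have options for either).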
I would also like to solicit the collective opinion of the list.
What do you think is the best way, in terms of robustness and speed?
I don't really want to spend two full days copying a 2 TB file system.
Please advise. I will summarize if there is enough interest.
Thanks,
Tony
_______________________________________________
sunmanagers mailing list
sunmanagers@sunmanagers.org
http://www.sunmanagers.org/mailman/listinfo/sunmanagers
This archive was generated by hypermail 2.1.7 : Wed Apr 09 2008 - 23:40:21 EDT