Re: Ideas for shared backup server
- From: gordon@xxxxxxxxxxxxxxx (Gordon Henderson)
- Date: Sat, 28 Aug 2010 13:43:22 +0000 (UTC)
In article <75502225.QQBjFxQff4@xxxxxx>, alexd <troffasky@xxxxxxxxxxx> wrote:
I have a server [Ubuntu 10.04] that will be taking backups from several
other servers [CentOS, RHEL, Ubuntu] using rsnapshot. The 'client' servers
that it will be pulling backups from aren't necessarily trusted or under my
control. There are a couple of things I'm interested in:
Here's an interesting scenario: Your "untrusted" client has some encrypted
data on their server. You back it up to your server. Client gets busted,
you get busted and you can't provide the keys to the encrypted data. RIP
says go to jail, do not pass go, etc.
Or it's > level 0 pron...
But let's assume it's all above board... ;-)
- Using transparent compression for the storage volume [1Tb SATA for now,
mounted on /home/backup]. The link from the backup server to the clients is
~10Mbps so compression is unlikely to be a bottleneck unless it's
horrifically inefficient. Right now it's ext4 but the data set is only a few
gig at the moment so if anyone can think of a compelling reason, I can
juggle the data around and reformat /home/backup to something else. ZFS
[zfs-fuse] looks interesting because compression and deduplication are
touted features amongst other clever stuff; currently it's not recommended
for production use, but presumably this will change.
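For the zfs-fuse route, compression (and dedup) are per-dataset properties, so turning them on for the backup volume is a one-liner each. A sketch with hypothetical pool and device names, assuming a spare disk and a running zfs-fuse daemon (not something to paste blindly):

```shell
# Hypothetical pool/device names -- adjust for your hardware.
zpool create backuppool /dev/sdb                   # the 1Tb SATA disk
zfs create backuppool/backup
zfs set compression=on backuppool/backup           # lzjb; cheap on CPU
zfs set dedup=on backuppool/backup                 # note: dedup table eats RAM
zfs set mountpoint=/home/backup backuppool/backup
```

Worth noting that dedup keeps its table in RAM, which can hurt on a modest backup box; compression alone is the safer first step.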
- Sharing out /home/backup read-only to all of the clients, but only
allowing them access to their own files. Given that the machines aren't
trusted, I think this rules out NFS - or does it? Is there a way that
root@box1 can read /home/backup/daily.0/box1/ but not
/home/backup/daily.0/box2/, and root@backup can see everything? The other
stumbling block is ownership; rsnapshot preserves the numeric UIDs, but when
sharing them back out over NFS, they appear to belong to nobody/nogroup if
If it's a backup, why are you sharing it back to them?
FWIW: I do this for my own hosted servers and a few trusted clients too.
I use ext3, I'm holding a few TB of remote data. Personally, I'd not
bother with anything else unless you have a real compelling reason for
it. You need the backup to be stable more than anything else. Worrying
about the filesystem is the last thing I want on my mind. When I need
more than 8TB, I feel that ext4 will be mature enough.
Clients whose servers go "phut" are offline until I provide them with a
new server, pre-loaded with their data. I can do this from the backup
- they know this and accept the risk and the down-time. Would it be
any quicker with a more traditional form? Well, no - unless they have
on-site redundant servers. (Which my clients don't have because they're
cheap-skates, but I do have in remote data centres where I host)
Accidental deletion is catered for with a 'backup' on the local server.
It's 'backup' in quotes as it's not really a backup, but using rsync and
cp -al, I can keep several days worth on the local server which gives
them accidental deletion protection without bothering me about it. I
actually pull the remote backups from that day's rsync local snapshot.
(I build all servers with double the anticipated disk space and partition
half of it for the local backup)
It's not perfect, but it's workable.
For your NFS scenario - if you do decide to share the data back, yes
you can limit hierarchies to named (IP address) clients. Look for the
syntax of the /etc/exports file. You'd typically export each hierarchy
separately, so it might look like:
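A sketch of such an exports file, with illustrative host names (each client sees only its own tree, read-only; root@backup sees everything locally anyway):

```
# /etc/exports on the backup server -- host names are illustrative
/home/backup/daily.0/box1  box1.example.com(ro,no_subtree_check)
/home/backup/daily.0/box2  box2.example.com(ro,no_subtree_check)
```

After editing, `exportfs -ra` reloads the table. The default root_squash is what maps remote root to nobody; whether to relax it for read-only access to their own tree is a judgement call.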