tlug.jp Mailing List Archive
Re: [tlug] distributed file systems
- Date: Mon, 15 Feb 2010 18:17:18 +0900
- From: Edmund Edgar <lists@example.com>
- Subject: Re: [tlug] distributed file systems
- References: <4d3714b51002141821r1b903f03j7a567122720e9c15@example.com> <20100215035353.GJ24817@example.com> <4d3714b51002142226p338664a8l34a9d918eea2d9bd@example.com> <20100215072802.GB25775@example.com> <4d3714b51002150048j5a4b62dfx3ecca5ba5c85a2e6@example.com>
Sort of sounds like, apart from your 50 text files, you could just keep the uploaded files in one place and run a caching proxy in each of your remote locations (or use a CDN, and rely on the fact that they already run a caching proxy in each of your remote locations).

Failing that, if you really have to be able to upload from multiple locations to make the uploads faster, do you have the option of keeping the files in separate namespaces, so that for any given file you know the master lives in location X and the slaves / cached copies live in locations Y and Z? If so, you could still do something along the lines of master/slaves (one filesystem plus caching proxies, or one filesystem plus a CDN).

Then do something else for your 50 or so pseudo-database-y text files (version control, "click here to rsync", or whatever).

Edmund Edgar
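The "separate namespaces" idea above can be sketched as a small routing table: each path prefix maps to the location holding the master copy, so a site accepts uploads only for paths it masters and serves cached copies of everything else. This is just a minimal illustration, not anything from the thread; the location names and prefixes below are hypothetical.

```python
# Hypothetical per-location master map: the path prefix tells you which
# site holds the writable master copy; every other site only caches.
MASTERS = {
    "tokyo/": "fs-tokyo.example.com",
    "london/": "fs-london.example.com",
    "nyc/": "fs-nyc.example.com",
}


def master_for(path: str) -> str:
    """Return the host that holds the master copy of `path`."""
    for prefix, host in MASTERS.items():
        if path.startswith(prefix):
            return host
    raise KeyError(f"no master location defined for {path!r}")


def is_master_here(path: str, local_host: str) -> bool:
    """True if this site should accept uploads (writes) for `path`;
    otherwise the file is read-only here and served from cache."""
    return master_for(path) == local_host
```

With a scheme like this, uploads stay fast (each site writes locally to the namespace it masters) while reads everywhere else go through the caching proxy or CDN layer.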
- References:
- [tlug] distributed file systems
- From: Sach Jobb
- Re: [tlug] distributed file systems
- From: Curt Sampson
- Re: [tlug] distributed file systems
- From: Sach Jobb
- Re: [tlug] distributed file systems
- From: Curt Sampson
- Re: [tlug] distributed file systems
- From: Sach Jobb