Re: [jgit-dev] Dependency on java.io.File
On Wed, Mar 28, 2012 at 16:50, Carsten Klein wrote:
> I am the owner of a small company that is currently developing an in-house
> solution for keeping track of our projects, which also includes Git/JGit.
> The requirements for our custom solution are such that we must abstract from
> java.io.File, so that we can redirect file-based access to whatever type
> of storage we want. This basically means that we would like to use both
> the bare repositories, which represent "simple" object storages,

Don't use the storage.file backend. Use storage.dfs. It's an abstract
backend that doesn't care about the actual implementation used to
store files. It is also more flexible in the type of backend it will
accept; it is far more relaxed than the POSIX filesystem that
storage.file requires for correct operation.
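To illustrate why such a backend is easier to satisfy, here is a hypothetical sketch of the kind of contract a DFS-style store has to meet: write-once, read-many blobs addressed by name, with no POSIX rename, locking, or directory semantics. The `BlobStore` interface and all names below are illustrative assumptions, not JGit's actual DfsRepository API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical contract a DFS-style backend needs: named, write-once
// blobs (think pack files). No atomic rename, no locking, no POSIX.
interface BlobStore {
    void put(String name, byte[] data);   // write once
    byte[] get(String name);              // read many
}

// Trivial in-memory implementation; a Hadoop-like filesystem could
// satisfy the same contract just as easily.
class InMemoryBlobStore implements BlobStore {
    private final Map<String, byte[]> blobs = new HashMap<>();

    @Override
    public void put(String name, byte[] data) {
        if (blobs.containsKey(name))
            throw new IllegalStateException("blobs are write-once: " + name);
        blobs.put(name, data.clone());
    }

    @Override
    public byte[] get(String name) {
        return blobs.get(name);
    }
}

public class DfsContractDemo {
    public static void main(String[] args) {
        BlobStore store = new InMemoryBlobStore();
        store.put("pack-1234.pack", new byte[] { 1, 2, 3 });
        System.out.println(store.get("pack-1234.pack").length); // prints 3
    }
}
```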
> and the
> working trees in our custom solution.

Working trees are a challenge. :-(
Most work tree changes are supposed to be handled by the
WorkingTreeIterator, which is abstract. But it might not handle
everything.

> As JGit makes heavy use of the java.io.File API, this presents a big
> problem to us.
> Our proposal would be to delegate the creation of derived classes of
> java.io.File to the org.eclipse.jgit.util.FS abstraction layer, using
> multiple instance-level factory methods, e.g.
> FS#createNewFileInstance(...) and FS#createTempFile(...), with
> org.eclipse.jgit.util.FS providing for and using the standard java.io.File.
> That way, the FS implementation can be exchanged, providing a custom
> java.io.File implementation in favor of the existing one.

This idea to make FS an abstract factory for various file operations
might be the only sane way out for the working tree.
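The proposed factory could look roughly like the following sketch. Only the FS#createNewFileInstance(...) and FS#createTempFile(...) names come from the proposal above; the default implementation and everything else here are assumptions for illustration, not JGit's actual FS class.

```java
import java.io.File;
import java.io.IOException;

// Hypothetical sketch of FS as an abstract factory for file operations.
abstract class FS {
    /** Create a File (or a subclass) for the given path. */
    abstract File createNewFileInstance(String path);

    /** Create a temporary file in the given directory. */
    abstract File createTempFile(File dir, String prefix) throws IOException;

    /** Default implementation backed by plain java.io.File. */
    static final FS DETECTED = new FS() {
        @Override
        File createNewFileInstance(String path) {
            return new File(path);
        }

        @Override
        File createTempFile(File dir, String prefix) throws IOException {
            return File.createTempFile(prefix, null, dir);
        }
    };
}

public class FsFactoryDemo {
    public static void main(String[] args) {
        // Callers only ever see java.io.File; a custom FS subclass could
        // return a File subclass that redirects I/O elsewhere.
        File f = FS.DETECTED.createNewFileInstance("/tmp/example");
        System.out.println(f.getPath());
    }
}
```

A custom storage layer would then subclass FS once and hand it to JGit, instead of patching every call site that constructs a File directly.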
> In addition, the existing public APIs would not be changed except for a
> few minor changes to the InitCommand and the CloneCommand, where an
> additional setBuilder() method has to be introduced in order to "inject" a
> builder and the filesystem abstraction layer to be used.

I'm not so sure it's as simple as that. A number of public APIs are
probably going to be impacted to make sure an FS is supplied. It might
be possible to do this as a non-breaking API change, where FS.DETECTED is
used in the "old" method that doesn't take an FS and doesn't have a
Repository argument to assume the FS from. Not sure how many of those
there might be.
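A non-breaking change of that shape could be sketched like this, with hypothetical names standing in for the real JGit API: the old signature stays and simply forwards to a new overload that takes an FS, passing FS.DETECTED as the default.

```java
import java.io.File;

// Minimal stand-in for JGit's FS abstraction; names are illustrative.
class FS {
    static final FS DETECTED = new FS("detected");
    final String name;
    FS(String name) { this.name = name; }
}

class RepositoryCache {
    // "Old" public method: unchanged signature, assumes the detected FS,
    // so existing callers keep compiling and behaving the same way.
    static String open(File gitDir) {
        return open(gitDir, FS.DETECTED);
    }

    // "New" overload: callers that need a custom FS supply one explicitly.
    static String open(File gitDir, FS fs) {
        return gitDir.getName() + " via " + fs.name;
    }
}

public class OverloadDemo {
    public static void main(String[] args) {
        System.out.println(RepositoryCache.open(new File("repo.git")));
        System.out.println(RepositoryCache.open(new File("repo.git"), new FS("custom")));
    }
}
```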
> This can be achieved in a backwards-compatible way, so that existing code
> does not have to be changed.
> We are currently working on a patch, involving backwards-compatible
> changes to the public and internal APIs of the system. However, we would
> like to have some input first. What is your opinion on that; would this be
> a worthwhile effort, especially considering getting it merged into the
> upstream repository someday?

I'm not sure the storage.file backend should be divorced from the
java.io.File API. But the working tree operations should be. EGit
should be using the Eclipse Resource APIs to make changes in the
workspace when the file is mapped into the workspace, and I think it
already tries to do that in many contexts.
For non-local file storage, storage.dfs is really a better backend.
Really. storage.dht might also be suitable; I don't know what you are
trying to move onto by abstracting away java.io. But if it's anything
like a Hadoop storage system, storage.dfs is what you want.