My own insane idea: make JGit aware of content containers other than the filesystem. I've brought it up before, but probably not articulated it tremendously well (or it's considered niche, which is probably a fair assessment).
There are already repository backends that don't use the filesystem; what I'd be interested in is a working tree implementation that isn't based on the filesystem. The idea is to provide version control services for systems using key/value stores (or, at a stretch, an RDBMS, though I reckon that's a lot harder to do), and ultimately for systems storing domain models in such storage.
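To make the idea a bit more concrete, here's a rough sketch of the sort of thing I have in mind: a working tree backed by a sorted key/value map instead of directories and files, which a tree iterator could walk in place of java.io.File. To be clear, every name below is hypothetical for illustration; none of these are existing JGit types or APIs.

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Hypothetical sketch only: a "working tree" backed by a key/value
// store rather than the filesystem. KvWorkingTree is not a real
// JGit class; it just shows the shape of the abstraction.
public class KvWorkingTree {
    // Keys are slash-separated paths; values are entry contents.
    private final SortedMap<String, byte[]> store = new TreeMap<>();

    public void put(String path, byte[] content) {
        store.put(path, content);
    }

    public byte[] get(String path) {
        return store.get(path);
    }

    // Enumerate entries under a prefix, much as a tree iterator
    // walks a directory. TreeMap keeps keys sorted, which roughly
    // matches Git's canonical tree ordering.
    public SortedMap<String, byte[]> list(String prefix) {
        // "\uffff" is an upper bound above any ordinary path char,
        // so this returns every key starting with the prefix.
        return store.subMap(prefix, prefix + "\uffff");
    }
}
```

A filesystem-free tree iterator built over something like this is what would let JGit diff and check out content that lives in a key/value store rather than on disk.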
CDO already provides revision control for domain models, but as far as I can make out its implementation is more like CVS in terms of design.
I have some ideas about how to make this practical but I never have the time to dig deep enough into the innards of JGit to see if it's workable, let alone workable in a way that could continue to co-exist in the same codebase.
Why do this? The kinds of workflows that Git permits are great, as outlined by Clay Shirky [1]; but currently, to realize the benefits one must store one's content as plain text files.
I have a project using Bazaar which operates in what I call the "grey area" between bodies of source code and full-on version-controlled everything; the content is managed with a UML modeller which stores one model atom per file (selected for this property, among other things). With a few little hacks, like a "merge driver" that takes documentation stored as single-line attributes, applies HTML Tidy, and turns it into a CDATA section (and then packs it back up afterwards), we generally get away with XML not being a perfect format for merging with text utilities. The content count is approaching the limits of what is practical (on Windows file systems, anyway); we have a number of these domains that