Refactoring Databases Dynamically [message #997778]
Tue, 08 January 2013 18:18
Scott Hull
Registered: July 2012
Before I begin, I wanted to thank Mike and everyone else on the forum for the helpful suggestions for fixing my issues with Gemini. I have now successfully integrated Gemini into the production level of our code, and it is working beautifully.
But here comes the curve ball: how do you manage your ever-changing annotated data structures?
Many suggestions online recommend managing the data structures with native SQL queries between iterations of a product. This works in theory, but our product goes through daily builds, and a small change to the JPA-annotated data structures trashes the database directory the next day. Although our data structures are "stable" at this point, we still find bugs and need minor adjustments to meet the needs of our customers.
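For context, a common culprit for this kind of "trashing" (assuming the default EclipseLink DDL-generation setting is in play; Gemini JPA sits on top of EclipseLink) is the DDL-generation property in persistence.xml. This is a guess about your configuration, not a diagnosis:

```xml
<!-- persistence.xml: with this value EclipseLink drops and recreates the
     tables at deployment, so existing rows are lost whenever the mapped
     model (and hence the generated schema) changes between builds. -->
<property name="eclipselink.ddl-generation" value="drop-and-create-tables"/>
```

Switching the value to "create-tables" or "none" changes when (or whether) EclipseLink touches the schema at all.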
So we are looking for an automated tool or technique to convert between database versions and preserve at least most of the attributes between build cycles.
So here is the question:
What is the best way to generically migrate your database that is created with JPA?
For our product, here is the path we are considering:
Create a diagnosis and conversion tool called "Save My Assets" (SMA for short). The tool, in theory, will do the following:
If the database has changed because:
- An attribute was added.
- An attribute was deleted.
- A new table was added (a new object mapping).
- An existing table was deleted.
Handle it with this method:
1.) Using native SQL calls, grab all the column names from each table.
2.) Using native SQL calls, grab the values associated with the attributes of each table's objects and store them as key-value pairs (a Map of String to Object, for example). This works for basic attributes (String, int, etc.), but I have not yet figured out how to handle other mapped entities or ArrayLists.
3.) Remove the selected item from the table.
4.) Create the new annotated POJO, then copy attributes from the map onto each POJO and its linked entities as necessary (probably via a new Entity.fromString(String_k/v_pair_mapped_representation) operation). If an attribute is missing or has been added, always default to the value set in the default constructor.
5.) Persist the new item back into the database.
Apply these steps to all pieces of the database (and linked pieces as necessary).
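The heart of steps 2 and 4 can be sketched in plain Java without any database involved. This is only an illustration of the copy-with-defaults idea, using reflection and made-up entity classes (CustomerV1/CustomerV2 are hypothetical, not from our code); a real SMA tool would read the map from JDBC result sets instead:

```java
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

public class SaveMyAssetsSketch {

    // Old shape of a hypothetical entity: "legacyCode" will be dropped.
    static class CustomerV1 {
        String name = "";
        int legacyCode = 0;
    }

    // New shape: "legacyCode" is gone, "email" was added with a default.
    static class CustomerV2 {
        String name = "";
        String email = "unknown";
    }

    // Step 2: flatten an object's attributes into a key/value map.
    static Map<String, Object> toMap(Object entity) {
        Map<String, Object> values = new HashMap<>();
        try {
            for (Field f : entity.getClass().getDeclaredFields()) {
                f.setAccessible(true);
                values.put(f.getName(), f.get(entity));
            }
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
        return values;
    }

    // Step 4: copy matching attributes onto a fresh POJO. Attributes
    // missing from the map keep the defaults set by the constructor;
    // attributes deleted from the new shape are silently ignored.
    static <T> T fromMap(Map<String, Object> values, Class<T> type) {
        try {
            T entity = type.getDeclaredConstructor().newInstance();
            for (Field f : type.getDeclaredFields()) {
                if (values.containsKey(f.getName())) {
                    f.setAccessible(true);
                    f.set(entity, values.get(f.getName()));
                }
            }
            return entity;
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        CustomerV1 old = new CustomerV1();
        old.name = "Acme";
        old.legacyCode = 42;

        // "name" survives the migration, "legacyCode" is dropped,
        // and "email" falls back to its constructor default.
        CustomerV2 migrated = fromMap(toMap(old), CustomerV2.class);
        System.out.println(migrated.name + " / " + migrated.email);
        // prints: Acme / unknown
    }
}
```

Mapped entities and collections are the hard part, as noted above; this sketch deliberately handles only flat attributes.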
This is the best solution I have been able to come up with; nevertheless, I am looking for more generic automated tools or techniques that scale better than writing and maintaining a custom set of SQL calls for each iteration.
I would like to know your thoughts on this process. Feedback is very welcome at this point!
[Updated on: Tue, 08 January 2013 18:44]
Re: Refactoring Databases Dynamically [message #997822 is a reply to message #997778]
Tue, 08 January 2013 20:20
Michael Keith
Registered: July 2009
Glad that things are working with your application and that Gemini JPA is of some use to you.
To your schema migration question, I'm not 100% clear on the use case. You talked about iterations of the product and I thought you were referring to migrating production environments across product releases. But then you mentioned daily builds and it sounded more like the object model (and hence the schema) was changing during development. These are two fairly different problems that are typically solved in different ways.
In production it would generally be a bad idea to change a customer's database schema with a new release. Customers would typically have a cow if the database schema and the data got bulk changed in the course of an application upgrade. In many cases it would not even be possible because the database schema is outside the control of the application and the application had to map to the existing database schema to begin with. Also, there may be other applications that rely on that schema so it simply cannot be changed.
However, assuming that the database is shipped with, or private to, the application, and that the customer has no qualms with a bulk data operation as part of the upgrade, the most structured way of modifying the schema and the data within it is typically to create an upgrade script that makes exactly the schema changes necessary for the new release. I wouldn't get into the business of creating a whole new program to do this; I would just use one of the existing schema migration/comparison/diff tools to compare the two schemas. They already solve this problem. Take a look at Flyway and see if it does what you want.
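To make the Flyway suggestion concrete: a versioned Flyway migration is just a SQL file whose name follows the V<version>__<description>.sql convention, applied once and recorded by the tool. The table and column names below are hypothetical, and the exact ALTER TABLE syntax depends on your database dialect:

```sql
-- V2__add_email_drop_legacy_code.sql
-- Handles one added and one deleted attribute between releases.
ALTER TABLE CUSTOMER ADD COLUMN EMAIL VARCHAR(255) DEFAULT 'unknown';
ALTER TABLE CUSTOMER DROP COLUMN LEGACY_CODE;
```

Flyway tracks which versions have already been applied, so each upgrade runs only the scripts the target database has not yet seen.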
During development phases, migration is not typically considered a problem because there is no live data in the database. The schema can just be removed and recreated as necessary.
Not sure if any of this helps...
[Updated on: Tue, 08 January 2013 20:23]