Discussion:
autogenerate and schemas
YKdvd
2013-06-29 13:28:34 UTC
Permalink
I have a MySQL system where there is a "global" database and a number of
identical project databases which also reference global. I connect a
SQLAlchemy engine to, say, "project1", and use a __table_args__ of
{"schema": "global"} for tables that live in global. This works fine, but
as the number of projects increases it is becoming a pain to update them
all with schema changes, and I'm trying to incorporate Alembic to help out.
My first step has been to run an autogenerate to see where my SQLAlchemy
definitions don't yet fully match the original structures I inherited.
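For concreteness, a minimal sketch of the layout described above (the model, table, and column names here are made up for illustration):

```python
# Sketch: the engine connects to a project database such as "project1",
# while this model's table lives in the shared "global" schema.
# Model/table/column names are illustrative, not from the real system.
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class GlobalSetting(Base):
    __tablename__ = "settings"
    # "schema" is the key SQLAlchemy recognizes in __table_args__;
    # the table is then emitted as global.settings in SQL.
    __table_args__ = {"schema": "global"}
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
```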

Unfortunately, it looks like I can't do quite what I want with stock
Alembic. My system's declarative base includes both global and project
definitions; if I set the target_metadata in Alembic's env.py to it,
Alembic only looks at the tables for "project" on the database and thinks
all the global ones have to be created. I found the "include_schemas"
configuration option, but it tells Alembic to use inspection to find all
available schemas, which would pull in _all_ project databases and any
other visible databases.

I was thinking of making a patch to autogenerate.py, in
_produce_net_changes(). Right now, if the configuration option
"include_schemas" is set to True, it does the inspection to get all
schemas. I was thinking of setting something like include_schemas =
set([None, 'global']) in env.py, and having the code bypass the inspection
and use this value when it is a set instead of a boolean. This seemed to
work in a very quick and dirty test - is there anything I should watch out
for, or anything that makes this a bad idea in practice? In theory I assume
I could somehow extract the set of schemas from the metadata.
--
You received this message because you are subscribed to the Google Groups "sqlalchemy-alembic" group.
To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy-alembic+***@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
Michael Bayer
2013-06-29 14:49:44 UTC
Permalink
> I have a MySQL system where there is a "global" database, and a number of identical project databases which also reference global. I connect a SQLAlchemy engine to, say, "project1", and use a __table_args__ of {"schema": "global"} for tables that are in global. This works fine, but as the number of projects increases it is becoming a pain to update them with schema changes, and I'm trying to incorporate Alembic to help out. My first step has been to run an autogenerate to see where my SQLA definitions don't yet fully match the original structures I inherited.
> Unfortunately, it looks like I can't do quite what I want with stock alembic. My system's declarative base includes both global and project definitions; if I set the target_metadata in alembic's env.py to it, alembic only looks at the tables for "project" on the database and thinks all the global ones have to be created.
If the issue is that autogenerate would need to scan more than one database to produce candidate migrations, you can modify your env.py to "run" multiple times against different databases. I have a setup where I handle migrations for two similar but still different databases in the same migration, and in env.py I flip through the two databases (and metadatas). It looks like this:

for name, session in sessions:
    connection = session.connection()
    context.configure(
        connection,
        target_metadata=_get_metadata(name),
        include_symbol=lambda tablename, schema=None: tablename not in exclude_tablenames,
        upgrade_token="%s_upgrades" % name,
        downgrade_token="%s_downgrades" % name,
        sqlalchemy_module_prefix=None,
        render_item=render_item,
    )
    context.run_migrations(name=name)

So for each database (I actually pull the DB from each of my ScopedSession objects) I set up a different metadata and other parameters. As autogenerate runs, it gathers tokens into the context for rendering the migration script, and the upgrade_token/downgrade_token directives split up the various instructions so that in my script.py.mako I can render them separately:

def upgrade_somedb():
    ${somedb_upgrades if somedb_upgrades else "pass"}

def downgrade_somedb():
    ${somedb_downgrades if somedb_downgrades else "pass"}

def upgrade_someotherdb():
    ${someotherdb_upgrades if someotherdb_upgrades else "pass"}

def downgrade_someotherdb():
    ${someotherdb_downgrades if someotherdb_downgrades else "pass"}
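The per-database token names are plain string formatting, so the env.py loop and the template stay in sync by construction; a trivial sketch of the mapping (the database names are illustrative):

```python
# Sketch: how the upgrade_token/downgrade_token values set in the env.py
# loop map to the ${<name>_upgrades} / ${<name>_downgrades} variables
# that script.py.mako renders.
def token_names(db_name):
    # Mirrors upgrade_token="%s_upgrades" % name and
    # downgrade_token="%s_downgrades" % name from the loop above.
    return ("%s_upgrades" % db_name, "%s_downgrades" % db_name)

# token_names("somedb") -> ("somedb_upgrades", "somedb_downgrades")
```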
> I found the "include_schemas" configuration option, but this tells alembic to use inspection to find all available schemas, which would pull _all_ project databases and any other visible databases.
You can use include_symbol to limit this (it's described in the docs for environment.configure()):

def include_symbol(tablename, schema):
    return schema in (None, "schema1", "schema2")

context.configure(
    # ...
    include_schemas=True,
    include_symbol=include_symbol
)
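Applied to the original question, the filter would admit only the default (None) schema plus "global"; a minimal sketch:

```python
# Sketch for the original question: keep tables in the default (None)
# schema and in "global"; any other schema that inspection finds
# (e.g. the sibling project databases) is excluded from autogenerate.
ALLOWED_SCHEMAS = {None, "global"}

def include_symbol(tablename, schema):
    # Alembic calls this for each table it considers; returning False
    # drops the table from the autogenerate comparison.
    return schema in ALLOWED_SCHEMAS
```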