chunk iterator support for results #4959

@sumau

Description

It would be helpful to be able to iterate 'automatically' through fetchmany() results, similar to pd.read_sql(chunksize) or as suggested here:
http://code.activestate.com/recipes/137270-use-generators-for-fetching-large-db-record-sets/

So, for example, instead of writing:

    with engine.connect() as conn:
        result = conn.execution_options(stream_results=True).execute(
            Customer.__table__.select().limit(n)
        )
        while True:
            chunk = result.fetchmany(10000)
            if not chunk:
                break
            for row in chunk:
                row["id"], row["name"], row["description"]

you could write something like:

    with engine.connect() as conn:
        result = conn.execution_options(stream_results=True, arraysize=10000).execute(
            Customer.__table__.select().limit(n)
        )
        for chunk in result:
            for row in chunk:
                row["id"], row["name"], row["description"]

I'm using arraysize to be in accordance with the PEP 249 DB-API cursor.arraysize attribute, but perhaps chunksize might be a better name.
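For reference, the requested behavior can be emulated today at the DB-API level with a small generator around fetchmany(), in the spirit of the ActiveState recipe linked above. This is a minimal sketch using the stdlib sqlite3 driver for illustration; the helper name `iter_chunks` is hypothetical and not part of any SQLAlchemy or DB-API interface.

```python
import sqlite3


def iter_chunks(cursor, arraysize=10000):
    """Yield lists of rows from a DB-API cursor, each at most `arraysize` long."""
    while True:
        chunk = cursor.fetchmany(arraysize)
        if not chunk:
            break
        yield chunk


# Build a small in-memory table to demonstrate the chunked iteration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER, name TEXT)")
conn.executemany(
    "INSERT INTO customer VALUES (?, ?)",
    [(i, f"name-{i}") for i in range(25)],
)
cur = conn.execute("SELECT id, name FROM customer ORDER BY id")

chunks = list(iter_chunks(cur, arraysize=10))
print([len(c) for c in chunks])  # → [10, 10, 5]
```

The proposal is essentially to fold this loop into the result object itself, so that the chunk size is configured once (via an execution option) rather than repeated at every fetchmany() call site.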

    Labels

    alchemy 2 (goes along with the 2.0 milestone to aid in searching)
    engine (engines, connections, transactions, isolation levels, execution options)
    feature
    result fetching (API improvements)
