chunk iterator support for results #4959
Closed
Labels: alchemy 2 (goes along with the 2.0 milestone to aid in searching), engine (engines, connections, transactions, isolation levels, execution options), feature, result fetching API improvements
It would be helpful to be able to iterate 'automatically' over fetchmany batches, similar to pd.read_sql(chunksize=...) or as suggested here:
http://code.activestate.com/recipes/137270-use-generators-for-fetching-large-db-record-sets/
So, for example, instead of writing an explicit fetchmany loop as in that recipe, you could iterate over the result in chunks directly.
I'm using arraysize to be consistent with the PEP 249 DBAPI cursor.arraysize attribute, but perhaps chunksize might be better.
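The original code examples from the issue are not preserved here, but the pattern being requested can be sketched against the plain DBAPI. This is an illustrative generator in the style of the linked recipe, not SQLAlchemy's actual API; the `chunks` helper name and the `arraysize` default are assumptions:

```python
import sqlite3

def chunks(cursor, arraysize=1000):
    """Yield successive fetchmany() batches until the result is exhausted.

    This is the manual loop the issue wants the library to absorb into
    a built-in iterator on the result object.
    """
    while True:
        rows = cursor.fetchmany(arraysize)
        if not rows:
            break
        yield rows

# Demo against an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])

cur = conn.execute("SELECT x FROM t ORDER BY x")
batches = list(chunks(cur, arraysize=4))
# 10 rows fetched 4 at a time -> batch sizes 4, 4, 2
print([len(b) for b in batches])
```

With such a helper built in, the caller would write `for batch in chunks(cursor, arraysize): ...` instead of repeating the `while`/`fetchmany`/`break` boilerplate at every call site.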