Description
I noticed earlier today that insertions of several rows with the Postgres backend are done individually. I browsed through the source and noticed that it's indeed called out explicitly in there:
-- | Create multiple records in the database.
--
-- If you don't need the inserted @Key@s, use 'insertMany_'
--
-- SQL backends currently use the slow default implementation of
-- @mapM insert@
insertMany :: (MonadIO m, backend ~ PersistEntityBackend val, PersistEntity val)
           => [val] -> ReaderT backend m [Key val]
insertMany = mapM insert
It sounds like we might want to flip things around: have the SqlBackend function connInsertSql be one that supports multiple rows at once, and implement single-row insert in terms of it, passing a [[PersistValue]] to it internally.
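To make the idea concrete, here is a rough sketch of the SQL generation a multi-row connInsertSql would need. This is not the current persistent API: multiRowInsertSql, and its table/column String arguments standing in for what would come out of the EntityDef, are hypothetical names for illustration only.

```haskell
import Data.List (intercalate)

-- Hypothetical sketch: build one INSERT statement covering nRows rows,
-- so the whole batch goes to the database in a single round trip.
-- 'table' and 'cols' stand in for values derived from the EntityDef.
multiRowInsertSql :: String -> [String] -> Int -> String
multiRowInsertSql table cols nRows = concat
  [ "INSERT INTO ", table
  , " (", intercalate "," cols, ") VALUES "
  , intercalate "," (replicate nRows rowPlaceholder)
  ]
  where
    -- One "(?,?,...)" group per row, one placeholder per column.
    rowPlaceholder = "(" ++ intercalate "," (replicate (length cols) "?") ++ ")"

main :: IO ()
main = putStrLn (multiRowInsertSql "person" ["name","age"] 3)
-- prints: INSERT INTO person (name,age) VALUES (?,?),(?,?),(?,?)
```

The single-row case is then just the nRows = 1 instance, which is why insert could be defined in terms of the multi-row function rather than the other way around.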
We would have to rewrite insertSql' to build multiple rows in its logic for Postgres, MySQL and SQLite (which supports multi-row insertion starting with version 3.7.11). Basically this function would have to be upgraded:
-- | SQL code to be executed when inserting an entity.
insertSql' :: EntityDef -> [PersistValue] -> InsertSqlResult
insertSql' ent vals =
  let sql = pack $ concat
        [ "INSERT INTO "
        , escapeDBName $ entityDB ent
        , "("
        , intercalate "," $ map (escapeDBName . fieldDB) $ entityFields ent
        , ") VALUES("
        , intercalate "," (map (const "?") $ entityFields ent)
        , ")"
        ]
  in case entityPrimary ent of
       Just _  -> ISRManyKeys sql vals
       Nothing -> ISRInsertGet sql "SELECT LAST_INSERT_ID()"
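One practical wrinkle with a multi-row version of this: backends cap the number of bind parameters a single statement may carry, so a batched insertMany would presumably need to chunk its input so that rows-per-statement times columns stays under the limit. A sketch of that chunking, with a hypothetical maxParams argument (the actual limit would be backend-specific):

```haskell
import Data.List (unfoldr)

-- Hypothetical sketch: split a batch of rows into chunks so that no
-- single INSERT statement exceeds a per-backend bind-parameter limit.
-- maxParams and the caller supplying it are assumptions, not existing API.
chunkRows :: Int -> Int -> [[a]] -> [[[a]]]
chunkRows maxParams nCols = unfoldr step
  where
    -- How many whole rows fit in one statement (at least one).
    rowsPerStmt = max 1 (maxParams `div` max 1 nCols)
    step [] = Nothing
    step xs = Just (splitAt rowsPerStmt xs)

main :: IO ()
main = print (map length (chunkRows 4 2 [[1,2],[3,4],[5,6]]))
-- prints: [2,1]  (two rows fit per statement, the third spills over)
```

Each chunk would then be rendered as one multi-row statement and executed, with the resulting keys concatenated back together.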
Thoughts on this? Do you want me to take a stab at it? I'm not sure whether this would break some form of backwards compatibility; it shouldn't, assuming we only change the interface of SqlBackend.