I am querying a PostgreSQL table with a `SMALLINT` (16-bit) column. All the values are positive but, coincidentally, in the range 0-255. My query returns a `RecordSet`, and I am accessing individual values via `value( int col, int row )`. When the value in the column is above 127, it is as if the value is treated as a `signed char` (8-bit) at some point in this code:
```cpp
auto value( recSet.value( j, i ) );
std::string stringVal;
value.convert( stringVal );
```
That is: `stringVal` will end up as `"-54"` when the same query via `psql` yields 202. I cannot see into the `recSet` or `value` variables in this code to determine at which point the 8-bit/16-bit confusion occurs. I have not yet investigated `columnType`, `columnPrecision`, or similar to see whether they can shed any light on this issue.
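For reference, here is a minimal diagnostic sketch, assuming the library in use is POCO's Data framework (the `RecordSet`/`value`/`convert` names match its API); `inspect` is a hypothetical helper I wrote for illustration, and `j`/`i` are the same column/row indices as above. It prints whether the driver reports the column as 8-bit or 16-bit, and the C++ type actually stored in the returned `Var`, which should show whether the value is already 8-bit by the time it leaves the `RecordSet`:

```cpp
#include <iostream>
#include <cstddef>
#include <Poco/Data/RecordSet.h>
#include <Poco/Data/MetaColumn.h>
#include <Poco/Dynamic/Var.h>

// Hypothetical helper: dump what the driver reports for column j and what
// the Var for row i actually holds.
void inspect( Poco::Data::RecordSet& recSet, std::size_t j, std::size_t i )
{
    using Poco::Data::MetaColumn;

    // If the connector reports FDT_INT8 here, the 8-bit mapping happens
    // before any value is fetched.
    MetaColumn::ColumnDataType t = recSet.columnType( j );
    std::cout << "column is FDT_INT8:  " << ( t == MetaColumn::FDT_INT8 )  << '\n'
              << "column is FDT_INT16: " << ( t == MetaColumn::FDT_INT16 ) << '\n';

    Poco::Dynamic::Var value( recSet.value( j, i ) );

    // The std::type_info of whatever the Var holds, e.g. "signed char"
    // versus "short" (the exact name() output is compiler-specific).
    std::cout << "Var holds: " << value.type().name() << '\n';
}
```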