How can one see the structure of a table in SQLite?

Invoke the sqlite3 utility on the database file, and use its special dot commands:
  • .tables will list tables
  • .schema [tablename] will show the CREATE statement(s) for a table or tables

There are many other useful built-in dot commands — see the documentation at http://www.sqlite.org/sqlite.html, section Special commands to sqlite3.

Example:

entropy:~/Library/Mail> sqlite3 Envelope\ Index
SQLite version 3.6.12
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> .tables
addresses              ews_folders            subjects
alarms                 feeds                  threads
associations           mailboxes              todo_notes
attachments            messages               todos
calendars              properties             todos_deleted_log
events                 recipients             todos_server_snapshot
sqlite> .schema alarms
CREATE TABLE alarms (ROWID INTEGER PRIMARY KEY AUTOINCREMENT, alarm_id,
                     todo INTEGER, flags INTEGER, offset_days INTEGER,
                     reminder_date INTEGER, time INTEGER, argument,
                     unrecognized_data BLOB);
CREATE INDEX alarm_id_index ON alarms(alarm_id);
CREATE INDEX alarm_todo_index ON alarms(todo);

Note also that SQLite saves the schema and all information about tables in the database itself, in a magic table named sqlite_master, and it’s also possible to execute normal SQL queries against that table. For example, the documentation link above shows how to derive the behavior of the .schema and .tables commands using normal SQL commands (see section: Querying the database schema).
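A minimal sketch of that approach from Python’s built-in sqlite3 module, using a throwaway in-memory database (the table and index names here are made up for illustration):

```python
import sqlite3

# A throwaway in-memory database with one table and one index
# (the names are made up for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alarms (id INTEGER PRIMARY KEY, todo INTEGER)")
conn.execute("CREATE INDEX alarm_todo_index ON alarms(todo)")

# sqlite_master holds one row per table/index/view/trigger;
# its `sql` column stores the original CREATE statement.
rows = list(conn.execute(
    "SELECT type, name, sql FROM sqlite_master ORDER BY name"))
for type_, name, sql in rows:
    print(type_, name)
```

This is essentially what .tables and .schema do under the hood.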

Need help iterating through Python dict keys/values and INSERTing into SQL DB

I made a call to request data from Weight Gurus, which of course returns a Python dictionary of keys and values. I need to take the data retrieved from this call and INSERT each set of values as an individual row.

So far I have managed to get the data from Weight Gurus and also establish a connection to my DB within python, but no luck with iterating through the dict to INSERT each value pair into an individual row.


# Login and get the auth token
data = {"email": "", "password": ""}
login_response = requests.post("https://api.weightgurus.com/v3/account/login", data=data)
login_json = login_response.json()

# grab all your data
data_response = requests.get(
    "https://api.weightgurus.com/v3/operation/",
    headers={
        "Authorization": f'Bearer {login_json["accessToken"]}',
        "Accept": "application/json, text/plain, */*",
    },
)

scale_data_json = data_response.json()
for entry in scale_data_json["operations"]:
    print(entry)


import pyodbc    
server = ''
database = ''
username = ''
password = ''
driver='{ODBC Driver 13 for SQL Server}'

cnxn = pyodbc.connect('DRIVER='+driver+';SERVER='+server+';PORT=1433;DATABASE='+database+';UID='+username+';PWD='+ password)
cursor = cnxn.cursor()

The dictionary in question comprises nine keys. Each key is a column in my table, BodyComposition, and each dictionary of values should become an individual row. My table also has an auto-increment ID field for the primary key, if that makes a difference.

Solution:

Consider unpacking your collection of dictionaries into key/value tuples and then parameterizing the values tuple in the loop. Assuming the data structure below (a list of dictionaries):

scale_data_json["operations"] = [{'BMI': 0, 'BodyFat': 10, 
                                  'Entrytimestamp': '2018-01-21T19:37:47.821Z', 
                                  'MuscleMass': 50, 'OperationType': 'create',
                                  'ServerTimestamp':'2018-01-21T19:37:47.821Z', 
                                  'Source':'bluetooth scale', 
                                  'Water':37, 'Weight':21},
                                 {'BMI': 0, 'BodyFat': 10, 
                                  'Entrytimestamp': '2018-01-21T19:37:47.821Z', 
                                  'MuscleMass': 50, 'OperationType': 'create',
                                  'ServerTimestamp':'2018-01-21T19:37:47.821Z', 
                                  'Source':'bluetooth scale', 
                                  'Water':37, 'Weight':21},
                                ...]

Loop through each dictionary, unpack the values with zip and then bind them in cursor.execute:

# PREPARED STATEMENT
sql = """INSERT INTO BodyComposition (BMI, BodyFat, Entrytimestamp, 
                                      MuscleMass, OperationType, ServerTimestamp, 
                                      Source, Water, Weight) 
         VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
      """

# LOOP, UNPACK, BIND PARAMS
for entry in scale_data_json["operations"]:
    # note: this relies on each dict's key order matching the
    # column order in the INSERT statement
    keys, values = zip(*entry.items())
    cursor.execute(sql, values)

cnxn.commit()  # commit once after the loop
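A variant of the same pattern that avoids depending on dict key order by pulling values out by column name, and batches the inserts with executemany. Since pyodbc and SQL Server aren’t available here, this sketch uses the stdlib sqlite3 module, which shares the same qmark (?) paramstyle; the column and table names mirror the ones above:

```python
import sqlite3

# Fix the column order once; each row's values are then looked up by
# name, so the INSERT no longer depends on dict key ordering.
columns = ("BMI", "BodyFat", "Entrytimestamp", "MuscleMass",
           "OperationType", "ServerTimestamp", "Source", "Water", "Weight")

operations = [
    {"BMI": 0, "BodyFat": 10, "Entrytimestamp": "2018-01-21T19:37:47.821Z",
     "MuscleMass": 50, "OperationType": "create",
     "ServerTimestamp": "2018-01-21T19:37:47.821Z",
     "Source": "bluetooth scale", "Water": 37, "Weight": 21},
]

cnxn = sqlite3.connect(":memory:")  # stand-in for the pyodbc connection
cnxn.execute("CREATE TABLE BodyComposition ({})".format(", ".join(columns)))

sql = "INSERT INTO BodyComposition ({}) VALUES ({})".format(
    ", ".join(columns), ", ".join("?" for _ in columns))

# executemany sends every row in one call; commit once at the end
cnxn.executemany(sql, [tuple(entry[c] for c in columns)
                       for entry in operations])
cnxn.commit()
```

With pyodbc the executemany call is the same; only the connection setup differs.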

Reuse same query across multiple group-bys?

I have a DB query that matches the desired rows. Let’s say (for simplicity):

select * from stats where id in (1, 2);

Now I want to extract several frequency statistics (count of distinct values) for multiple columns, across these matching rows:

-- `stats.status` is one such column
select status, count(*) from stats where id in (1, 2) group by 1 order by 2 desc;

-- `stats.category` is another column
select category, count(*) from stats where id in (1, 2) group by 1 order by 2 desc;

-- etc.

Is there a way to re-use the same underlying query in SqlAlchemy? Raw SQL works too.

Or even better, return all the histograms at once, in a single command?

I’m mostly interested in performance, because I don’t want Postgres to run the same row-matching over and over, once for each column. The only change is which column is used for the histogram grouping; otherwise it’s the same set of rows.

Solution:

User Abelisto’s comment and the other answer both have the correct SQL required to generate the histogram for multiple fields in a single query.

The only edit I would suggest is to add an ORDER BY clause, since it seems from the OP’s attempts that the more frequent labels are desired at the top of the result. You might find that sorting the results in Python rather than in the database is simpler; in that case, disregard the complexity the ORDER BY clause brings.

Thus, the modified query would be:

SELECT category, status, count(*)
FROM stats
WHERE id IN (1, 2)
GROUP BY GROUPING SETS ( 
  (category), (status) 
)
ORDER BY 
  category IS NULL, status IS NULL, 3 DESC

It is also possible to express the same query using sqlalchemy.

from sqlalchemy import (Column, Integer, Text, select, func, tuple_,
                        literal_column)
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()
class Stats(Base):
    __tablename__ = 'stats'
    id = Column(Integer, primary_key=True)
    category = Column(Text)
    status = Column(Text)

stmt = select(
    [Stats.category, Stats.status, func.count(1)]
).where(
    Stats.id.in_([1, 2])
).group_by(
    func.grouping_sets(tuple_(Stats.category), 
                       tuple_(Stats.status))
).order_by(
    Stats.category.is_(None),
    Stats.status.is_(None),
    literal_column('3').desc()
)

Investigating the output, we see that it generates the desired query (extra newlines added to the output for legibility):

print(stmt.compile(compile_kwargs={'literal_binds': True}))
# outputs:
SELECT stats.category, stats.status, count(1) AS count_1 
FROM stats 
WHERE stats.id IN (1, 2) 
GROUP BY GROUPING SETS((stats.category), (stats.status)) 
ORDER BY stats.category IS NULL, stats.status IS NULL, 3 DESC

MySQL – How to combine data from two tables, using the value of one field as the filter to count values in another?

I’ve been racking my brain on this one, and the solution is probably so simple but I just can’t figure it out and have searched everywhere and can’t find a solution to this specific problem.

I’ve got two tables – one contains an IP connections list, for example:

Connections_Table

src            dst
192.168.1.1    1.2.3.4
192.168.1.1    2.2.2.2
192.168.1.1    3.3.3.3
192.168.1.1    4.4.4.4

The other table contains a list of IP addresses, for example:

Bad_Ip_Addresses_Table

ip
7.8.9.4
3.2.1.4
77.8.99.4
2.2.2.2
18.7.9.8

Here’s where I can’t find how to build this query… I’m trying to build a table that shows src and dst from the first table, and whether the second table contains the dst from the first table. In other words:

Results_Table

src            dst        match
192.168.1.1    1.2.3.4    0
192.168.1.1    2.2.2.2    1
192.168.1.1    3.3.3.3    0
192.168.1.1    4.4.4.4    0

Here’s probably the biggest catch: I’ve seen posts on SO where the solution involves creating a table and triggers. I can’t do that; this would be an AWS Kinesis Analytics SQL statement:

Connections_Table is ingested live, and Bad_Ip_Addresses_Table is a CSV loaded from AWS S3. On each row ingested I need to perform a SQL statement against the CSV to find if the dst ip is in the CSV.

Any suggestions?

Solution:

  • In MySQL, you can Left Join from Connections_Table to Bad_Ip_Addresses_Table, such that all dst values from Connections_Table are considered (whether a matching row exists or not).
  • You can then GROUP BY src and dst and use the COUNT() function to count the matches. COUNT() ignores NULLs, so non-matching rows (which have NULL in the joined columns after the LEFT JOIN) return 0.

In MySQL, try the following query:

SELECT
  ct.src,
  ct.dst, 
  COUNT(biat.ip) AS `match`  -- MATCH is a reserved word in MySQL, so backquote the alias
FROM 
  Connections_Table AS ct 
LEFT JOIN Bad_Ip_Addresses_Table AS biat ON biat.ip = ct.dst 
GROUP BY ct.src, ct.dst 
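To illustrate that the COUNT-over-LEFT-JOIN pattern produces the desired 0/1 match column, here is the same query run against Python’s stdlib sqlite3 (MySQL isn’t available here; the SQL is identical apart from the quoting of the alias):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Connections_Table (src TEXT, dst TEXT);
CREATE TABLE Bad_Ip_Addresses_Table (ip TEXT);
INSERT INTO Connections_Table VALUES
  ('192.168.1.1', '1.2.3.4'),
  ('192.168.1.1', '2.2.2.2'),
  ('192.168.1.1', '3.3.3.3'),
  ('192.168.1.1', '4.4.4.4');
INSERT INTO Bad_Ip_Addresses_Table VALUES
  ('7.8.9.4'), ('3.2.1.4'), ('77.8.99.4'), ('2.2.2.2'), ('18.7.9.8');
""")

# COUNT(biat.ip) counts only non-NULL matches, so unmatched rows yield 0
rows = conn.execute("""
  SELECT ct.src, ct.dst, COUNT(biat.ip) AS match_flag
  FROM Connections_Table AS ct
  LEFT JOIN Bad_Ip_Addresses_Table AS biat ON biat.ip = ct.dst
  GROUP BY ct.src, ct.dst
  ORDER BY ct.dst
""").fetchall()
for row in rows:
    print(row)
```

Only the 2.2.2.2 row gets a 1 in the match column, exactly as in the desired Results_Table.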

WordPress configure dynamic url

Edit wp-config.php and add the following lines:
define('WP_HOME', '/');
define('WP_SITEURL', '/');

Change the values in the database:

UPDATE wp_options SET option_value = '/' WHERE option_name = 'siteurl';

UPDATE wp_options SET option_value = '/' WHERE option_name = 'home';

Now your siteurl IP/DNS will not be hardcoded.

Mysql add user with all privileges

Execute the following commands:

CREATE USER 'wordpressuser'@'%' IDENTIFIED BY 'WordPress123!';
GRANT ALL PRIVILEGES ON *.* TO ‘wordpressuser’@’%’ WITH GRANT OPTION;
FLUSH PRIVILEGES;

Android Room Persistence Library – How to find entities with ids contained in list of ids?

I am trying to do the following query in my DAO.

   @Query("SELECT * FROM objects WHERE obj_id IN :ids")
   List<Object> queryObjects(List<String> ids);

It gives me this compile-time error:

Error: no viable alternative at input 'SELECT * FROM objects WHERE obj_id IN :ids'

Neither List<String> ids nor String... ids nor String[] ids works. However, since I don’t know at compile time how many ids I will have, I need a list/array and not varargs.

How can I make this SQL query work?

Solution:

You need parentheses:

@Query("SELECT * FROM objects WHERE obj_id IN (:ids)")
List<Object> queryObjects(List<String> ids);

(and FWIW, I filed an issue to try to get a better error message here)

Pandas Equivalent of SQL CROSS JOIN (Cartesian Product)

Let’s say I had two tables:

Table1:

   col1  col2
      0     1
      2     3

Table2:

   col3  col4
      5     6
      7     8

In SQL, if I made the following statement:

Select *
From Table1, Table2;

I would expect to get back a table with all combinations from both tables:

col1 col2 col3 col4
   0    1    5    6
   0    1    7    8
   2    3    5    6
   2    3    7    8

Is there a way to do the same thing with two dataframes in pandas?

Solution:

A standard idiom is to merge on a dummy column.

df1.assign(foo=1).merge(df2.assign(foo=1)).drop(columns='foo')

   col1  col2  col3  col4
0     0     1     5     6
1     0     1     7     8
2     2     3     5     6
3     2     3     7     8
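As a side note, pandas 1.2 and later have a built-in cross join, so the dummy column is no longer needed there (a small sketch, assuming a recent pandas):

```python
import pandas as pd

df1 = pd.DataFrame({'col1': [0, 2], 'col2': [1, 3]})
df2 = pd.DataFrame({'col3': [5, 7], 'col4': [6, 8]})

# pandas >= 1.2: Cartesian product without a helper column
result = df1.merge(df2, how='cross')
print(result)
```

The output matches the SQL cross-join table above.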

How to display all result from JLabel to JFrame (JDBC SQL query)

I have a problem with displaying all the labels created in the loop. With the provided code, only the last query result is displayed; it seems to me that the rest is covered up. Any ideas?

public static void ViewTable(Connection con) throws SQLException {
    java.sql.Statement stmt = null;
    JFrame submenu = new JFrame("View");
    submenu.setSize(500, 500);
    submenu.setVisible(true);
    JLabel text = new JLabel("Lista przystankow: ");
    submenu.add(text, BorderLayout.NORTH);
    String query = "SELECT * FROM Stacja";
    try {
        stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery(query);
        while (rs.next()) {
            String idStacja = rs.getString("idStacja");
            String nazwa = rs.getString("nazwa");
            JLabel input = new JLabel(" " + idStacja + " " + nazwa);
            submenu.add(input, BorderLayout.CENTER);
            System.out.println(" " + idStacja + " " + nazwa);
        }
    } catch (SQLException e) {
        e.printStackTrace();
    }

}

Solution:

…receives only the last query result

Because each pass through the while loop adds another JLabel at BorderLayout.CENTER, and that layout position holds only one component, each new label replaces the previous one. Instead, accumulate all the results into a single variable outside the loop and display it at the end; a StringBuilder works well, for example:

StringBuilder message = new StringBuilder();
while (rs.next()) {
    message.append(" ")
        .append(rs.getString("idStacja"))
        .append(" ")
        .append(rs.getString("nazwa"))
        .append(" ");
}
JLabel input = new JLabel(message.toString());
submenu.add(input, BorderLayout.CENTER);

Denormalized table and duplicated data

My Java project uses pure JDBC for interacting with Oracle DB (v. 12). The transaction isolation level is Read Committed.

I have a highly denormalized table, which stores an entity as a set of rows. I cannot change this; unfortunately, the table has to remain this way, for reasons beyond my control.

+------+------+---------+
| date | hash | ....... |
+------+------+---------+
| date | xyz  | ....... |
| date | xyz  | ....... |
| date | xyz  | ....... |

I have two columns identifying an entity – a date and a hash. Since each entity is stored as several rows, these columns are not really unique, or a primary key, but only indexed columns. Still, I want to enforce a kind of “uniqueness”, meaning that only one entity exists at a time, no matter how many rows it is made of.

Such entity can be updated couple of times a day, resulting in different values, but also different number of rows.

To make all this happen, every time I update an entity, I do two or more queries within single transaction:

delete from "table" where "date" = ? and "hash" = ?
insert into "table" values (?, ?, .....)
insert into "table" ....
... -- as many inserts as needed to store whole entity

This works fine for a single instance of application. Unfortunately, I have 2 instances working simultaneously, trying to store exactly the same data at almost the same time (they are simply primary-backup instances, but backup is also persisting – this I also have no influence on).

If this was normalized table, the solution would be to use MERGE statement, but it won’t work here.

My current solution:

What I tried to do so far is to add one more column, an ID of the instance persisting, then executing INSERT statements using SELECT as a data source and putting condition to SELECTs that there must be no data for this date/hash and app ID, otherwise SELECT provides no data to insert.

I thought it would work, but apparently it does not; I still see duplicates. I think it’s because the two transactions do their deletes first, cannot yet see the data about to be committed by the other transaction, and thus both perform their inserts. Then the commits execute, and boom: both transactions have inserted their data.

Other approaches that I considered:

I guess optimistic locking won’t work either, because at the final version check both transactions can still consider the version unchanged, while it is actually being changed by both of them at the same time and is about to be committed that way.

I know I could switch the transaction isolation to SERIALIZABLE, but that isn’t perfect either. First, the Oracle driver will not actually serialize the queries; it takes an optimistic approach and fails with an error on concurrent modification, which I don’t like (it’s a “programming by exception” paradigm, an anti-pattern). Second, there is of course the performance cost.

Are there any other solutions to such a problem?

Solution:

Your requirements, as I read them, are:

  • Database structure cannot change
  • Both applications must update exactly the same data at the same time
  • Optimistic locking is out because it might cause errors or a performance degradation
  • Pessimistic locking is out for the same reasons as optimistic locking

It seems like the most important thing is not what data you’re changing but what data you’re reading. You need a method of determining which data the users of your system (I can’t tell whether these applications are just maintaining the data or also using it) should be served.

I assume your current queries for serving data are something like:

select * from table where date = :1 and hash = :2

If you change this to the following, you’ll always select the most recent data; if there are ties in time, you’ll pick one of the applications (essentially arbitrary; change the ordering to whatever you want):

select *
  from ( select t.*
              , rank() over (partition by hash 
                                 order by date desc, app_id desc) as rnk
           from table t
                )
 where rnk = 1

You can maybe put this in a view?
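To make the “read only rank 1” idea concrete: since Oracle isn’t to hand, here is the same shape of query run against Python’s stdlib sqlite3 (SQLite ≥ 3.25 supports the same RANK() window function; the column names dt, hash, app_id, payload are illustrative stand-ins):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the Oracle connection
conn.executescript("""
CREATE TABLE t (dt TEXT, hash TEXT, app_id INTEGER, payload TEXT);
-- both instances persisted the same entity; later app 1 wrote a newer version
INSERT INTO t VALUES ('2020-01-01', 'xyz', 1, 'from app 1');
INSERT INTO t VALUES ('2020-01-01', 'xyz', 2, 'from app 2');
INSERT INTO t VALUES ('2020-01-02', 'xyz', 1, 'newer, app 1 only');
""")

# Rank rows per hash, newest date first, highest app_id breaking ties;
# reading only rnk = 1 hides the duplicate written by the other instance.
rows = conn.execute("""
  SELECT dt, hash, app_id, payload
  FROM (SELECT t.*,
               RANK() OVER (PARTITION BY hash
                            ORDER BY dt DESC, app_id DESC) AS rnk
        FROM t)
  WHERE rnk = 1
""").fetchall()
print(rows)
```

Because RANK() gives the same rank to ties, an entity made of several rows with the same date/hash/app_id still comes back whole.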

This lets you run what are essentially two separate tables within a single table. You can use MERGE etc., and can change your DELETE/INSERT statements to:

merge into table o
using (select :1, :2 ... ) n
   on ( o.date = n.date
       and o.hash = n.hash
       and o.app_id = n.app_id
           )
 when matched then
      update
         set ...
 when not matched then
      insert (...

commit;

delete from table
 where date < :1 
   and hash = :2

commit;

where you’re using the same date and hash from your MERGE statement. If the DELETE fails you don’t really mind – you’re protected from selecting the wrong data because you’ve changed your SELECT queries.


Personally, I’d acknowledge that one of your requirements has to change.

If there’s any plans for adding additional applications I’d accept the performance degradation and perform updates on this table serially using a queuing mechanism.

If there’s no plans for adding additional applications take the simple approach now and start using a locking strategy (not pretty) and just handle some known errors.