SQLite transactions for efficiency

The Background

When I first learned about database transactions, my understanding was that the main point was to make multi-step operations atomic, so that either the whole set of steps happens or none of it does.
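
Just to make that concrete, here's the kind of thing I mean, sketched with Python's sqlite3 module (the database file, the Comments table, and the CommentCount column are all made up for the example; this is just a sketch of atomicity, not code from my app):

    import sqlite3

    conn = sqlite3.connect("app.db")       # placeholder database file
    conn.isolation_level = None            # manage transactions explicitly

    try:
        conn.execute("BEGIN")
        # Comments table and CommentCount column are hypothetical, just to
        # illustrate a multi-step operation that must be all-or-nothing.
        conn.execute("INSERT INTO Comments (PostId, UserId, Body) VALUES (?, ?, ?)",
                     ("p1", "u1", "Nice post"))
        conn.execute("UPDATE Posts SET CommentCount = CommentCount + 1 WHERE PostId = ?",
                     ("p1",))
        conn.execute("COMMIT")     # both changes become visible together
    except Exception:
        conn.execute("ROLLBACK")   # or neither of them happens
        raise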

I know from past projects with SQLite (I don't have much experience with other databases) that transactions are also very important for efficiency. For instance, if you are importing thousands of records into a database from, say, a CSV file, it is *much* faster to handle each batch of, say, 1,000 records in a transaction. My mental picture is that if you just INSERT each record without using transactions, SQLite has to somehow "finalize" or "tidy up" the table and related indexes after every single INSERT. But if you do 1,000 INSERTs inside a transaction, all of that finalization happens only once, when the transaction is committed, and that one finalization isn't much more expensive than the one that follows a single standalone INSERT.

Up to this point, my experience has been with using transactions for the two purposes just mentioned: (1) making sure a small set of statements either all happen, or none of them happens, and (2) creating lots of records all at once when importing data, and "batching" the creation of each 1,000 or 10,000 records using a transaction.
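
For reference, that import pattern looks roughly like this (again a Python sqlite3 sketch; the file name is a placeholder, and Users with two columns is just an example table, not the real import):

    import csv
    import sqlite3

    BATCH_SIZE = 1000

    conn = sqlite3.connect("app.db")            # placeholder database file
    conn.isolation_level = None                 # explicit transaction control

    with open("users.csv", newline="") as f:    # placeholder CSV file
        reader = csv.reader(f)
        conn.execute("BEGIN")
        for i, row in enumerate(reader, start=1):
            # just two example columns; a real import would list them all
            conn.execute("INSERT INTO Users (UserId, EmailAddress) VALUES (?, ?)",
                         (row[0], row[1]))
            if i % BATCH_SIZE == 0:
                conn.execute("COMMIT")          # pay the finalization cost once per batch
                conn.execute("BEGIN")
        conn.execute("COMMIT")                  # commit whatever is left in the final batch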

Now I'm working on a web application where lots of different things are happening all the time. Users might be updating their profile. Users might be liking other users' posts, or commenting on those posts. Users might be stepping through a list of posts and marking them as interesting or uninteresting, and so on. Think of any multi-user social app with lots of little pieces of data being constantly added or updated.

The Question

My question is whether and how to use transactions to handle all of these little database writes, for the purpose of efficiency. As I say, normally I would think of a transaction as containing either a set of statements that work together to accomplish something, or a large number of homogeneous statements, such as a bunch of INSERTs into the same table.

Now I'm considering just opening a transaction (or perhaps one transaction per table), and letting a bunch of random INSERTs and UPDATEs happen from any number of users, and committing each transaction after a certain number of statements have happened (e.g., 1,000) and/or a certain amount of time has passed (e.g., five minutes).
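
In sketch form (Python sqlite3, assuming a single writer connection that all of these little writes are funneled through; everything other than the two thresholds is a placeholder), what I'm considering looks something like this:

    import time
    import sqlite3

    MAX_STATEMENTS = 1000        # commit after this many writes...
    MAX_SECONDS = 5 * 60         # ...or after this much time, whichever comes first

    class BatchedWriter:
        """Keeps one transaction open on a single connection and commits it
        every MAX_STATEMENTS statements or every MAX_SECONDS seconds."""

        def __init__(self, db_path):
            self.conn = sqlite3.connect(db_path)
            self.conn.isolation_level = None     # manage transactions by hand
            self.conn.execute("BEGIN")
            self.count = 0
            self.last_commit = time.monotonic()

        def write(self, sql, params=()):
            self.conn.execute(sql, params)
            self.count += 1
            if (self.count >= MAX_STATEMENTS
                    or time.monotonic() - self.last_commit >= MAX_SECONDS):
                self.conn.execute("COMMIT")
                self.conn.execute("BEGIN")
                self.count = 0
                self.last_commit = time.monotonic()

A real version would obviously also need a final COMMIT on shutdown, and every INSERT/UPDATE from every user would have to go through this one connection.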

My understanding (please correct me if I'm wrong) is that database reads will see writes from transactions that are still in progress and not yet committed. For instance, if one user makes a comment on a certain topic, the SELECT statement for another user reading the comments on that topic will pick up the first user's comment, even if it is part of a transaction that has not yet been committed.
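
(One way to check this on a particular setup is to open two separate connections to the same database file, write on one inside an open transaction, and read on the other. A rough sketch, using the Posts columns from my schema; the code only reports what it sees, it doesn't assume the answer:)

    import sqlite3

    writer = sqlite3.connect("app.db", timeout=1)
    reader = sqlite3.connect("app.db", timeout=1)
    writer.isolation_level = None

    writer.execute("BEGIN")
    writer.execute("INSERT INTO Posts (PostId, PostingUserId, City) VALUES (?, ?, ?)",
                   ("p-test", "u-test", "Testville"))

    # Does a second connection see the uncommitted row? (It may instead get
    # "database is locked" here, depending on journal mode and timing.)
    try:
        rows = reader.execute("SELECT COUNT(*) FROM Posts WHERE PostId = 'p-test'").fetchall()
        print("reader sees:", rows)
    except sqlite3.OperationalError as e:
        print("reader got:", e)

    writer.execute("ROLLBACK")    # leave no test data behind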

Is this a common pattern for transactions with SQLite for the purpose of overall efficiency? (And for other databases?) Can you point me to somewhere that explains this as being a standard technique?

Also, my understanding is that even if the server process were to crash, when it restarted the active transactions would still be in the write-ahead log and would be committed as soon as the database is connected to again. Is that true, or is there some risk of losing those actions in case of a crash?

Update

Thanks for the information posted so far. I realize that I'll need to provide some timing information to be able to make the discussion concrete for my particular case. I'll fill in the table below incrementally.

My application has a Users table and a Posts table. Users has 43 text fields, and my sample data for Users has 53 records, with a short string (less than 32 characters) in most of these fields. Posts has 17 text fields, and my sample data for Posts has 106 records (two posts per user), with a short string in most of these fields.

    CREATE TABLE Users ( UserId TEXT PRIMARY KEY UNIQUE, EmailAddress TEXT, ... )

    CREATE TABLE Posts ( PostId TEXT UNIQUE, PostingUserId TEXT, City TEXT, ... )

https://preview.redd.it/ndejm7a8jm0e1.png?width=997&format=png&auto=webp&s=40f9d2c9e6dcf7c10e262befb4c3b23ad04363b7

So the lesson I've learned so far is: whether or not WAL mode is on, inserting a bunch of records is 10 to 20 times faster if you enclose the INSERT statements in one big transaction than if you run each INSERT outside a transaction (so each one is effectively its own transaction). I already knew from past experience that this was true in the default rollback-journal mode.

Similarly, for the typical case of my application's usage pattern, represented by the second row in the table, it goes about 25 times faster if it's all done in a single commit, again whether in WAL mode or not.

If these numbers seem fishy, please let me know what I might be doing wrong. But all I'm doing differently across runs is starting a transaction before the test and committing it at the end (or not doing that), and turning WAL mode on or off.
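
The shape of each run is roughly this (a Python sqlite3 sketch, not my actual test code; the real runs use the Users/Posts INSERTs described above):

    import time
    import sqlite3

    def run_test(db_path, use_wal, use_one_transaction, rows):
        conn = sqlite3.connect(db_path)
        conn.isolation_level = None
        if use_wal:
            conn.execute("PRAGMA journal_mode=WAL")

        start = time.perf_counter()
        if use_one_transaction:
            conn.execute("BEGIN")
        for user_id, email in rows:
            # without the enclosing transaction, each INSERT commits by itself
            conn.execute("INSERT INTO Users (UserId, EmailAddress) VALUES (?, ?)",
                         (user_id, email))
        if use_one_transaction:
            conn.execute("COMMIT")
        elapsed = time.perf_counter() - start

        conn.close()
        return elapsed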

So, I appreciate all the comments explaining to me how transactions work and what they are for, but I get such massive speedups from using transactions for this "side effect" that I'm back to asking: doesn't it seem rational to use the 1,000-statement / 5-minute rule I outlined above? (Or maybe 100 statements and 1 minute is good enough.)

I do understand that by doing this I give up the ability to use transactions for their intended purpose, but in reality my application is robust against those types of inconsistencies. Furthermore, if I am careful to only end my mega transactions on a functional transaction boundary, I believe I'd get the same consistency guarantee as I'd get using just the inner/real transactions. The only difference is that I could lose a chunk of history if the server crashes.
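
As a sketch of what "ending on a functional boundary" could look like, SQLite's SAVEPOINT mechanism would let each functional unit be its own savepoint inside the long-running outer transaction, so a failed unit can be rolled back by itself while only the periodic COMMIT pays the full cost (the two statements inside the unit are placeholders):

    # the outer "mega" transaction is already open on conn (see the earlier sketch)
    def do_functional_unit(conn, post_id, user_id):
        """A placeholder multi-step operation that should stay all-or-nothing."""
        conn.execute("SAVEPOINT unit")
        try:
            conn.execute("INSERT INTO Posts (PostId, PostingUserId) VALUES (?, ?)",
                         (post_id, user_id))
            conn.execute("UPDATE Users SET EmailAddress = EmailAddress WHERE UserId = ?",
                         (user_id,))   # stand-in for some second step
            conn.execute("RELEASE unit")      # keep this unit's changes in the big transaction
        except Exception:
            conn.execute("ROLLBACK TO unit")  # undo just this unit
            conn.execute("RELEASE unit")
            raise

With that, the periodic COMMIT from the earlier sketch would only ever land between units, never in the middle of one.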

Here are two more measurements, in the lower right, that show the performance of the proposed scheme. It's slightly slower than one huge commit, but plenty fast.

https://preview.redd.it/bishksghzm0e1.png?width=1298&format=png&auto=webp&s=30e6cb65410a9749281bd1c5693e226629c8ff17