Nope. SQLite is already available in the same process, using the same file system as your server, and in some cases (e.g. Python, whose standard library ships sqlite3) without adding any new dependencies. It's downright silly to try to think of a way to make it easier.
Here's your easy, cheap, and lightweight relational datastore API:
import sqlite3

# Opens (or creates) a database file on the local file system.
conn = sqlite3.connect("db.sqlite3")
cur = conn.cursor()
cur.execute("SELECT * FROM products LIMIT 25")
print(cur.fetchall())
conn.close()
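Writing is just as short. A minimal sketch, assuming a hypothetical products table with name and price columns:

import sqlite3

conn = sqlite3.connect("db.sqlite3")
# Table and columns here are hypothetical; the INSERT is parameterized.
conn.execute("CREATE TABLE IF NOT EXISTS products (name TEXT, price REAL)")
conn.execute("INSERT INTO products (name, price) VALUES (?, ?)", ("widget", 9.99))
conn.commit()
conn.close()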
We have a bunch of serverless functions (AWS Lambda) that generate records that fit a relational data structure.
We would like to write these records to a persistent relational data store so that another downstream process can read them.
Amazon RDS seems like overkill.
Amazon DynamoDB (NoSQL) seems like a misfit because we want to execute relational queries and some joins against this data.
We could write CSV files to S3 and query them with Athena/Presto, but that seems clunky and slow.
Am I missing an obvious solution here, or is there space for a service that offers a lightweight relational datastore that multiple loosely coupled readers and writers can use?
Use RDS. SQLite only works if you have one persistent machine with a disk and file system.
The problems with AWS Lambda and SQLite are: 1) network file systems often don't support the APIs that SQLite needs for concurrent access, and SQLite is not a database server; 2) local storage for AWS Lambda is ephemeral, so your DB will be deleted, and if two Lambdas run concurrently, they won't be using the same DB.
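To make the second point concrete, here is a minimal sketch of a Lambda handler that writes to SQLite in /tmp (the payload shape and table are hypothetical):

import sqlite3

def handler(event, context):
    # /tmp is the only writable path in Lambda, and it is ephemeral:
    # a new execution environment starts with an empty /tmp, and two
    # concurrent invocations get two separate /tmp directories.
    conn = sqlite3.connect("/tmp/db.sqlite3")
    conn.execute("CREATE TABLE IF NOT EXISTS records (body TEXT)")
    conn.execute("INSERT INTO records (body) VALUES (?)", (event["body"],))
    conn.commit()
    # This count only reflects writes seen by THIS environment, not
    # writes made by other concurrent Lambda instances.
    count = conn.execute("SELECT COUNT(*) FROM records").fetchone()[0]
    conn.close()
    return {"count": count}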
Just use one EC2 instance with SQLite, or Lambda with RDS.
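The Lambda-plus-RDS route looks about like this. A minimal sketch, assuming a Postgres RDS instance, a pre-created records table, and the psycopg2 driver packaged with the function; the environment variable names and payload shape are placeholders:

import os
import psycopg2

def handler(event, context):
    # Connection details come from environment variables set on the function.
    conn = psycopg2.connect(
        host=os.environ["DB_HOST"],
        dbname=os.environ["DB_NAME"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
    )
    # "with conn" commits the transaction on success; every concurrent
    # Lambda sees the same database, which is the whole point.
    with conn, conn.cursor() as cur:
        cur.execute("INSERT INTO records (body) VALUES (%s)", (event["body"],))
    conn.close()
    return {"status": "ok"}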
Is it possible to offer SQLite as a managed / serverless service?
A lightweight and cheap relational data store that we just consume using an API.