a curated list of database news from authoritative sources

October 28, 2024

The Tinybird DynamoDB Connector is now GA

The Tinybird DynamoDB Connector is now GA and ready for production use in all Workspaces. Last month, the DynamoDB Connector hatched into public beta and quickly became the fastest-adopted Connector launch yet. We knew DynamoDB was popular, but the reception was beyond what we expected! “Tinybird's DynamoDB Connector has proved to be easy to set up, reliable, and low-latency, while providing cross-partition queries, with unlimited indexing, at sub-second speeds for our massive…”

October 21, 2024

Group Commit and Transaction Dependency Tracking

MySQL 8.0 and newer change and improve how we measure and monitor replication lag. Even though multi-threaded replication (MTR) has been on by default for the last three years (since v8.0.27, released October 2021), the industry has been steeped in single-threaded replication for nearly 30 years. As a result, replication lag with MTR is a complicated topic because it depends on version, configuration, and more. This three-part series provides a detailed understanding, starting from what was originally an unrelated feature: binary log group commit.
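The version-and-configuration point is easier to reason about with the relevant server variables in front of you. Below is a minimal sketch (not from the series) that uses the mysql2 Node package to dump the MySQL 8.0 settings most commonly tied to MTR and group commit; variable names changed across 8.0 minor versions, so older replicas may expose the slave_* equivalents instead, and the connection details here are placeholders.

```ts
// Minimal sketch: inspect settings that influence multi-threaded replication
// and group commit on a MySQL 8.0+ replica. Assumes the `mysql2` npm package
// and local connection credentials (both are illustrative assumptions).
import mysql from "mysql2/promise";

async function showReplicationSettings(): Promise<void> {
  const conn = await mysql.createConnection({
    host: "127.0.0.1",
    user: "root",
    password: process.env.MYSQL_PASSWORD,
  });

  // Variables commonly involved in MTR behavior and transaction dependency tracking.
  const [rows] = await conn.query(
    `SHOW GLOBAL VARIABLES WHERE Variable_name IN (
       'replica_parallel_workers',
       'replica_parallel_type',
       'replica_preserve_commit_order',
       'binlog_transaction_dependency_tracking',
       'binlog_group_commit_sync_delay'
     )`
  );
  console.table(rows);

  await conn.end();
}

showReplicationSettings().catch(console.error);
```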

October 15, 2024

October 10, 2024

Database Triggers

Triggers automatically run code whenever data in a table changes. A library in the convex-helpers npm package allows you to attach trigger functions to your Convex database.
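As a rough sketch of what that looks like, the snippet below registers a trigger with the Triggers helper from convex-helpers and wraps the standard mutation builder so every write passes through it. The imports and helper names follow the convex-helpers documentation but may differ by version; the "users" table, the shape of the change object, and the logging logic are illustrative assumptions.

```ts
// Sketch of attaching a trigger function with convex-helpers (names assumed).
import { mutation as rawMutation } from "./_generated/server";
import type { DataModel } from "./_generated/dataModel";
import { Triggers } from "convex-helpers/server/triggers";
import { customMutation, customCtx } from "convex-helpers/server/customFunctions";

const triggers = new Triggers<DataModel>();

// Runs in the same transaction whenever a document in "users" changes.
triggers.register("users", async (ctx, change) => {
  if (change.operation === "delete") {
    console.log(`user ${change.id} was deleted`);
  }
});

// Export this wrapped mutation instead of the raw one so writes go through the triggers.
export const mutation = customMutation(rawMutation, customCtx(triggers.wrapDB));
```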

Anatomy of a Throttler, part 2

Design considerations for implementing a database throttler, with a comparison of singular vs. distributed throttler deployments.
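To make the singular case concrete, here is a hypothetical sketch (not from the article) of a single-process throttler: one loop probes replica lag and publishes a go/no-go decision that write-heavy clients poll before each batch. All names and thresholds are made up; a distributed deployment would run a prober per node and aggregate or share the results instead of trusting one process.

```ts
// Hypothetical singular throttler: one prober, one shared decision.
type LagProbe = () => Promise<number>; // returns replica lag in seconds

class Throttler {
  private healthy = false;

  constructor(
    private probeLag: LagProbe,
    private maxLagSeconds: number,
    private intervalMs: number,
  ) {}

  start(): void {
    setInterval(async () => {
      try {
        const lag = await this.probeLag();
        this.healthy = lag <= this.maxLagSeconds;
      } catch {
        // If the probe fails, be conservative and throttle writes.
        this.healthy = false;
      }
    }, this.intervalMs);
  }

  // Clients (e.g. an online migration) call this before each chunk of writes.
  allowWrites(): boolean {
    return this.healthy;
  }
}

// Usage: throttle when any replica reports more than 5 seconds of lag.
const throttler = new Throttler(async () => 0 /* query lag here */, 5, 1000);
throttler.start();
```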

October 09, 2024

October 08, 2024

Building offline-first mobile apps with Supabase, Flutter and Brick

Brick is an all-in-one data manager for Flutter that handles querying and uploading between Supabase and local caches like SQLite. Using Brick, developers can focus on implementing the application without worrying about translating or storing their data.

Why You Shouldn't Forget to Optimize the Data Layout

When you want to speed up your program, the obvious step is to recall the lessons of your data structures class and optimize the algorithmic complexity. Clearly, algorithms are the stars of every program, as swapping a hot O(n) algorithm for one with lower complexity, such as O(log n), can yield almost arbitrarily large performance improvements. However, the way data is laid out also affects performance significantly: programs run on physical machines with physical properties, such as the varying access latencies of caches, RAM, and disks. After you have optimized your algorithm, you need to account for these properties to achieve the best possible performance. An optimized data layout takes your algorithms and access patterns into account when deciding how to store the bytes of your data structure on physical storage, and it can therefore make your algorithms run several times faster. In this blog post, we show an example where we achieve 4x better read performance simply by changing the data layout to match our access pattern.
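To illustrate the general idea (this is not the post's benchmark, and the 4x figure above is theirs, not measured here), the sketch below stores the same records in a row-oriented layout (an array of objects) and in a column-oriented layout (one contiguous typed array per field). Scanning a single field touches far less memory in the columnar layout, which is where this kind of speedup typically comes from; the record shape and sizes are invented for the example.

```ts
// Row layout vs. column layout for the same data, scanning one field.
interface Order {
  id: number;
  price: number;
  quantity: number;
}

// Row-oriented: scanning `price` drags the other fields through the cache too.
const rows: Order[] = Array.from({ length: 1_000_000 }, (_, i) => ({
  id: i,
  price: Math.random() * 100,
  quantity: (i % 10) + 1,
}));

function totalPriceRows(data: Order[]): number {
  let sum = 0;
  for (const r of data) sum += r.price;
  return sum;
}

// Column-oriented: prices are contiguous, so the scan is cache- and prefetch-friendly.
const columns = {
  id: new Int32Array(rows.length),
  price: new Float64Array(rows.length),
  quantity: new Int32Array(rows.length),
};
for (let i = 0; i < rows.length; i++) {
  columns.id[i] = rows[i].id;
  columns.price[i] = rows[i].price;
  columns.quantity[i] = rows[i].quantity;
}

function totalPriceColumns(price: Float64Array): number {
  let sum = 0;
  for (let i = 0; i < price.length; i++) sum += price[i];
  return sum;
}

console.log(totalPriceRows(rows), totalPriceColumns(columns.price));
```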