December 15, 2025
$50 PlanetScale Metal is GA for Postgres
How We Made Writes 10x Faster for Search
December 12, 2025
No, MongoDB Does Not Mean Skipping Design
With MongoDB, domain-driven design empowers developers to build robust systems by aligning the data model with business logic and access patterns.
Too often, developers are unfairly accused of being careless about data integrity. The logic goes: Without the rigid structure of an SQL database, developers will code impulsively, skipping formal design and viewing it as an obstacle rather than a vital step in building reliable systems.
Because of this misperception, many database administrators (DBAs) believe that the only way to guarantee data quality is to use relational databases. They think that using a document database like MongoDB means they can’t be sure data modeling will be done correctly.
Therefore, DBAs are compelled to predefine and deploy schemas in their database of choice before any application can persist or share data. This also implies that any evolution in the application requires DBAs to validate and run a migration script before the new release reaches users.
However, developers care just as much about data integrity as DBAs do. They put significant effort into the application’s domain model and avoid weakening it by mapping it to a normalized data structure that does not reflect application use cases.
Different Database Models, Different Data Models
Relational and document databases take different approaches to data modeling.
In a document database, you still design your data model. What changes is where and how the design happens, aligning closely with the domain model and the application’s access patterns. This is especially true in teams practicing domain‑driven design (DDD), where developers invest time in understanding domain objects, relationships and usage patterns.
The data model evolves alongside the development process — brainstorming ideas, prototyping, releasing a minimum viable product (MVP) for early feedback and iterating toward a stable, production-ready application.
Relational modeling often starts with a normalized design created before the application is fully understood. This model must then serve diverse future workloads and unpredictable data distributions. For example, a database schema designed for academic software could be used by both primary schools and large universities. This illustrates the strength of relational databases: the logical model exposed to applications is the same, even when the workloads differ greatly.
Document modeling, by contrast, is tailored to specific application usage. Instead of translating the domain model into normalized tables, which adds abstraction and hides performance optimizations, MongoDB stores aggregates directly in the way they appear in your code and business logic. Documents reflect the business transactions and are stored as contiguous blocks on disk, keeping the physical model aligned with the domain schema and optimized for access patterns.
Here are some other ways these two models compare.
Document Modeling Handles Relationships
Relational databases are often thought to excel at “strong relationships” between data, but this is partly because of a misunderstanding of the name — relations refers to mathematical sets of tuples (rows), not to the connections between them, which are relationships. Normalization actually loosens strong relationships, decoupling entities that are later matched at query time via joins.
In entity-relationship diagrams (ERDs), relationships are shown as simple one-to-one or one-to-many links, implemented via primary and foreign keys. ERDs don’t capture characteristics such as the direction of navigation or ownership between entities. Many-to-many relationships are modeled through join tables, which split them into two one-to-many relationships. The only property of a relationship in an ERD is to distinguish one-to-one (direct line) from one-to-many (crow’s foot), and the data model is the same whether the “many” is a few or billions.
Unified Modeling Language (UML)-class diagrams in object-oriented design, by comparison, are richer: They have a navigation direction and distinguish between association, aggregation, composition and inheritance. In MongoDB, these concepts map naturally:
- Composition (for instance, an order and its order lines) often appears as embedded documents, sharing a life cycle and preventing partial deletion.
- Aggregation (a customer and their orders) uses references when life cycles differ or when parent ownership is shared.
- Inheritance can be represented via polymorphism, a concept ERDs don’t directly capture and often work around with nullable columns.
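As a rough illustration, the composition and aggregation cases above might look like the following document shape (field names are hypothetical, shown as plain Python dicts rather than any particular driver API):

```python
# Composition: order lines are embedded in the order document.
# They share the order's life cycle -- deleting the order deletes its lines.
order = {
    "_id": "order-1001",
    "customer_id": "cust-42",  # Aggregation: a reference, because the
                               # customer outlives any single order.
    "status": "shipped",
    "lines": [                 # embedded, so no join is needed to read an order
        {"sku": "A-100", "qty": 2, "unit_price": 9.99},
        {"sku": "B-205", "qty": 1, "unit_price": 24.50},
    ],
}

# An orphaned order line is impossible by construction: a line exists
# only inside its parent order document.
total = sum(line["qty"] * line["unit_price"] for line in order["lines"])
```

Because the lines live inside the order, the whole business transaction is read and written as one contiguous document, which is the point made above about keeping the physical model aligned with the domain schema.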
Domain models in object-oriented applications and MongoDB documents better mirror real-world relationships. In relational databases, schemas are rigid for entities, while relationships are resolved at runtime with joins — more like a data scientist discovering correlations during analysis. SQL’s foreign keys prevent orphaned rows, but they aren’t explicitly referenced when writing SQL queries. Each query can define a different relationship.
Schema Validation Protects Data Integrity
MongoDB is schema-flexible, not schema-less. This feature is especially valuable for early-stage projects — such as brainstorming, prototyping, or building an MVP — because you don’t need to execute Data Definition Language (DDL) statements before writing data. The schema resides within the application code, and documents are stored as-is, without additional validation at first, as consistency is ensured by the same application that writes and reads them.
As the model matures, you can define schema validation rules directly in the database — field requirements, data types, and accepted ranges. You don’t need to declare every field immediately. You add validation as the schema matures, becomes stable, and is shared. This ensures consistent structure when multiple components depend on the same fields, or when indexing, since only the fields used by the application are helpful in the index.
Schema flexibility boosts development speed at every stage of your application. Early in prototyping, you can add fields freely without worrying about immediate validation. Later, with schema validation in place, you can rely on the database to enforce data integrity, reducing the need to write and maintain code that checks incoming data.
Schema validation can also enforce physical bounds. If you embed order items in the order document, you might validate that the array does not exceed a certain threshold. Instead of failing outright — like SQL’s check constraints (which often cause unhandled application errors) — MongoDB can log a warning, alerting the team without disrupting user operations. This enables the application to stay available while still flagging potential anomalies or necessary evolutions.
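A sketch of such a validator, with hypothetical collection and field names; `maxItems` bounds the embedded array, and setting `validationAction` to `"warn"` tells MongoDB to log violations instead of rejecting the write:

```python
# Hypothetical $jsonSchema validator for an "orders" collection. In a real
# deployment this document would be passed to create_collection or collMod;
# here it is shown as the plain document it would be.
order_validator = {
    "$jsonSchema": {
        "bsonType": "object",
        "required": ["customer_id", "status", "lines"],
        "properties": {
            "customer_id": {"bsonType": "string"},
            "status": {"enum": ["pending", "paid", "shipped", "cancelled"]},
            "lines": {
                "bsonType": "array",
                "maxItems": 100,  # physical bound on embedded order lines
                "items": {
                    "bsonType": "object",
                    "required": ["sku", "qty"],
                    "properties": {
                        "sku": {"bsonType": "string"},
                        "qty": {"bsonType": "int", "minimum": 1},
                    },
                },
            },
        },
    }
}

# "warn" logs violations without failing the operation, keeping the
# application available while flagging anomalies.
collection_options = {
    "validator": order_validator,
    "validationAction": "warn",
}
```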
Application Logic vs. Foreign Keys
In SQL databases, foreign keys are constraints, not definitions of relationships; relationships are evaluated at query time. SQL joins define relationships by listing columns as filter predicates, and foreign keys are not used in the JOIN clause. Foreign keys help prevent anomalies that arise from normalization, such as orphaned child rows, and can automate cascading deletes.
MongoDB takes a different approach: By embedding tightly coupled entities, you solve major integrity concerns upfront. For example, embedding order lines inside their order document means orphaned line items are impossible by design. Referential relationships are handled by application logic, often reading from stable collections (lists of values) before embedding their values into a document.
Because MongoDB models are built for known access patterns and life cycles, referential integrity is maintained through business rules rather than enforced generically. In practice, this better reflects real-world processes, where updates or deletions must follow specific conditions (for example, a price drop might apply to ongoing orders, but a price increase might not).
In relational databases, the schema is application-agnostic, so you must protect against any possible Data Manipulation Language (DML) modifications, not just those that result from valid business transactions. Doing so in the application would require extra locks or higher isolation levels, so it’s often more efficient to declare foreign keys for the database to enforce.
However, when domain use cases are well understood, protections are required for only a few cases and can be integrated into the business logic itself. For example, a product will never be deleted while ongoing transactions are using it. The business workflow often marks the product as unavailable long before it is physically deleted, and transactions are short-lived enough that there’s no overlap, preventing orphans without additional checks.
In domain‑driven models, where the schema is designed around specific application use cases, integrity can be fully managed by the application team alongside the business rules. While additional database verification may serve as a safeguard, it could limit scalability, particularly with sharding, and limit flexibility. An alternative is to run a periodic aggregation pipeline that asynchronously detects anomalies.
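A sketch of such an asynchronous anomaly check, with hypothetical collection and field names: a `$lookup` joins each order to its customer, and a `$match` on an empty result array surfaces orders whose reference no longer resolves. In a real deployment this would run periodically via `db.orders.aggregate(...)`:

```python
# Hypothetical pipeline: find orders whose customer_id no longer matches
# any customer document. Shown as the plain pipeline document.
orphan_check = [
    {"$lookup": {
        "from": "customers",          # referenced collection
        "localField": "customer_id",  # reference stored in the order
        "foreignField": "_id",
        "as": "customer",
    }},
    {"$match": {"customer": {"$size": 0}}},       # no matching parent found
    {"$project": {"_id": 1, "customer_id": 1}},   # report just the anomaly
]
```

Running this from a scheduled job keeps integrity checks off the write path, which is the scalability trade-off described above.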
Next Time You Hear That Myth
MongoDB does not mean “no design.” It means integrating database design with application design — embedding, referencing, schema validation and application‑level integrity checks to reflect actual domain semantics.
This approach keeps data modeling a first‑class concern for developers, aligning directly with the way domain objects are represented in code. The database structure evolves alongside the application, and integrity is enforced in the same language and pipelines that deliver the application itself.
In environments where DBAs only see the database model and SQL operations, foreign keys may appear indispensable. But in a DevOps workflow where the same team handles both the database and the application, schema rules can be implemented first in code and refined in the database as specifications stabilize. This avoids maintaining two separate models and the associated migration overhead, enabling faster, iterative releases while preserving integrity.
mlrd: DynamoDB-Compatible API on MySQL
Introducing mlrd (“mallard”) to the world: a DynamoDB-compatible API on MySQL.
Crazy, but it works really well and I’m confident it will help a lot of businesses save a lot of money.
Here’s why.
Support for Crunchy Hardened PostgreSQL Ends Soon: Don’t Get Caught Off Guard.
December 11, 2025
Introducing Amazon Aurora powers for Kiro
Sysbench for MySQL 5.6 through 9.5 on a 2-socket, 24-core server
This has results for the sysbench benchmark on a 2-socket, 24-core server. A post with results from 8-core and 32-core servers is here.
tl;dr
- old bad news - there were many large regressions from 5.6 to 5.7 to 8.0
- new bad news - there are some new regressions after MySQL 8.0
The read-heavy microbenchmarks are run for 600 seconds and the write-heavy for 900 seconds. The benchmark is run with 16 clients and 8 tables with 10M rows per table.
I provide charts below with relative QPS. The relative QPS is the following:
(QPS for some version) / (QPS for base version)
- from 5.6 to 5.7 there are big improvements for 5 tests, no changes for 2 tests and small regressions for 2 tests
- from 5.7 to 8.0 there are big regressions for all tests
- from 8.0 to 9.5 performance is stable
- for 9.5 the common result is ~20% less throughput vs 5.6
- context switch rate (cs/o) is stable, mutex contention hasn't changed
- CPU per query (cpu/o) drops by 35% from 5.6 to 5.7
- CPU per query (cpu/o) grows by 23% from 5.7 to 8.0
- CPU per query (cpu/o) is stable from 8.0 through 9.5
- from 5.6 to 5.7 throughput drops by 10% to 15%
- from 5.7 to 8.0 throughput drops by about 15%
- from 8.0 to 9.5 throughput is stable
- for 9.5 the common result is ~30% less throughput vs 5.6
- context switch rates are low and can be ignored
- CPU per query (cpu/o) grows by 11% from 5.6 to 5.7
- CPU per query (cpu/o) grows by 15% from 5.7 to 8.0
- CPU per query (cpu/o) is stable from 8.0 through 9.5
- from 5.6 to 5.7 there are big improvements for 2 tests, no changes for 1 test and regressions for 5 tests
- from 5.7 to 8.0 there are regressions for all tests
- from 8.0 through 9.5 performance is stable
- for 9.5 the common result is ~25% less throughput vs 5.6
- context switch rates are similar
- CPU per query (cpu/o) grows by 16% from 5.6 to 5.7
- CPU per query (cpu/o) grows by 15% from 5.7 to 8.0
- CPU per query (cpu/o) is stable from 8.0 through 9.5
- from 5.6 to 5.7 there are big improvements for 9 tests and no changes for 1 test
- from 5.7 to 8.0 there are regressions for all tests
- from 8.4 to 9.x there are regressions for 8 tests and no change for 2 tests
- for 9.5 vs 5.6: 5 are slower in 9.5, 3 are similar and 2 are faster in 9.5
- in 5.7, CPU per insert drops by 30% while context switch rates are stable vs 5.6
- in 8.0, CPU per insert grows by 36% while context switch rates are stable vs 5.7
- in 9.5, CPU per insert grows by 3% while context switch rates grow by 23% vs 8.4
A Christmas Carol of Two Databases
December 10, 2025
The insert benchmark on a small server: MySQL 5.6 through 9.5
This has results for MySQL versions 5.6 through 9.5 with the Insert Benchmark on a small server. Results for Postgres on the same hardware are here.
tl;dr
- good news - there are no large regressions after MySQL 8.0
- bad news - there are many large regressions from 5.6 to 5.7 to 8.0
The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM. Storage is one NVMe device for the database using ext-4 with discard enabled. The OS is Ubuntu 24.04. More details on it are here.
- cached - the values for X, Y, Z are 30M, 40M, 10M
- IO-bound - the values for X, Y, Z are 800M, 4M, 1M
- l.i0
- insert X rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
- l.x
- create 3 secondary indexes per table. There is one connection per client.
- l.i1
- use 2 connections/client. One inserts Y rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
- l.i2
- like l.i1 but each transaction modifies 5 rows (small transactions) and Z rows are inserted and deleted per table.
- Wait for S seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of S is a function of the table size.
- qr100
- use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested. This step is frequently not IO-bound for the IO-bound workload.
- qp100
- like qr100 except uses point queries on the PK index
- qr500
- like qr100 but the insert and delete rates are increased from 100/s to 500/s
- qp500
- like qp100 but the insert and delete rates are increased from 100/s to 500/s
- qr1000
- like qr100 but the insert and delete rates are increased from 100/s to 1000/s
- qp1000
- like qp100 but the insert and delete rates are increased from 100/s to 1000/s
When relative QPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. The Q in relative QPS measures:
- insert/s for l.i0, l.i1, l.i2
- indexed rows/s for l.x
- range queries/s for qr100, qr500, qr1000
- point queries/s for qp100, qp500, qp1000
- the load step (l.i0) is almost 2X faster for 5.6.51 vs 8.4.7 (relative QPS is 0.59)
- the create index step (l.x) is more than 2X faster for 8.4.7 vs 5.6.51
- the first write-only step (l.i1) has similar throughput for 5.6.51 and 8.4.7
- the second write-only step (l.i2) is 14% slower in 8.4.7 vs 5.6.51
- the range-query steps (qr*) are ~30% slower in 8.4.7 vs 5.6.51
- the point-query steps (qp*) are 38% slower in 8.4.7 vs 5.6.51
| dbms | l.i0 | l.x | l.i1 | l.i2 | qr100 | qp100 | qr500 | qp500 | qr1000 | qp1000 |
|---|---|---|---|---|---|---|---|---|---|---|
| 5.6.51 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| 5.7.44 | 0.91 | 1.53 | 1.16 | 1.09 | 0.83 | 0.83 | 0.83 | 0.84 | 0.83 | 0.83 |
| 8.0.44 | 0.60 | 2.42 | 1.05 | 0.87 | 0.69 | 0.62 | 0.70 | 0.62 | 0.70 | 0.62 |
| 8.4.7 | 0.59 | 2.54 | 1.04 | 0.86 | 0.68 | 0.61 | 0.68 | 0.61 | 0.67 | 0.60 |
| 9.4.0 | 0.59 | 2.57 | 1.03 | 0.86 | 0.69 | 0.62 | 0.69 | 0.62 | 0.70 | 0.61 |
| 9.5.0 | 0.59 | 2.61 | 1.05 | 0.85 | 0.69 | 0.62 | 0.69 | 0.62 | 0.69 | 0.62 |
- the load step (l.i0) is almost 2X faster for 5.6.51 vs 8.4.7 (relative QPS is 0.60)
- the create index step (l.x) is more than 2X faster for 8.4.7 vs 5.6.51
- the first write-only step (l.i1) is 1.54X faster for 8.4.7 vs 5.6.51
- the second write-only step (l.i2) is 1.82X faster for 8.4.7 vs 5.6.51
- the range-query steps (qr*) are ~20% slower in 8.4.7 vs 5.6.51
- the point-query steps (qp*) are 13% slower, 3% slower and 17% faster in 8.4.7 vs 5.6.51
| dbms | l.i0 | l.x | l.i1 | l.i2 | qr100 | qp100 | qr500 | qp500 | qr1000 | qp1000 |
|---|---|---|---|---|---|---|---|---|---|---|
| 5.6.51 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| 5.7.44 | 0.91 | 1.42 | 1.52 | 1.78 | 0.84 | 0.92 | 0.87 | 0.97 | 0.93 | 1.17 |
| 8.0.44 | 0.62 | 2.58 | 1.56 | 1.81 | 0.76 | 0.88 | 0.79 | 0.99 | 0.85 | 1.18 |
| 8.4.7 | 0.60 | 2.65 | 1.54 | 1.82 | 0.74 | 0.87 | 0.77 | 0.98 | 0.82 | 1.17 |
| 9.4.0 | 0.61 | 2.68 | 1.52 | 1.76 | 0.75 | 0.86 | 0.80 | 0.97 | 0.85 | 1.16 |
| 9.5.0 | 0.60 | 2.75 | 1.53 | 1.73 | 0.75 | 0.87 | 0.79 | 0.97 | 0.84 | 1.17 |
The insert benchmark on a small server: Postgres 12.22 through 18.1
This has results for Postgres versions 12.22 through 18.1 with the Insert Benchmark on a small server.
Postgres continues to be boring in a good way. It is hard to find performance regressions.
tl;dr for a cached workload
- performance has been stable from Postgres 12 through 18
- performance has mostly been stable
- create index has been ~10% faster since Postgres 15
- throughput for the write-only steps has been ~10% less since Postgres 15
- throughput for the point-query steps (qp*) has been ~20% better since Postgres 13
The server is an ASUS ExpertCenter PN53 with an AMD Ryzen 7 7735HS CPU, 8 cores, SMT disabled, 32G of RAM. Storage is one NVMe device for the database using ext-4 with discard enabled. The OS is Ubuntu 24.04. More details on it are here.
- conf.diff.cx10b_c8r32
- uses io_method='sync' to match Postgres 17 behavior
- conf.diff.cx10c_c8r32
- uses io_method='worker' and io_workers=16 to do async IO via a thread pool. I eventually learned that 16 is too large.
- conf.diff.cx10d_c8r32
- uses io_method='io_uring' to do async IO via io_uring
- cached - the values for X, Y, Z are 30M, 40M, 10M
- IO-bound - the values for X, Y, Z are 800M, 4M, 1M
- l.i0
- insert X rows per table in PK order. The table has a PK index but no secondary indexes. There is one connection per client.
- l.x
- create 3 secondary indexes per table. There is one connection per client.
- l.i1
- use 2 connections/client. One inserts Y rows per table and the other does deletes at the same rate as the inserts. Each transaction modifies 50 rows (big transactions). This step is run for a fixed number of inserts, so the run time varies depending on the insert rate.
- l.i2
- like l.i1 but each transaction modifies 5 rows (small transactions) and Z rows are inserted and deleted per table.
- Wait for S seconds after the step finishes to reduce variance during the read-write benchmark steps that follow. The value of S is a function of the table size.
- qr100
- use 3 connections/client. One does range queries and performance is reported for this. The second does 100 inserts/s and the third does 100 deletes/s. The second and third are less busy than the first. The range queries use covering secondary indexes. If the target insert rate is not sustained then that is considered to be an SLA failure. If the target insert rate is sustained then the step does the same number of inserts for all systems tested. This step is frequently not IO-bound for the IO-bound workload.
- qp100
- like qr100 except uses point queries on the PK index
- qr500
- like qr100 but the insert and delete rates are increased from 100/s to 500/s
- qp500
- like qp100 but the insert and delete rates are increased from 100/s to 500/s
- qr1000
- like qr100 but the insert and delete rates are increased from 100/s to 1000/s
- qp1000
- like qp100 but the insert and delete rates are increased from 100/s to 1000/s
When relative QPS is > 1.0 then performance improved over time. When it is < 1.0 then there are regressions. The Q in relative QPS measures:
- insert/s for l.i0, l.i1, l.i2
- indexed rows/s for l.x
- range queries/s for qr100, qr500, qr1000
- point queries/s for qp100, qp500, qp1000
I focus on the latest versions. Throughput for 18.1 is within 2% of 12.22, with the exception of the l.i2 benchmark step. This is great news because it means that Postgres has avoided introducing new CPU overhead as they improve the DBMS. There is some noise from the l.i2 benchmark step and that doesn't surprise me because it is likely variance from two issues -- vacuum and get_actual_variable_range.
- throughput for the load step (l.i0) is 1% less in 18.1 vs 12.22
- throughput for the index step (l.x) is 13% better in 18.1 vs 12.22
- throughput for the write-only steps (l.i1, l.i2) is 11% and 12% less in 18.1 vs 12.22
- throughput for the range-query steps (qr*) is 2%, 3% and 3% less in 18.1 vs 12.22
- throughput for the point-query steps (qp*) is 22%, 23% and 23% better in 18.1 vs 12.22
Rotate SSL/TLS Certificates in Valkey/Redis Without Downtime
14x Faster Faceted Search in PostgreSQL with ParadeDB
How to Monitor Kafka with ClickHouse® Kafka Engine
December 09, 2025
How to Turn a MySQL Unique Key Into a Primary Key
December 08, 2025
RocksDB performance over time on a small Arm server
This post has results for RocksDB on an Arm server. I previously shared results for RocksDB performance using gcc and clang. Here I share results using clang with LTO.
RocksDB is boring, there are few performance regressions.
tl;dr
- for cached workloads throughput with RocksDB 10.8 is as good or better than with 6.29
- for not-cached workloads throughput with RocksDB 10.8 is similar to 6.29 except for the overwrite test where it is 7% less, probably from correctness checks added in 7.x and 8.x.
Software
I used RocksDB versions 6.29, 7.0, 7.10, 8.0, 8.4, 8.8, 8.11, 9.0, 9.4, 9.8, 9.11 and 10.0 through 10.8.
I compiled each version with clang version 18.3.1 and link-time optimization (LTO) enabled. The build command line was:
flags=( DISABLE_WARNING_AS_ERROR=1 DEBUG_LEVEL=0 V=1 VERBOSE=1 )
# for clang+LTO
AR=llvm-ar-18 RANLIB=llvm-ranlib-18 CC=clang CXX=clang++ \
make USE_LTO=1 "${flags[@]}" static_lib db_bench
I used a small Arm server from the Google cloud running Ubuntu 22.04. The server type was c4a-standard-8-lssd with 8 cores and 32G of RAM. Storage was 2 local SSDs with RAID 0 and ext-4.
Benchmark
Overviews on how I use db_bench are here and here.
The benchmark was run with 1 thread and used the LRU block cache.
Tests were run for three workloads:
- byrx - database cached by RocksDB
- iobuf - database is larger than RAM and RocksDB used buffered IO
- iodir - database is larger than RAM and RocksDB used O_DIRECT
- fillseq
- load RocksDB in key order with 1 thread
- revrangeww, fwdrangeww
- do reverse or forward range queries with a rate-limited writer. Report performance for the range queries
- readww
- do point queries with a rate-limited writer. Report performance for the point queries.
- overwrite
- overwrite (via Put) random keys
Relative QPS
Many of the tables below (inlined and via URL) show the relative QPS which is:
(QPS for my version / QPS for RocksDB 6.29)
The base version varies and is listed below. When the relative QPS is > 1.0 then my version is faster than RocksDB 6.29. When it is < 1.0 then there might be a performance regression or there might just be noise.
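As a tiny worked example of the ratio (the QPS figures below are made up for illustration):

```python
# Relative QPS: throughput of a version divided by the base version's throughput.
def relative_qps(qps_version: float, qps_base: float) -> float:
    return qps_version / qps_base

# If the base version (e.g. RocksDB 6.29) does 50,000 QPS and a later
# version does 46,500 QPS, the relative QPS is 0.93 -- possibly a 7%
# regression, possibly just noise.
r = relative_qps(46_500, 50_000)
```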
The spreadsheet with numbers and charts is here. Performance summaries are here.
Results: byrx
This has results for the byrx workload where the database is cached by RocksDB.
RocksDB 10.x is faster than 6.29 for all tests.
Results: iobuf
This has results for the iobuf workload where the database is larger than RAM and RocksDB used buffered IO.
Performance in RocksDB 10.x is about the same as in 6.29 except for overwrite. I think the performance decreases in overwrite that arrived in versions 7.x and 8.x come from new correctness checks; throughput in 10.8 is 7% less than in 6.29. The big drop for fillseq in 10.6.2 was from bug 13996.
Results: iodir
This has results for the iodir workload where the database is larger than RAM and RocksDB used O_DIRECT.
Performance in RocksDB 10.x is about the same as in 6.29 except for overwrite. I think the performance decreases in overwrite that arrived in versions 7.x and 8.x come from new correctness checks; throughput in 10.8 is 7% less than in 6.29. The big drop for fillseq in 10.6.2 was from bug 13996.
Brainrot
I drive my daughter to school as part of a car pool. Along the way, I am learning a new language, Brainrot.
So what is brainrot? It is what you get when you marinate your brain with silly TikTok, YouTube Shorts, and Reddit memes. It is slang for "my attention span is fried and I like it". Brainrot is a self-deprecating language. Teens are basically saying: I know this is dumb, but I am choosing to speak it anyway.
What makes brainrot different from old-school slang is its speed and scale. When we were teenagers, slang spread by word of mouth. It mostly stayed local in our school hallways or neighborhood. Now memes go global in hours. A meme is born in Seoul at breakfast and widespread in Ohio by six seven pm. The language mutates at escape velocity and gets weird fast.
Someone even built a brainrot programming language. The joke runs deep, and is getting some infrastructure.
Here are a few basic brainrot terms you will hear right away.
- He is cooked: It means he is finished, doomed, beyond saving.
- He is cooking: The opposite. It means he is doing something impressive. Let him cook.
- Mewing: Jawline exercises that started half as fitness advice and half as a meme. Now it mostly means trying too hard to look sharp.
- Aura: Your invisible social vibe. You either have it or you do not. Your aura-farming does not impress my teens.
- NPC: Someone who acts on autopilot, like a background character in a game.
- Unc: An out-of-touch older guy. That would be me?
I can anticipate the reactions to this post. Teenagers will shrug: "Obviously. How is this news?" Parents of teens will laugh in recognition. Everyone else will be lost and move on.
I have seen things. I am not 50 yet, but I am getting there. I usually write about distributed systems and databases. I did not plan to write this post. This post insisted on being written through me. We will return to our regularly scheduled programming.
But here is my real point. I think the kids are alright.
It is not uncommon for a generation to be declared doomed by the one before it, yet Gen Z and Gen Alpha may have taken the heaviest hit and been written off as lost causes. But what I see is a generation with sharp self-mocking humor. They have short attention spans for things they do not care about. I think they do this out of sincerity. They don't see the purpose in mundane things, and for many things they feel they lack enough agency. But what is new is how hard they can lock in (focus) on what they care about, and how fast they can form real bonds around shared interests. They are more open with each other. They are more inclusive by default. They are a feeling bunch. They waste no patience on things they find pointless. But when it matters, they show up fully.
From the outside, their culture may look absurd and chaotic. But under the memes, I see a group that feels deeply, adapts quickly, and learns in public. They are improvising in real time. And despite all predictions to the contrary, they might actually know what they are doing.