I’m sharing these ideas as concepts I’d be excited to see programming languages explore. They represent different ways to push implementation details into the language or runtime, allowing developers to focus more on intent rather than mechanics.
1. Incrementality
The Problem
When a small input to a large function changes, we typically recompute everything from scratch, wasting work on results that didn’t actually change.
The Solution
Incrementality allows us to track dependencies and only recompute what’s necessary.
In UI frameworks like React, incrementality means only re-rendering components affected by state changes. In database systems, it means only recomputing affected query results. As a language feature, this would mean automatically tracking dependencies between data and derived values across your entire program, eliminating manual optimization work.
Imagine writing code that declaratively describes a transformation (like server state → UI state), and having the runtime automatically ensure they stay in sync, computing only what changed.
Consider a concrete backend example: an API endpoint that returns a list of 100 serialized objects as JSON. Currently, when a single object changes, we typically serialize the entire collection again. With incrementality as a language feature, the system would automatically detect which objects changed and only re-serialize those specific objects, significantly reducing computational overhead for frequently accessed endpoints.
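A minimal hand-rolled sketch in Python of what such a runtime could do for you automatically; the per-object id and version fields, and the explicit cache, are illustrative assumptions rather than a real API:

import json

# Sketch: cache serialized output per object, keyed by a version number,
# so only changed objects are re-serialized. A language-level feature
# would maintain this dependency tracking for you.
_cache = {}  # obj id -> (version, serialized JSON)

def serialize_one(obj: dict) -> str:
    hit = _cache.get(obj["id"])
    if hit is not None and hit[0] == obj["version"]:
        return hit[1]                          # unchanged: reuse prior work
    out = json.dumps(obj, sort_keys=True)      # changed: recompute just this one
    _cache[obj["id"]] = (obj["version"], out)
    return out

def serialize_all(objs: list) -> str:
    return "[" + ",".join(serialize_one(o) for o in objs) + "]"

objs = [{"id": 1, "version": 3}, {"id": 2, "version": 7}]
serialize_all(objs)         # first call serializes both objects
objs[0]["version"] = 4
serialize_all(objs)         # only the changed object is re-serialized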
While frontend frameworks like Svelte already implement this idea, it should become a core feature for backend systems too. Caches and indexes are perfect examples of derived states (data that’s calculated from other data) that could be managed declaratively, with the language handling the incremental updates automatically.
For those interested in the concept, see Incremental and incr_dom from Jane Street (with follow-up work in Bonsai), and Salsa in Rust.
2. Auto Backward Differentiation
The Concept
Automatic differentiation is a technique that calculates derivatives of functions by tracking how values change through each step of computation. In simpler terms, it automatically answers the question: “If I change input X slightly, how will that affect output Y?”
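To make the bookkeeping concrete, here’s a tiny Python sketch using forward-mode dual numbers (backward mode, the variant used in machine learning, records the same steps and replays them in reverse); the Dual class is purely illustrative:

class Dual:
    # Each value carries (value, derivative), so derivatives flow
    # through every step of an ordinary computation.
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

def f(x):
    return x * x + x * 3   # f(x) = x^2 + 3x, written as ordinary code

x = Dual(2.0, 1.0)         # seed dx/dx = 1
print(f(x).val, f(x).dot)  # 10.0 and f'(2) = 2*2 + 3 = 7.0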
In machine learning, this powers gradient descent by determining how to adjust model parameters. But its potential is far broader - it enables any algorithm to understand its own sensitivity to inputs and optimize itself accordingly.
Libraries like PyTorch and TensorFlow allow developers to write forward computations while automatically generating the backward pass for gradients. This powerful capability should be a language-level primitive rather than confined to specialized libraries.
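For instance, in PyTorch the backward pass comes for free once the forward computation is written:

import torch

# The forward computation is written as ordinary code; PyTorch records
# the operations and derives the backward pass automatically.
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2 + 2 * x        # y = x^2 + 2x
y.backward()              # autodiff fills in dy/dx
print(x.grad)             # tensor(8.) because dy/dx = 2x + 2 = 8 at x = 3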
Swift has already started exploring this direction with its @differentiable attribute. By making differentiation a core language feature, developers could write regular functions and automatically get their derivatives without additional work.
This would be extremely valuable beyond machine learning. Imagine optimizing traditional algorithms with learned components, like:
automatically tuning the pivot selection in a quicksort algorithm based on data characteristics, e.g., yielding an average 3.38x performance improvement over C++ STL sort
learning better indexing structures in databases, e.g., outperforming cache-optimized B-Trees by up to 70% in speed while saving an order of magnitude in memory across several real-world datasets
3. Auto Reversibility
The Idea
When you write an encoding function, the language could automatically generate the corresponding decoding function, saving development time and reducing errors.
Not all encodings are reversible (mapping every input to a constant is lossy), but the type system should warn you about this. For composable transforms where each step is individually reversible, the language should be able to generate the reverse operation automatically.
This is particularly valuable in serialization scenarios where you often need to write both encoding and decoding logic. The language could warn you when an encoding scheme isn’t reversible and automatically generate the decoding code when it is. Like with auto differentiation, this would create a new class of guarantees where the language ensures round-trip conversions work correctly—something currently left to unit tests.
Conceptually, auto reversibility shares foundations with auto differentiation - both analyze code structure to generate new code with related but different behavior. While differentiation focuses on how outputs change when inputs change, reversibility focuses on how to get back to original inputs from outputs.
For composable transforms where each step is individually reversible, the language would automatically combine the individual reverse operations in the correct order. This creates powerful guarantees: data that goes through a series of transformations can always be accurately restored, something currently left to extensive testing.
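Here’s a minimal Python sketch of that composition rule; the Rev and compose names are hypothetical stand-ins for what a language could derive automatically:

from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Rev:
    fwd: Callable[[Any], Any]  # the transform
    inv: Callable[[Any], Any]  # its inverse

def compose(*steps: Rev) -> Rev:
    def fwd(x):
        for s in steps:
            x = s.fwd(x)
        return x
    def inv(y):
        for s in reversed(steps):  # inverses run in reverse order
            y = s.inv(y)
        return y
    return Rev(fwd, inv)

# Each step is individually reversible, so the pipeline is too.
add_header = Rev(lambda b: b"HDR" + b, lambda b: b[3:])
to_hex     = Rev(lambda b: b.hex().encode(), lambda b: bytes.fromhex(b.decode()))

codec = compose(add_header, to_hex)
assert codec.inv(codec.fwd(b"payload")) == b"payload"  # round trip holds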
For those interested in this topic, check out the Boomerang language and the “lens” library in Haskell.
4. Speculative Execution
The Concept
In CPU design, speculative execution transforms linear sequences of instructions into parallel ones by predicting branch outcomes and executing instructions in parallel before knowing whether they’re needed. This technique delivers 10x improvements in instructions executed per second by bypassing sequential bottlenecks; watch Jim Keller explain this beautifully here. How much we can parallelize depends on the quality of our predictions, and speculation stops at I/O boundaries.
What if our programming languages had primitives for speculative execution at a higher level? When your program hits operations that can’t be parallelized (like waiting for user input or network calls), the runtime could predict likely outcomes and continue processing dependent code paths in parallel.
This would transform programming by dramatically improving perceived performance in user-facing applications, potentially reducing wait times by orders of magnitude without complex custom implementations.
What makes this particularly exciting now is that modern machine learning and deep learning techniques are dramatically improving our ability to predict outcomes. Where CPUs use simple branch prediction, language runtimes could leverage sophisticated ML models to predict entire datasets, user inputs, or API responses with increasing accuracy. This could transform inherently sequential programs into highly parallel ones, potentially delivering order-of-magnitude performance improvements for previously bottlenecked operations.
Here’s how an interface might look (sketched as Python protocols):
from typing import Any, Protocol

class ExecutableFunc(Protocol):
    def call(self) -> Any: ...                 # blocks until the result is ready

class SpeculativeExecutableFunc(Protocol):
    def call_speculatively(self) -> None: ...  # non-blocking: starts the pre-work
    def cancel(self) -> None: ...              # abandons speculation cleanly
    def confirm(self) -> Any: ...              # potentially blocking: commits the result
The key to making this secure is that speculative work should not produce visible side effects until explicitly confirmed. The “cancel” semantics are crucial—they must provide clean, reliable ways to exit speculative execution paths without external systems ever seeing the effects of canceled operations.
A real-world example: Instagram used to speculatively upload images before users completed their posts. Since machines are so much faster than humans, they can do significant work while a user is still making decisions.
Imagine a user signup form that speculatively initializes all the infrastructure needed, canceling if the user abandons the process. By the time the user completes the form, the system is already ready.
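Here’s a hedged Python sketch of that signup flow in the style of the interface above; provision_account and teardown_account are hypothetical helpers, not a real API:

import concurrent.futures as cf

def provision_account(draft):
    # Pre-work happens in private, staged storage; nothing is externally
    # visible until confirmed.
    return {"staged_user": draft["email"]}

def teardown_account(staged):
    staged.clear()  # cancel: leave no trace of the speculative work

pool = cf.ThreadPoolExecutor(max_workers=1)
future = pool.submit(provision_account, {"email": "a@example.com"})  # starts immediately

user_completed_form = True  # in practice, awaited from the UI
if user_completed_form:
    account = future.result()          # confirm: potentially blocking, commits
else:
    teardown_account(future.result())  # cancel: undo the pre-work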
5. External State Management
The Challenge
Currently, program state lives inside a process’s memory. If the process dies, any state that wasn’t explicitly persisted elsewhere is lost, and work must be retried from scratch. Long-running computations, retries, and multi-process coordination all require hand-crafted state machines.
The Solution
Orchestration systems like Airflow, Metaflow, and Temporal address this problem by persisting state outside the running program so it survives crashes.
State management could become a language-level primitive; integrating it directly into languages would make these patterns more accessible and reliable. The runtime would automatically track state and persist key checkpoints outside the process.
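As a sketch of the idea, here’s a hypothetical durable decorator in Python that checkpoints progress outside the process; systems like Temporal offer similar semantics today as libraries rather than language primitives:

import json
import os

CKPT = "job.ckpt"  # a checkpoint file standing in for runtime-managed storage

def durable(fn):
    # Wraps a batch job so that completed items are checkpointed outside
    # the process; a re-run after a crash skips them.
    def wrapper(items):
        done = []
        if os.path.exists(CKPT):
            with open(CKPT) as f:
                done = json.load(f)
        for item in items:
            if item in done:
                continue            # already completed before the crash
            fn(item)
            done.append(item)
            with open(CKPT, "w") as f:
                json.dump(done, f)  # persist progress after every item
    return wrapper

@durable
def process(item):
    print("processing", item)

process(["a", "b", "c"])  # safe to re-run: resumes where it left off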
A Unified System
These concepts compose naturally. Speculative execution predicts outcomes to bypass sequential bottlenecks, and it becomes even more powerful when paired with incrementality and auto-reversibility.
Incrementality ensures that when predictions are wrong, we only need to recompute the specific parts affected by the misprediction rather than throwing away all speculative work. This dramatically reduces the cost of speculation errors.
Meanwhile, auto-reversibility provides a safety mechanism for rolling back side effects from incorrect speculations. When a prediction proves wrong, the system can automatically generate and execute reversal operations, cleanly undoing partial work without developer-written cleanup code.
Together, these three language features could transform how we approach performance optimization: speculative execution provides parallelism where none existed before, incrementality minimizes wasted computation when predictions fail, and auto-reversibility ensures the system remains consistent throughout. This combination would allow developers to write simple, sequential code while the language runtime automatically delivers parallel performance with graceful error handling.
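Here’s a tiny sketch of that interplay: each speculative side effect is recorded alongside its inverse, so a misprediction can be rolled back cleanly (all names are illustrative):

undo_log = []

def apply_effect(effect, inverse):
    effect()                 # run the speculative side effect now...
    undo_log.append(inverse) # ...but remember how to undo it

def rollback():
    while undo_log:
        undo_log.pop()()     # run inverses in reverse order

state = {"count": 0}

# Speculate that the user will confirm, so apply the effect eagerly.
apply_effect(lambda: state.update(count=state["count"] + 1),
             lambda: state.update(count=state["count"] - 1))

prediction_was_wrong = True
if prediction_was_wrong:
    rollback()               # cleanly undo the speculative work

assert state["count"] == 0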
Philosophy and Design Challenges
Great programming languages make it easy to describe “what” we want to achieve while providing understandable guarantees about “how” it gets achieved. Many languages today force programmers to handle both aspects, often requiring explicit control of implementation details that could instead be managed by the runtime.
When I talk about adding “magic” to the runtime, I acknowledge that it’s a high bar to clear.
Developers often fall back to more “understandable” approaches when things aren’t done well. But this doesn’t need to be the default state. Consider how most programmers don’t think about the fact that their code gets pipelined, executed out of order, and speculatively executed on CPU hardware—because this is done extremely well at the hardware level.
The hardware provides an ‘imperative illusion’: programmers write sequential code, but the CPU executes it differently while rigorously upholding a contract (the ISA) that guarantees the observable results match the sequential model. Because this contract is reliable, developers trust the abstraction. I believe similar abstractions can work at the programming language runtime level too, allowing developers to write straightforward code while the runtime performs complex optimizations transparently, provided its behavioral contract is equally strong.
That said, these new features introduce several challenges:
Predictability vs. Optimization: If the language or runtime’s contractual promises aren’t kept, developers end up “fighting the compiler” rather than benefiting from its intelligence. This is similar to what Horace He describes with compiler optimizations—when they don’t work as expected, users are forced to understand implementation details they shouldn’t need to care about.
Control vs. Convenience: The key insight is that a good programming model provides guarantees users can rely on. Rather than adding compiler magic that sometimes works, we should design programming models where the optimizations are part of the contract with users. For example, with speculative execution, we need clear semantics around cancellation to ensure side effects remain invisible until confirmed.
Debugging Complexity: When things go wrong with magic features, higher levels of abstraction can make it harder to understand what happened. Users need clear mental models and tools to reason about behavior.
Performance Characteristics: These features may introduce overhead or unpredictable performance profiles that make systems harder to reason about. A successful approach would make performance characteristics part of the programming model.
The success of speculative execution in CPUs suggests these techniques are broadly useful. The challenge isn’t whether these optimizations are possible, but how to expose them through programming models that give users reliable guarantees based on a clear contract, without forcing them to understand implementation details.
Conclusion
There are many more interesting directions to explore in programming language design beyond what I’ve outlined here. The ideas in this post represent just a few of the possibilities that excite me. I look forward to seeing how the programming language landscape evolves as researchers and language designers continue to push the boundaries of what’s possible, making programming more powerful, accessible, and performant.