👋 Hi there!

I'm Christoph Pröschel, a Berlin-based full-stack engineer.

🌍 Building AI-powered material visibility at visia.ai.
👨‍💻 Building Grog, the monorepo build tool for the grug-brained developer.

Highlights:
🚀 Co-Founder of weview.tv (Techstars ‘18, exited ‘19)
🧑‍🎓 MSc @ TU Berlin. Thesis: Multi-Agent Reinforcement Learning for Dynamic Climate Policy Games link
Illustration of Sisyphus rolling not a boulder but the Python logo up the hill.

Building the Fastest Python CI

Learn how to build a blazingly fast Python CI pipeline using uv, pex, and Grog. This post shows how to combine uv's dependency caching with pex's executable bundling to achieve sub-second build times in Python monorepos. We'll explore techniques for dependency resolution, cross-platform builds, and efficient Docker packaging, all while keeping the setup lightweight and maintainable.

December 14, 2025 · 11 min · Christoph Pröschel
Illustration of a mountain with four stages of a Python monorepo journey, from 'Valley of poly-repo despair' to 'Build tool nirvana'.

Python monorepo with uv and pex

Read the latest version of this blog post: Building the fastest Python CI With the current hype around AI, it has become quite hard to avoid writing Python and shipping it at scale. Unfortunately, the Python packaging and environment system is so notoriously convoluted that there is even an infamous xkcd comic about it. Enter uv, the rising star of the Python community, which has succeeded in solving these problems while also being significantly more performant. But while uv is great at managing your Python environments, it does not yet have a clear answer for how to bundle and ship them. ...

March 6, 2025 · 5 min · Christoph Pröschel

Master Thesis: Multi-Agent Reinforcement Learning for Dynamic Climate Policy Games

View the full thesis here. Abstract Despite concerted efforts by researchers and policymakers, governments are failing to achieve the global coordination needed to implement policies that could avert the disaster of unmitigated climate change. Existing economic models are often ill-equipped to capture the complexities of dynamic, strategic interactions among multiple agents. The research on international mechanisms such as climate clubs, for instance, is often limited to one-shot games due to the combinatorial explosion of sequential negotiation steps. Addressing this gap, this thesis leverages state-of-the-art multi-agent reinforcement learning (MARL) algorithms evaluated on the Rice-N integrated assessment model (IAM). The first approach evaluates the efficacy of the meta-learning opponent-shaping algorithm, 'Shaper', in exploiting the learning dynamics of other agents to outperform them in a competitive climate policy setting. Even though Shaper performs well on the new economic games introduced here and cooperates in self-play, it fails to achieve the same results on Rice-N. Secondly, the meta-learning 'Good Shepherd' algorithm trains a policy that tunes the mitigation efforts and tariffs of a climate club that other agents can join or leave unilaterally. This approach produces club structures based on several climate and economic objectives that align with the literature while yielding a novel perspective on dynamic club participation. While these results overall suggest a strong use case for the application of MARL to climate policy, more research into both algorithms and economic models is needed, as well as an interdisciplinary alignment on terminology and goals.

November 13, 2023 · Christoph Pröschel

Lab Grown Meat Is Not a Climate Solution

Few topics spark as much controversy in the climate debate as the assertion that we need to cut our meat consumption if we want to reach our climate targets. On some primal level, going for the burger on your plate is a much more outrageous act than going for your car. So it's no surprise that technical-minded people have been looking for a panacea in the form of lab-grown meat. And the pitch is alluring: if the plan works out, the burger is safe and we get to reduce our emissions. We get to have our cake and eat it, too. ...

September 17, 2023 · 10 min · Christoph Pröschel

Viper Paper Implementation

In this article, I walk through an implementation of the paper "Verifiable Reinforcement Learning via Policy Extraction" by Osbert Bastani et al.

April 1, 2023 · 1 min · Christoph Pröschel