Stewardship, Not Ownership: What I’m Carrying from OpenStack and OpenInfra into the PyTorch Foundation
I’m taking on a new role at the Linux Foundation, leading the PyTorch Foundation at a moment when PyTorch is rapidly becoming one of the defining infrastructure layers of the AI era. I’ve written separately about the role itself and the mission of the PyTorch Foundation. Here, I want to focus on something more fundamental: what I’ve learned over more than a decade working on OpenStack and within the OpenInfra community, and how those lessons are shaping the way I approach this next chapter.
I’ve spent much of my career identifying the infrastructure layers beneath major technology shifts and helping build open source ecosystems around them. When those layers matter, they matter quickly, and the cost of getting them wrong compounds just as fast. This work is about responsibility, stewardship, and making sure success doesn’t outpace the structures needed to sustain it.
What OpenStack and OpenInfra taught me about success at scale
OpenStack didn’t become consequential because it moved fast in its early days. It became consequential because it endured. As adoption grew, it moved from experimentation into production environments where reliability, interoperability, and trust mattered as much as innovation.
Working within OpenInfra taught me that the hardest challenges don’t appear when projects are small. They show up when success changes the stakes—when more organizations depend on the software, when the ecosystem widens, and when instability stops being an inconvenience and starts becoming a risk multiplier. At that point, governance stops being an abstract principle and becomes operational discipline.
Across multiple technology cycles, I’ve seen the same pattern repeat: when innovation accelerates this quickly, the infrastructure layer beneath it becomes the real pressure point. If stewardship lags adoption, fragmentation fills the gap. If it keeps pace, communities can absorb growth without losing coherence. Those lessons were learned the hard way, and they’re the ones I’m carrying forward.
From a project to a portfolio: why foundations evolve
It’s also important to be precise about what the PyTorch Foundation represents today. Despite the singular name, the Foundation now stewards a growing portfolio of deeply interrelated projects, including vLLM, DeepSpeed, and Ray, alongside the core PyTorch framework. That expansion reflects where the AI ecosystem has gone.
I’ve seen this transition before. As OpenStack matured into production-critical infrastructure, the OpenStack Foundation evolved to steward additional projects like Kata Containers, StarlingX, and Zuul. Each addressed real needs emerging at scale. Each also made the system more complex. That complexity is not failure but rather a signal that a project has become a platform.
Foundations exist for exactly this moment: not to centralize control, but to provide neutral ground where collaboration can remain open as ecosystems grow larger, faster, and more interdependent.
The PyTorch project is at an inflection point
PyTorch has unmistakably reached that point, and three pressures make it visible.
Fragmentation doesn’t arrive loudly.
In OpenStack, fragmentation rarely showed up as forks or public conflict. It emerged quietly, through well-intentioned extensions and optimizations that didn’t reconnect upstream. At small scale, that drift is manageable. At PyTorch’s scale, it compounds fast. The challenge is keeping innovation connected to a shared core as the ecosystem expands.
Hardware diversity must not become hardware bias.
PyTorch sits at the center of an explosion in hardware innovation. New accelerators increasingly arrive with PyTorch-first integrations, which speaks to the project’s influence. But I’ve learned that diversity alone doesn’t guarantee neutrality. When performance paths, documentation, or attention tilt too far in one direction, choice narrows even without intent. Stewardship here is about preserving honest abstractions so PyTorch remains a place of real choice, not accidental lock-in.
Velocity and stability must scale together.
PyTorch’s research velocity is one of its defining strengths, and it’s reshaping how quickly ideas move from theory into practice. But as PyTorch embeds itself deeper into production systems, instability carries a higher cost. In OpenStack, I watched teams delay upgrades or maintain private patches to protect themselves, unintentionally increasing fragmentation over time. I’ve learned that stability doesn’t come from slowing down but from making change legible, predictable, and survivable at scale.
What I’m carrying forward into the PyTorch Foundation
Many of the signals PyTorch is sending today are ones I recognize immediately: rapid adoption; explosive innovation at the edges; growing dependence from organizations that now treat this software as mission-critical. These are the conditions that demand stewardship that keeps pace with success.
The PyTorch Foundation exists to do that work in the open, across a growing portfolio of projects, without privileging any single contributor, vendor, or use case. My role is to help ensure that governance evolves alongside the ecosystem it serves, not after the fact.
Stewardship as continuity, not control
I’m deeply grateful to the OpenInfra and OpenStack communities for shaping how I approach this responsibility. Those experiences made one thing clear: long-lived infrastructure depends less on ownership than on trust, and less on speed than on care exercised at the right moments.
As I step into this role, I do so with confidence in the community, clarity about the stakes, and a strong sense of what this phase of growth requires. The work ahead is about protecting the conditions that allow open ecosystems to keep building, together, at the scale the world now demands.