ET Newsroom
Episode 7

ET Talk: Driving UT Forward — Building Enterprise Systems for Change

Episode 7 turns inward to the team that runs the systems UT depends on every day, and asks the harder question underneath: how do you stay durable enough to be trusted, and adaptable enough to keep up with a campus whose needs no longer fit the assumptions baked into the original code?

April 16, 2026 · Enterprise Technology · ET Talk Podcast
Graham Chapman leads the team responsible for Workday, Define, the mainframe student system, and the systems that hold the business of UT together. He and Cole Camplese examine the homegrown talent pipeline behind those systems, the discipline modern platforms ask for in exchange, and how AI is reshaping the calculus for engineers built around durable, long-lived infrastructure.
Format: Colleague conversation
Episode focus: Enterprise systems, organizational design, and the discipline of modernization
Guests: Cole Camplese and Graham Chapman

In Episode 7, Cole Camplese sits down in the Union Underground studio with Graham Chapman, who leads Enterprise Platforms at ET — the team behind Workday, Define, the mainframe student system, and the long stretch of integrations between them. The conversation moves from Graham's liberal-arts path into enterprise IT, through the homegrown talent pipeline that built UT's most distinctive systems, into the trade-off he calls "buying discipline, not capability," and lands on what AI is actually doing inside a team built around fifty-year systems.

Liberal arts in the engine room

Episode 7 starts the way a lot of the best ET Talk episodes do — with the path the guest took to get here. Graham Chapman did Plan II and philosophy at UT, with a minor in history, and came back later for a master's in human dimensions of organizations. He grew up as a faculty brat at Albuquerque Academy, where his father was an administrator, and spent his college summers leading outdoor trips. By his own account, he has spent most of his life around the periphery of the academy, learning what he calls the "language" of it, becoming comfortable with adults before he was comfortable with kids his own age.

That background shows up in every part of the conversation. Graham frames technology as the backdrop for groups of people doing things together — including things they do not yet understand they want to do, or are afraid to. Cole picks up the same thread: a lot of ET's work, he argues, is about lowering the temperature on that okay-or-cancel moment and giving the campus community the courage to try new things. The audience for these systems is rarely fearless. The job is partly to make it safer to be curious.

Bimodal by design

Graham's portfolio at Enterprise Platforms is, as Cole puts it, essentially everything that keeps the business of UT running — Workday for HR, Define for finance, a mainframe student system, and roughly fifteen years of additional vendor systems layered on top. The team has to operate in two registers at once. The post-2015 portfolio is mostly software-as-a-service and cloud. The decades-older systems are mainframe and homegrown. Both modes have their own cadence, their own failure modes, and their own definition of what "modern" even means.

What stitches the two halves together is people. Graham has about 140 staff, the majority of them generalists with a deep partnership orientation toward the business. Many are full-stack software developers and engineers who can handle anything technically — but whose first instinct is to ask what institutional need they are meeting, not what technology they want to use. The portfolio looks bimodal from the outside. The culture is unified.

The homegrown talent pipeline

For decades, UT chose not to compete head-to-head with the market for computer science graduates. Instead, the team built its own talent pipeline. Hires were people who had already established themselves in another career — second-career professionals or aspiring academics who pivoted, drawn to staying connected to a university. They took an aptitude test, then went through a six-month software developer training program followed by an eighteen‑month apprenticeship. Many were embedded directly with the business offices: the libraries, the registrar, the CFO, wherever the work was. Graham himself came into this line of work the same way.

Cole compares the model, half-jokingly, to "Google before Google" — but the better comparison may be Apple Academy. UT was, for a period, a place other institutions visited specifically to learn how this was being done. The pipeline produced engineers oriented toward process design and analytical work as much as toward programming. It also produced systems — Define being the most cited example — that were beautifully tailored to the way UT actually worked.

That tailoring was the strength. And, as Graham candidly acknowledges, it eventually became the constraint. Many of the assumptions baked into systems built to digitize 1980s and 1990s paper processes simply no longer match how UT needs to operate. The metaphors live on inside the code long after the practical workflows have moved on.

Buying discipline, not capability

The episode's most quotable framing comes here. When UT modernizes a system — say, replacing Define with a SaaS finance platform — the trade isn't more capability. The team is, in Graham's words, "buying discipline." A two-click SaaS workflow does not necessarily do more than a homegrown one. It does it in a more standard, less entangled way. Which means when the next compliance rule arrives, or a provost wants to launch a six-week certificate program, or the funding model for interdisciplinary programs changes, the system can move with the institution instead of resisting it.

Graham has a crisp name for the pattern that gets in the way: a "language-behavior mismatch." Organizations celebrate the idea of agility while steadily investing in exactly the kind of extreme tailoring that makes agility impossible. UT, with its highly federated campus, is especially prone to this — the long tail of bespoke local optimizations means that even routine changes require extraordinary coordination across stakeholders. Cole offers two concrete examples: a six-week certificate program collides with a mainframe student system that hardcodes the difference between "long" and "short" semesters. Interdisciplinary programs collide with information architecture that insists there can only ever be one decider of revenue.

  • Modernization buys adaptability — fewer customizations now in exchange for the ability to evolve faster later
  • Language-behavior mismatch: institutions that say "agility" while investing in extreme tailoring get neither
  • Federated optimization at every layer creates a long tail of complexity that resists every change

Conway's Law and the consolidation experiment

Two years ago, Graham's portfolio was focused on the academic technology pipeline. Bringing finance and operations systems under the same leadership — pulling work that used to be embedded in the CFO's and Provost's portfolios into a unified Enterprise Technology — was a deliberate experiment in a more coherent organization. Eighteen months in, Graham is starting to see the early evidence that it is working: more collective capability, more cross-pollination, more opportunities to build shared utilities at enterprise scale rather than re-implementing them inside each domain.

His shorthand for why it matters is Conway's Law — the principle that organizations tend to produce systems that mirror their own structure. If you want systems that interoperate across departments, you eventually need teams that collaborate across departments. The flip side, which Graham is careful to name, is that consolidation does not automatically produce integration. Relationships and strategic partnerships with the business areas that used to be self-managing now require conscious, deliberate attention. Some of what used to happen by walking around now needs structure. The next step, he hints, is introducing embedded business relationship management to keep the proximity that consolidation could otherwise erode.

AI as multiplier — and as mirror

Graham describes a team whose culture is steeped in building durable systems — things designed to last fifty years. That culture has enormous value. It also carries a bias toward stability over experimentation. AI is disrupting that in two ways at once.

The first way is the obvious one: capability augmentation. The team is using AI assistance for coding, and more broadly for work augmentation: moving faster, reaching decision points sooner, and processing the kind of multi-document analytical work, like comparing vendor contracts or reconciling cost structures across different pricing models, that used to consume hours. Cole describes his own week in Claude Code: the ability to function as a developer he never quite was, because the tools now meet his level of system understanding halfway.

The second disruption is subtler and, in Graham's view, possibly more important. Periods of rapid change make it easier to interrogate assumptions that would otherwise stay invisible — the fish that doesn't know it's in water. A specific example: the team's documentation has historically been optimized for experts. The developer maintaining the system. The business user who lives in it daily. UT has not had time to build the high-level explainers, useful metaphors, and accessible onramps for everyone else. AI makes it possible to repackage existing institutional knowledge for new audiences without rewriting it from scratch. That is not a narrow productivity gain. It is a structural shift — clawing back time from technical debt so the team can leave short-term delivery mode and actually learn, experiment, and adapt.

Stick shift and self-driving

The episode closes on a question neither Cole nor Graham fully resolves, and it's a fitting one for a conversation about durable systems: what happens to the underlying skills when the abstraction layer gets good enough to substitute for them? They trace the same arc — early personal computers, learning BASIC and Logo before HTML or JavaScript, developing an intuition for how systems work from the inside out. That foundational experience is what makes them effective at directing AI tools now. The open question is whether someone entering the workforce without those formative experiences will arrive at equivalent intuition through a different path, or whether something important is quietly lost.

Graham's framing is the same one he uses for the customization question: abstraction is a feature, not a flaw — but only when the person working at the abstract level understands what is happening underneath. A developer who can describe a desired outcome with precision, both the front-end behavior and the back-end logic, can use these tools at full power. One who cannot will eventually hit a failure mode they can't diagnose.

The analogy they land on is learning to drive on a stick shift before owning a car with full self-driving. The autonomous layer is trustworthy because of the foundational experience underneath, not in spite of it. It is an honest place to end — and a question Enterprise Technology, as the team responsible for both building infrastructure and enabling the people who depend on it, is working through in real time.

AI-assisted draft

This story was developed with AI support as part of the writing and editing workflow.