ET Mueller

Reflections on technology, artificial intelligence, and the quiet logic behind decision-making. Notes from both the human and the algorithmic sides of the equation.

1. The Grammar of Prediction

The problem with risk is language. Everyone speaks a different dialect, so no one truly understands each other. The ostensible solution was to adopt a recognized standard (ISO 31000, COSO, NIST, COBIT, FAIR, PMBOK). But in many organizations it simply didn't work.

The first problem is that the standards differ on fundamental questions, making it hard to know which to choose. COSO emphasizes governance, tying risk to strategy and performance with strong board oversight. ISO presents risk as decision-making at every level, embedding it into daily choices. NIST focuses on controls and compliance, with defined system-level responsibilities. Isn't the truth somewhere in between?

The second problem is that risk practitioners and business leaders talk past each other, for two main reasons. First, misaligned incentives: risk teams are rewarded for assuring the process, while business leaders are judged on making the right call. Second, as Hofmann and Scordis (2018) point out, theory and practice diverge when risk concepts clash with how organizations actually work, making it even harder to bridge specialists and decision-makers.

So what are the solutions? Here are 10 Rules for Applied Risk Management:

1. Fit for Purpose: Risk management is not one-size-fits-all. Borrow what works, discard what doesn't, and keep it simple.
2. Manage Decisions: Don't manage the risk, manage the decision. Anchor the conversation in options, trade-offs, and possible states of the world.
3. Think Like the CIA: The risk analyst's job, like an intelligence officer's, is to provide wisdom, clarity, and insight.
4. Situational Awareness: Put in place systems for perceiving what's happening, comprehending how the pieces fit, and projecting how the situation will evolve.
5. Front-Line Decisions: Risk management should be embedded where the choices are made. Otherwise, it is like learning Esperanto: clever in theory, useless in practice.
6. Speak Plainly: Use simple words. Risk managers should use the vocabulary of the business, not the other way around.
7. Align Incentives: The decision-maker must carry both risk and reward. Good incentives tie choices to consequences, pushing people toward optimal solutions.
8. Shared Values: When organizations share common beliefs and values, risk management becomes a thrust, not a drag, on good decision-making.
9. Use Heuristics: Experienced managers rely on heuristics. A satisficing choice that is "good enough" beats paralysis.
10. Don't Over-Rely on Data: Models and metrics are tools, not masters. They should inform human judgment, not replace it.

At the Tower of Babel, God punished the builders' vanity by fracturing their language so they could no longer work together. Risk management today feels much the same: divided by jargon, fragmented by frameworks, proud of our theories yet unable to deliver the clarity leaders need to make better decisions. In our case, the punishment is not divine; it is self-inflicted.

References

Michael Power, "The Risk Management of Nothing" (2009)
Annette Hofmann & Nicos Scordis, "Challenges in Applying Risk Management Concepts in Practice" (2018)

2. Quality Mgmt is Risk Mgmt

In some trading firms, people spend a disproportionate amount of time firefighting avoidable mistakes. Hedges are put on the wrong way. Recaps miss key terms. Demurrage claims are time-barred. Invoices are sent with an incorrect price. Each mistake consumes hours of precious time.

The solution to managing these operational risks is to focus on quality, provided the cost of management stays below the value of the errors prevented. But quality management is difficult, and it is tempting to give up after a few lukewarm attempts. After all, it takes a lot of time to fix process issues and implement controls. Yet when practiced consistently, the benefits of a quality program compound.

One of the biggest underlying issues preventing continuous improvement is a lack of incentives. If the costs of errors are not borne by those who create them, the errors persist. When incentives are misaligned, mistakes spread like weeds. The "broken windows" theory (contested in criminology, but still useful as metaphor) shows how small failures, left unchecked, normalize substandard behavior. Standards slip, errors multiply, and quality erodes.

The way out of this vicious circle is the virtuous cycle of Kaizen. Quality management follows a rhythm: plan improvements, put them into practice, check results, and adjust. Over time, errors fall, processes accelerate, and the opportunity cost of people's time is released for more strategic work. That reclaimed time is immensely valuable: it sharpens situational awareness, lifts morale, and creates a flywheel where fewer errors lead to fewer new errors. When employees are embedded in a high-quality process, they contribute more, stay longer, and care more deeply. People stop saying "that's not my problem." They can enter a state of flow at work, characterized by a seamless integration of action and awareness.

In a Michelin-starred restaurant, excellence is systemic. Mise en place ensures every ingredient, tool, and station is ready before service. Every detail counts: the timing of the kitchen, the precision of the sommelier, the lighting that sets the room. A single weak link spoils the whole experience. Likewise, in a trading company, quality must run through every stage of the deal lifecycle: contract drafting, risk checks, vessel nominations, LC issuance, and final settlement. When it does, counterparties notice. Just as diners return to a restaurant where every detail is right, clients come back to traders whose execution is consistently flawless.

Quality management is reinforcement learning applied to organizations. Machines learn through reinforcement not by avoiding mistakes, but by making them, measuring them, and adjusting course. In actor–critic algorithms, the actor takes decisions, the critic evaluates outcomes, and the policy improves over time. If computers can be trained to master complex environments by relentlessly cycling through error and adjustment, trading firms can do the same, and more.
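The actor–critic loop invoked above can be sketched in a few lines of Python. This is a toy illustration, not anyone's production system: the two-option "decision" and its payoff probabilities are invented for the example, the actor is a softmax over action preferences, and the critic is the simplest possible one, a running average of reward used as a baseline.

```python
import math
import random

def softmax(prefs):
    # Numerically stable softmax over action preferences.
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

def train(steps=5000, alpha_actor=0.1, alpha_critic=0.05, seed=0):
    rng = random.Random(seed)
    payoffs = [0.2, 0.8]   # hypothetical success rates of two decisions
    prefs = [0.0, 0.0]     # actor: preference for each decision
    baseline = 0.0         # critic: running estimate of expected reward
    for _ in range(steps):
        probs = softmax(prefs)
        action = 0 if rng.random() < probs[0] else 1
        reward = 1.0 if rng.random() < payoffs[action] else 0.0
        # Critic evaluates the outcome relative to expectation.
        advantage = reward - baseline
        baseline += alpha_critic * advantage
        # Actor shifts preference toward decisions that beat the baseline
        # (policy-gradient update for a softmax policy).
        for i in range(len(prefs)):
            grad = (1.0 if i == action else 0.0) - probs[i]
            prefs[i] += alpha_actor * advantage * grad
    return softmax(prefs), baseline

probs, value = train()
# After training, the policy concentrates on the better decision,
# and the critic's baseline approaches the expected reward.
```

The point of the sketch is the division of labor the essay describes: the actor only chooses, the critic only evaluates, and improvement comes from cycling through error and adjustment rather than avoiding mistakes up front.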

3. The Bias of the Machine

Data is not neutral — it remembers who collected it and why. A meditation on bias, ethics, and the quiet politics embedded in every training set.

4. The Paradox of Scale

The bigger the model, the less we understand it. This essay looks at scale as both a technological and moral problem — and how interpretability might become the next frontier.

5. Human in the Loop

Between automation and oversight lives the human loop — the soft space where intuition, trust, and imperfection hold systems together.

6. The Mirror of the Model

We build models to understand the world, but often end up studying ourselves. This final piece explores the mirror effect of data — how technology reflects our collective imagination.


Working Notes

Draft fragments, research notes, and ongoing thoughts connecting AI, risk, and philosophy.

As you expand this section, consider linking deeper essays (e.g. on MARL, interpretability, or ethics) from Substack, Ghost, or external publications.