The Engineer Stays in Charge. The Spreadsheet Archaeology Goes Away

Last week, Bolo AI founder Diti Sood joined a fireside chat at the World Chemical Forum with VP and C-suite leaders from chemical companies around the world. One question kept coming back in different forms: when does AI actually work in a plant environment, and when is it just noise?
The short version: AI creates value in industry when it assembles evidence, applies guardrails, and helps domain logic scale. It does not replace physics, plant discipline, or engineering judgment. Here is what that means in practice.

You Do Not Need Perfect Data to Get Value from AI

In every conversation with industrial leaders, someone raises the data problem. SAP says one thing, the historian says another, the CMMS has five years of inconsistent naming conventions, and half the tribal knowledge lives in spreadsheets that one senior engineer maintains on their desktop. The instinct is to say: we need to fix all of this before AI can be useful.

That instinct is wrong. Not because the data problem is not real, but because treating data cleanup as a prerequisite for value means waiting years while the business keeps bleeding time.

The better question is whether the AI system can do four things: connect to fragmented sources where they live, translate messy schemas and inconsistent naming into usable context, expose uncertainty instead of bluffing past it, and keep deterministic engineering logic intact. That is the job of a context layer. It bridges raw industrial data and reasoning. It does not pretend the mess is gone.

You need enough signal, enough context, and strong enough guardrails to make fragmented data usable. But here is the caveat that matters in a plant: if the workflow is high consequence, the system has to know when the data is insufficient. In industrial settings, sounding confident is not the same as being trustworthy. Fluency does not create trust. Traceability does.

Search Answers Questions. Agents Move Work Forward.

Most of what has been deployed as “industrial AI” is still query AI: better retrieval, better summarization, a chatbot that can find things in documents faster. Useful, but limited. It is a smarter search engine, not a change in how work gets done.

The shift that matters is when AI stops returning answers and starts helping complete workflow steps with structure, memory, and constraints. That means identifying likely duplicate work orders, surfacing missing fields in a maintenance request, triaging which issues need planner attention, assembling a work package, or flagging fleet-wide risk conditions that would take an engineer eight hours of manual analysis to surface.
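One of those steps, flagging likely duplicate work orders, is simple enough to sketch. The field names and similarity threshold below are illustrative assumptions, not a real system's schema, and every flag still goes to a human for review:

```python
from difflib import SequenceMatcher

def likely_duplicates(orders: list[dict], threshold: float = 0.8) -> list[tuple[str, str]]:
    """Flag pairs of work orders on the same asset whose free-text
    descriptions are near-identical. Flags are candidates for a
    planner to review, not automatic merges."""
    flags = []
    for i, a in enumerate(orders):
        for b in orders[i + 1:]:
            if a["asset"] != b["asset"]:
                continue  # only compare orders against the same asset
            score = SequenceMatcher(None, a["text"].lower(), b["text"].lower()).ratio()
            if score >= threshold:
                flags.append((a["id"], b["id"]))
    return flags
```

Given two near-identical seal-replacement requests on the same pump and a third on a different asset, only the first pair is flagged. The real work in production is upstream of this loop: normalizing asset names and text well enough that a comparison like this is meaningful at all.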

In our work with a Fortune 500 utility, a single query surfaced four at-risk transformers with dangerous dissolved gas readings that had gone undetected for months. That is not search. That is fleet-scale signal detection made usable, across 186,000+ assets and a 23-table schema.

The industry is ready for this shift in narrow, bounded workflows. It is not ready for free-range autonomous agents wandering around a plant making unreviewed decisions. And it should not be. In industrial environments, “agent” should mean: handle the evidence assembly, the repetitive coordination, and the first-pass reasoning so the engineer can make a better, faster decision. The deterministic domain logic stays at the core. AI sits around it, assembling inputs, normalizing data, flagging gaps, and staying inside what the evidence supports.

The industry is ready for agents that reduce clerical and analytical drag. It is not ready, and should not be ready, for agents that bypass accountability.

In Plants, Fast Does Not Mean Reckless

Chemical manufacturers are deliberate by nature. Change management is slow, and for good reason. So the question becomes: how do you get to real value quickly without cutting corners on the things that matter?

You do not move fast by skipping safety, governance, or operating discipline. You move fast by shrinking scope and being ruthless about where value shows up first.

The wrong approach is enterprise-first transformation: integrate everything, redesign workflows, and hope value appears a year later. The better approach is to pick a painful workflow where people are already spending hours stitching data together manually, where better speed and consistency matter immediately, and where you can prove value in weeks.

In our Hitachi Energy deployment, the first working demo was in front of customers in under three weeks, running against a non-production APM dataset with 23 tables. It was production-ready in 12 weeks. That is not a toy problem. That is real operational data at real scale, with traceability and human review built in from day one.

Speed in industry is not about more risk tolerance. It is about less organizational drag between the problem and the first usable result.

The Future Is Engineers with Leverage

Five years from now, a strong reliability engineer should spend a lot less time hunting for information, cleaning exports, reconciling inconsistent records, and manually cross-checking basic patterns. The system will do more of the evidence assembly continuously: monitoring fleet conditions, highlighting anomalies, drafting work recommendations, and flagging where the data basis is weak.

The day-to-day becomes less reactive and less administrative. Fewer hours lost to stitching together SAP, historian trends, inspection notes, and spreadsheet trackers. More time spent on prioritization, root-cause thinking, intervention decisions, and cross-functional coordination.

What does not change is the need for engineering judgment, domain knowledge, and respect for plant reality. The engineer still owns the call. Physics still wins. Operations context still matters. Systems can surface anomalies and discrepancies, but they do not replace the contextual judgment that lives in an engineer’s head and operating experience.

The future is not fewer engineers. It is engineers with leverage. The reliability engineer is still the decision-maker. They are no longer the human middleware between ten broken systems.

AI is useful here. It compresses evidence assembly and makes fleet-scale analysis tractable in a way it was not before. But it is a force multiplier on top of domain logic. It is not the source of that logic.

That is where the conversation landed in that room at the World Chemical Forum. Not with hype about autonomous plants or magical data lakes. With a practical, grounded view: AI creates value by assembling evidence, applying guardrails, and helping domain logic scale. It does not replace physics, plant discipline, or engineering judgment. It is also the premise Bolo AI was built on: encode the domain context first, then let agents operate inside it.

The highest-leverage use cases today are fleet anomaly detection, maintenance workflow intelligence, and cross-system evidence assembly. Find hidden risk faster. Reduce planning friction. Help engineers get to a defensible answer without six tabs, two exports, and an Excel detour.

The spreadsheet archaeology goes away. The engineer stays in charge.

Diti Sood is the founder and CEO of Bolo AI, which builds the context layer that makes agentic AI work for heavy industry. Before founding Bolo, she spent years as a wireline field engineer at SLB.