Building Real-Time Terminal Dashboards: From Excel to Operational Intelligence


The Problem: When Excel Becomes the Bottleneck

Terminal planning at GCT ran on a collection of Excel spreadsheets that had accumulated over years. Volume forecasts were in one sheet. Actual crane moves were in another. Dwell targets were in a third, manually updated by the planning team each morning. There was no connection between the three — no automated refresh, no single source of truth, no way to see the gap between plan and actual in real time.

When a vessel arrived late or a rail cut-off was missed, the planning team spent the first 30 minutes of the morning rebuilding the picture in Excel before they could even start managing the problem. By the time they had a current view, they were already behind.

The ask was straightforward: give the operations team a real-time view of what’s happening, what was planned, and where the gaps are — without changing the workflow they already understood.


The Approach

1. Separate the Data from the Decision Layer

The first decision was architectural: keep the Excel-based planning data as the source of truth for plan values, but connect it to the actual data flowing through Fabric. The existing spreadsheets weren’t broken — the team knew how to use them. The problem was the gap between the plan and reality.

I built a thin ingestion layer that read the planning Excel files on a schedule (every 15 minutes via a Fabric data pipeline) and wrote the plan values into the Fabric lakehouse alongside the actuals. This preserved the existing workflow for planning while enabling the real-time comparison layer.

The key principle: don’t force a workflow change to fit a technology. If the planners are comfortable updating a spreadsheet, let them — just make sure the data pipeline picks it up without manual intervention.
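
For illustration, here is a minimal sketch of what that ingestion step can look like in a Fabric notebook. The file path, sheet name, and column handling are assumptions for the sketch rather than the production setup; the real pipeline pulls the file from SharePoint on its 15-minute schedule.

```python
# Sketch of the plan-file ingestion step (illustrative names throughout).
# Assumes the planning workbook has a single header row and that `spark`
# is the SparkSession a Fabric notebook provides by default.
import pandas as pd
from datetime import datetime, timezone

def ingest_plan_volumes(xlsx_path: str, sheet: str = "ShiftPlan") -> None:
    plan = pd.read_excel(xlsx_path, sheet_name=sheet)

    # Stamp each load so the comparison layer can tell which plan snapshot
    # a given actual was measured against.
    plan["ingested_at_utc"] = datetime.now(timezone.utc).isoformat()

    # Overwrite rather than append: the spreadsheet is the source of truth
    # for plan values, so each refresh replaces the previous snapshot.
    spark.createDataFrame(plan).write.mode("overwrite") \
        .saveAsTable("ops_plan_volumes")
```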

2. Define the Core Metrics First

Before building any visual, I spent two days with the terminal planning team observing their current morning workflow. Then I asked them to walk me through a specific scenario: a vessel arrives 4 hours late. What's the first number you need?

The answer wasn’t “crane moves per hour.” It was “how far behind are we on this vessel relative to the schedule, and what’s the downstream impact on the rail cut-off?”

That led to three primary metric views:

  1. Vessel delay tracking — actual vs. scheduled berth arrival, estimated vs. actual departure
  2. Throughput vs. plan — crane moves actual vs. planned for the current window, with cumulative tracking
  3. Gate queue status — appointment slots filled vs. available, current gate wait times by carrier

Everything else was secondary. I built those three first and let the team validate the logic before adding complexity.
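
To make the logic behind the first two views concrete, here is a rough pandas sketch of the calculations; every column name is an assumption, since the actual schema isn't reproduced here.

```python
# Illustrative calculations behind the vessel-delay and throughput-vs-plan
# views. Column names are assumed, not taken from the real schema.
import pandas as pd

def vessel_delay_hours(vessels: pd.DataFrame) -> pd.Series:
    # View 1: actual vs. scheduled berth arrival, in hours (positive = late).
    return (vessels["actual_berth_arrival"]
            - vessels["scheduled_berth_arrival"]).dt.total_seconds() / 3600

def throughput_vs_plan(moves: pd.DataFrame, plan: pd.DataFrame) -> pd.DataFrame:
    # View 2: actual vs. planned crane moves per window, plus the cumulative
    # curves the trend chart draws from.
    actual = moves.groupby("window_start")["move_count"].sum()
    out = plan.set_index("window_start")[["planned_moves"]].join(actual)
    out["gap"] = out["move_count"] - out["planned_moves"]
    out["cumulative_actual"] = out["move_count"].cumsum()
    out["cumulative_plan"] = out["planned_moves"].cumsum()
    return out
```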

3. Build the Semantic Layer to Handle Complexity

The raw data came from multiple sources: the TOS (Terminal Operating System) for crane moves, the gate system for truck appointments, and the planning Excel files for forecast volumes. Each source had its own schema, its own naming conventions, and its own refresh cycle.

Rather than building report-level joins (which would have made the reporting layer fragile and hard to maintain), I built a clean semantic model in Fabric with three measure groups:

  • Ops actuals (from TOS data, refreshed every 5 minutes via Eventstream)
  • Ops plan (from Excel ingestion, refreshed every 15 minutes)
  • Gate status (from gate system API, refreshed every 2 minutes)

The semantic model handled the time alignment (plan data doesn’t have a real-time clock — it has a shift window), the unit conversions (TEU vs. container counts), and the facility filtering (Deltaport vs. Vanterm).
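
The shift alignment is the subtle part: every event timestamp has to be mapped onto the operational shift that owns it before it can be compared with a plan row. A minimal sketch, assuming two 12-hour shifts starting at 07:00 and 19:00 (the actual shift pattern is an assumption here):

```python
from datetime import datetime, timedelta

def to_shift_key(ts: datetime) -> tuple[str, str]:
    # Map a real-time event timestamp to (shift_date, shift_name).
    # Assumes a day shift starting at 07:00 and a night shift at 19:00;
    # events before 07:00 belong to the previous day's night shift.
    if ts.hour >= 19:
        return ts.strftime("%Y-%m-%d"), "night"
    if ts.hour >= 7:
        return ts.strftime("%Y-%m-%d"), "day"
    return (ts - timedelta(days=1)).strftime("%Y-%m-%d"), "night"

# Example: a crane move at 02:30 counts against the prior day's night shift.
assert to_shift_key(datetime(2024, 3, 12, 2, 30)) == ("2024-03-11", "night")
```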

4. Design for the Morning Standup

The primary use case was the 7am planning standup — 10 minutes, three screens, a team that needed to walk in and immediately understand the current state. That shaped the dashboard design:

  • Top row: The three numbers that matter right now — vessel delay (hours), throughput gap (moves vs. plan), gate queue depth (minutes)
  • Second row: Trend charts showing the last 4 hours of crane moves and the cumulative plan vs. actual curve
  • Bottom row: Exception list — vessels behind schedule, gates exceeding wait thresholds, appointments at risk of missing rail cut-off

No interactivity required for the primary view. Everything on one screen. If the team needed to drill deeper, they could — but the first look required zero clicking.


The Build

Data Sources

Source                             | Refresh Rate                   | Data
-----------------------------------|--------------------------------|---------------------------------------------------
TOS (Terminal Operating System)    | Every 5 min via Eventstream    | Crane moves, vessel status, container positions
Gate system API                    | Every 2 min                    | Truck appointments, gate wait times, no-show rates
Planning Excel (shared SharePoint) | Every 15 min via Data Pipeline | Forecast volumes, vessel schedule, shift targets

Lakehouse Tables

ops_lh/
  ops_crane_moves        — real-time crane move events (5-min refresh)
  ops_vessel_status      — current vessel positions and schedule
  ops_gate_events        — truck gate-in/gate-out events (2-min refresh)
  ops_dwell_times        — computed dwell per container
  ops_plan_volumes       — shift and vessel-level forecast from Excel

hub_warehouse/
  hub_throughput_scorecard  — actual vs. plan by shift, vessel, terminal
  hub_vessel_delay          — delay tracking vs. schedule
  hub_gate_health           — queue depth, wait times, no-show rates
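
As one example of how the computed tables are derived, here is a deliberately simplified PySpark sketch of ops_dwell_times, treating import dwell as discharge-to-gate-out. The event columns and move types are assumptions, not the actual feed schema.

```python
# Simplified ops_dwell_times computation (all column names assumed).
# Import dwell here = time from vessel discharge to truck gate-out.
from pyspark.sql import functions as F

moves = spark.table("ops_crane_moves")   # assumed to carry discharge events
gate = spark.table("ops_gate_events")    # assumed to carry gate_out events

discharged = (moves.filter(F.col("move_type") == "discharge")
                   .select("container_id", F.col("event_ts").alias("discharge_ts")))
gated_out = (gate.filter(F.col("event_type") == "gate_out")
                 .select("container_id", F.col("event_ts").alias("gate_out_ts")))

dwell = (discharged.join(gated_out, "container_id")
                   .withColumn("dwell_hours",
                               (F.col("gate_out_ts").cast("long")
                                - F.col("discharge_ts").cast("long")) / 3600))

dwell.write.mode("overwrite").saveAsTable("ops_dwell_times")
```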

Semantic Model Joins

The semantic model joined on:

  • terminal_id (Deltaport / Vanterm filter)
  • vessel_id (cross-domain: crane moves → vessel schedule → plan volumes)
  • shift_date (plan vs. actual alignment by operational shift, not calendar day)
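
Putting those keys together with the warehouse layer, here is a hedged sketch of how hub_throughput_scorecard can be assembled; planned_moves and the move-event grain are assumptions about the schema.

```python
# Sketch of the hub_throughput_scorecard build over the three join keys.
from pyspark.sql import functions as F

keys = ["terminal_id", "vessel_id", "shift_date"]

actuals = (spark.table("ops_crane_moves")
                .groupBy(*keys)
                .agg(F.count("*").alias("actual_moves")))

plan = spark.table("ops_plan_volumes").select(*keys, "planned_moves")

# Full outer join keeps both unplanned work and planned-but-unworked vessels.
scorecard = (actuals.join(plan, keys, "full_outer")
                    .withColumn("variance_pct",
                                F.round((F.col("actual_moves") - F.col("planned_moves"))
                                        / F.col("planned_moves") * 100, 1)))

scorecard.write.mode("overwrite").saveAsTable("hub_throughput_scorecard")
```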

Report Pages

  1. Operations Overview — the morning standup view (primary)
  2. Vessel Detail — per-vessel drill: schedule, moves completed, ETA, rail cut-off status
  3. Gate Health — carrier-level breakdown of appointment fill rates and wait times
  4. Plan vs. Actual Trend — 30-day rolling view of throughput vs. plan with variance %

The Outcome

  • Decision cycle: From 30 minutes of manual Excel rebuild to a current view in under 2 minutes at standup
  • Metric coverage: 3 primary KPIs, 12 supporting measures, across 2 terminals (Deltaport + Vanterm)
  • Stakeholders: 15 terminal planners and ops leads using the dashboard daily
  • Plan data refresh: From manual (updated once per shift) to 15-minute automated pipeline
  • Uptime: 99.5% — the pipeline has a retry mechanism that catches failures and alerts the team within 3 minutes
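
The retry-and-alert pattern itself is simple. A minimal illustration in Python, standing in for whatever mechanism the pipeline actually uses, with a hypothetical webhook as the alert channel:

```python
# Minimal retry-with-alert pattern (illustrative only; the webhook URL
# and retry policy are placeholders, not the production configuration).
import time
import requests

ALERT_WEBHOOK = "https://example.invalid/ops-alerts"  # hypothetical endpoint

def run_with_retry(step, retries: int = 3, backoff_s: int = 30):
    for attempt in range(1, retries + 1):
        try:
            return step()
        except Exception as exc:
            if attempt == retries:
                # Final failure: alert the team instead of failing silently.
                requests.post(ALERT_WEBHOOK,
                              json={"text": f"Pipeline step failed: {exc}"})
                raise
            time.sleep(backoff_s * attempt)  # linear backoff between attempts
```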

The planning team still uses Excel for data entry. The difference is that the workbook is now connected — it feeds the lakehouse, the semantic model, and the dashboard — rather than sitting in a silo that requires manual reconciliation every morning.


What I’d Do Differently

Start with the exception view. I built the overview dashboard first and then added the exception list at the bottom. The team told me later that the exceptions are what they look at first — the overview is context. I should have designed the exception list as the top-level view and made the overview the drill-down.

Define the data contract with the TOS team upfront. The Eventstream connection to the TOS required coordination with the IT team responsible for that system. I underestimated the time to get the data sharing agreement in place. The dashboard worked fine once the connection was established, but getting there took three weeks longer than expected.

Build a lightweight mobile view. The gate team wanted to see appointment status from their phones during the shift. I built the desktop view first and planned the mobile view for later. It still hasn’t been built. If I’d factored that in from the start, the gate team would have been using it four months earlier.


Key Takeaways

  1. Preserve the workflow, not the tool. The team was comfortable with Excel for planning. Rather than forcing them onto a new interface, I connected Excel to the Fabric pipeline. The adoption barrier was nearly zero because nobody had to change how they worked.

  2. Real-time is a UX problem as much as a data problem. The 5-minute Eventstream refresh isn’t technically complex — but making sure the dashboard loads in under 2 seconds and shows the three most important numbers without any interaction is hard. The 7am standup constraint should shape the design before the data model.

  3. Exception views outperform summary dashboards. When your audience is managing an active operational environment, they need to know what’s going wrong before they can think about what’s going right. Build the exception list first.

  4. Data sharing agreements take longer than the technical work. The three-week delay on the TOS connection was entirely coordination, not engineering. Budget that time upfront.


This case study reflects the approach I take to operational BI projects — connecting existing workflows to a governed analytics layer without forcing workflow changes. If you’re dealing with similar terminal operations or planning challenges, reach out via LinkedIn.