CloudRig · Discovery Call Tool
§0 · Meeting Opener — read out loud
Here's how I'd like to use our time. I'll ask some questions to really understand your business — what's working, what's not, and what your biggest challenges look like day to day. The more open you are about the pain points, the more accurately I can show you where CloudRig will move the needle.
Once we've nailed those areas down, I'll show you one or two examples of how we've solved those same problems for similar contractors.
If it's clear there's a real fit and ROI potential, we'll wrap by booking a full walkthrough with your team so we can go deeper together.
Does that plan sound good to you?
Capture anything the prospect flags early — specific interest areas, concerns, or how they respond to the plan.
§1 · Company Context — initial questions
Prospect identity
What trade do they focus on?
Ask
"I know you focus on [their services] — where does most of your work come from these days?"
Ask
"Can you give me a sense of your current team size and annual revenue?"
Follow-up
"Out of your total employees, how many are office or admin employees?"
Auto: field employees = total − office.
Follow-up
"And how many estimators do you have?"
Auto: estimates/yr = projects/yr ÷ win rate (§10).
Ask
"Roughly how many projects did you complete this year?"
Auto: avg project size = annual revenue ÷ projects/year.
Ask
"Looking out over the next couple of years, what are your growth and profit goals for the business?"
Ask
"What software are you currently using for accounting, estimating, and tracking work in the field?"
Three answers — fill each dropdown.
SOP Probe — a typical day onsite
  • How does the foreman set and communicate daily goals?
  • What info do they report at end of day?
  • How do they do that reporting?
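The auto-derived fields above are simple arithmetic. A minimal sketch (function and variable names are illustrative, not from the tool itself):

```python
def derived_fields(total_employees, office_employees,
                   annual_revenue, projects_per_year, win_rate):
    """Auto-derived §1 fields, per the formulas noted inline."""
    field_employees = total_employees - office_employees   # total − office
    estimates_per_year = projects_per_year / win_rate      # projects/yr ÷ win rate (§10)
    avg_project_size = annual_revenue / projects_per_year  # revenue ÷ projects/yr
    return field_employees, estimates_per_year, avg_project_size

# e.g. a $20M contractor: 60 staff, 8 office, 40 projects/yr, 25% win rate
print(derived_fields(60, 8, 20_000_000, 40, 0.25))
# → (52, 160.0, 500000.0)
```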
§2 · Production Visibility — Daily Report · AI Journal
Pain
  • When do you know whether a crew is hitting their production target? Weekly?
  • How soon after a shift do you typically see what actually got done?
  • Are you reviewing actual vs. bid costs weekly, or just at closeout?
Impact
  • What happens if you don't see a problem until the weekly/monthly meeting?
  • What's the risk when overruns aren't caught until the end — rework, lost margin, missed change orders?
  • Can you think of a job where seeing the numbers sooner would've changed the outcome?
Value Anchor
  • If you could see progress daily instead of weekly, how would that change how you manage jobs?
  • What can those losses add up to on a typical job? 1 or 2% of estimated profit?
  • Roughly what's one day of lost production worth on a typical job? $10K?
Typical 1–2% of revenue. Drives the ROI panel.
Demo: Daily Report · Demo: AI Journal
§3 · Field Reporting — Job Review · AI Journal · Scoreboard
Pain
  • How often do your foremen actually fill out reports on time?
  • What tends to keep them from doing it — too busy, too complicated, not worth their time?
  • How does the field see their own targets and scoreboard?
Impact
  • Who ends up chasing missing reports or fixing them later?
  • How much time does that chew up in the office every week?
  • How much time does payroll or back-office spend reconciling field time each week before it's clean enough to run?
  • If guys in the field could see their own scoreboard, how would that change how they plan their days?
  • How do your foremen know when they need to push their crews?
Quantify what they'd get back. Drives the ROI panel.
Value Anchor
  • If you could get 100% of reports in accurately, how would that affect your admin or PM workload?
  • Would that save hours per day in the office?
  • How much more productive could your field be?
Labor efficiency opportunity
  • If the field had live scoreboards and faster course-correction from the office, how much more productive could the crews be?
  • Would a 5–10% labor / equipment productivity lift be realistic?
5–10% with better visibility. Labor / equipment spend assumption lives in §10.
Demo: Job Review · Demo: AI Journal
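The labor-efficiency opportunity above sizes as labor + equipment spend times the productivity lift. A minimal sketch, assuming the §10 spend-share default (inputs here are illustrative):

```python
def labor_efficiency_value(annual_revenue, labor_equip_share, productivity_lift):
    """§3 labor efficiency opportunity: (labor + equipment spend) × lift.
    labor_equip_share comes from §10 (typical heavy civil: 55–65%)."""
    return annual_revenue * labor_equip_share * productivity_lift

# $20M revenue, 60% labor/equipment spend, 5% lift (low end of 5–10%)
print(round(labor_efficiency_value(20_000_000, 0.60, 0.05)))
# → 600000
```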
§4 · Bad Data — AI Journal (accurate coding)
Pain
  • On a scale of 1 to 10, how confident are you that what's coded in the field matches what was actually done?
  • Do you ever find cost codes getting mixed up between similar activities?
Impact
  • When that happens, what does it cause downstream? Does accounting fix it, or does it throw off job-cost reports?
  • How much cleanup does the office have to do before it's ready for billing?
  • How does that affect future bids or estimating?
Drives the ROI panel.
Value Anchor
  • If every hour and cost was coded right the first time, how would that change your ability to bid tighter or analyze margin by crew?
  • How much faster could you spot issues and get through daily workload if the office didn't need to clean up field data?
Missed change orders
  • How often does scope creep or a change happen in the field that doesn't make it back to the office for billing?
  • With production-grade data on every job, what % of revenue could you recover in change orders you'd otherwise miss?
Typical 0.5–2% of revenue.
Demo: AI Journal
§5 · Real Historicals for Bids — Activity Views
Pain
  • How often do you review historical production rates before bidding new work?
  • Do you trust the field data enough to base future bids on it?
Impact
  • If your historical data's off, does that make your bids too conservative or too aggressive?
  • What happens if you underbid a line item by even 5%? Do you eat it, or can you catch it mid-job?
Value Anchor
  • If you could trust your field history to tighten bids even 2–3%, how much more margin could that unlock per year?
  • Would that let you price more competitively without risking the job?
Typical 2–3%. Drives the ROI panel.
Demo: Activity Views
§6 · Manual Estimating — Estimating · AI Historicals
Pain & Impact
  • How long does it take to build an average bid from start to finish right now?
  • Where does most of that time get lost — pulling together crews, finding costs, chasing historical data?
  • Does it ever feel like only your best estimator really knows how to price certain work? What happens when they're out or pulled onto another bid?
  • What happens when a bid takes too long? Do you miss deadlines or just bid fewer jobs?
  • If you could bid 20% more work with the same team, what would that mean for your pipeline?
  • How often do bids come back with missed spec items or quantity errors that turn into surprise costs mid-job?
Value Anchor · revenue first
"Let's say you were able to win just one extra job a month because you're getting more bids out the door — what would that add up to in annual revenue?"
"And how much faster do you think your team could turn bids around if your assemblies were already pre-built with your crew rates?"
Primary revenue anchor. Avg project size pulls from §1; margin conversion lives in §10.
Improved bid win rate
"If every estimator could bid off the same accurate assemblies and historicals — not just your best one — what would that do to your win rate on competitive work?"
  • Could you move win rate 3–7 percentage points from where it sits today?
  • Applied to total bid volume after the volume lift above.
(Total bids incl. extra volume) × uplift × avg job × margin.
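The win-rate formula above can be sketched directly (numbers below are illustrative, not defaults):

```python
def win_rate_uplift_value(total_bids, uplift_pts, avg_job, margin):
    """ROI contribution of a win-rate lift, per the note above:
    (total bids incl. extra volume) × uplift × avg job × margin."""
    return total_bids * uplift_pts * avg_job * margin

# 160 bids/yr, +5 pt win rate, $500K avg job (§1), 12% margin (§10)
print(round(win_rate_uplift_value(160, 0.05, 500_000, 0.12)))
# → 480000
```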
Fewer surprise costs — spec & quantity accuracy
  • How often do bids come back with missed specs or quantity errors that turn into surprise costs mid-job?
  • If you could cut that in half with better structure and historicals, what would that save you per year?
Typical 0.25–1% of revenue — separate from bid tightening in §5.
Demo: Estimating · Demo: AI Historicals
§7 · Documentation & Risk Reduction — AI Journal · Job Review · audit trail
Pain
  • When a claim or change-order dispute comes up, how easy is it to pull together the documentation you'd need to defend your position?
  • If you had to produce a clean audit trail — daily production, labor, equipment, materials — how long would that take today?
  • How much of your documentation lives in one person's head or on paper notes that never make it back to the office?
Impact
  • Have you ever lost a claim — or settled early — because the documentation wasn't there?
  • How much does an audit, or a compliance review on a public job, cost you in time and legal fees today?
  • When your best superintendent or estimator is out, what stalls? How much of the risk knowledge walks out the door with them?
Value Anchor
  • If you had a complete, timestamped audit trail on every job — photos, production, equipment, crew — how would that change your exposure on claims?
  • Would better documentation open you up to higher-margin work with DOTs, municipalities, or big commercial GCs that demand it?
  • How much would it be worth to stop relying on one estimator's tribal knowledge — to have the pricing logic actually documented in the system?
Demo: AI Journal · Demo: Job Review
§8 · Wrap Up — initiatives · timeline · decision-maker
Ask
"Are there any other initiatives right now that are competing for your team's budget or time?"
Ask
"In a perfect world, when would you want to have something like this fully rolled out to the field?"
Ask
"And who would typically make the call on bringing in a tool like this at your company?"
§9 · Book Demo — attendees · objection handling
Say
"Next step is a one-hour walkthrough of the tool. Here's who we typically need to attend —"
  • Owner / main decision-maker
  • COO or VP of Ops
  • One rep from the project management org
  • One rep from the field (usually a superintendent)
  • Controller (for payroll questions)
If they object — "Let me check the team's availability"
"Totally — that makes sense. Let's put a hold on the calendar for next week. If it turns out that time doesn't work or the team's not ready, we'll move it. That way we don't lose momentum. Typically a good time for this is right after your weekly Ops meeting — when does that happen for you?"
If they still object
"I hear you. To help you make the case, I can cut a couple 2-minute clips from the demo today and send them to the team to communicate what we do. What would [owner] be most interested in?"
"Great — I'll send those, give you a few days to talk, and call you at [time] to confirm the demo slot. Does that sound good?"
§10 · Additional Inputs — assumptions, not asked on the call

Background assumptions the ROI model uses. Adjust if you have better data for this prospect — otherwise defaults work for most heavy civil contractors.

Office labor rate
Burdened hourly rate for office / admin staff — used to value time saved in the Field Reporting and Bad Data buckets.
Typical heavy civil office/admin rate with burden: $40–$55.
Bid volume & win rate
Typical bid win rate — drives how many total bids we assume the team produces to hit their project volume. Estimates / year (shown in §1) auto-derives from projects/year ÷ win rate.
Typical heavy civil: 20–30%. Adjust if you know it.
Margin assumption
Contribution margin used to convert revenue uplift in Manual Estimating into a margin-equivalent ROI contribution (avoids double-counting revenue vs margin in the total).
Typical heavy civil contribution margin: 10–15%.
Labor & equipment spend
Share of revenue that goes to labor + equipment. Used with the Section 3 efficiency gain input to size the labor efficiency opportunity.
Typical heavy civil: 55–65%.
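How the revenue-percentage buckets from §2–§6 might roll up into the ROI panel can be sketched as below. This is a minimal, illustrative model only: bucket names are invented, and the defaults are just the midpoints of the typical ranges quoted in each section.

```python
def roi_panel(annual_revenue, a):
    """Illustrative roll-up of the §2–§6 ROI buckets (all scaled off revenue)."""
    return {
        "production_visibility": annual_revenue * a["visibility_loss"],  # §2: 1–2%
        "labor_efficiency": annual_revenue * a["labor_equip_share"]
                            * a["productivity_lift"],                    # §3: 5–10% lift
        "recovered_change_orders": annual_revenue * a["missed_co"],      # §4: 0.5–2%
        "bid_tightening": annual_revenue * a["bid_tightening"],          # §5: 2–3%
        "spec_accuracy": annual_revenue * a["spec_error_savings"],       # §6: 0.25–1%
    }

defaults = {
    "visibility_loss": 0.015, "labor_equip_share": 0.60,
    "productivity_lift": 0.075, "missed_co": 0.0125,
    "bid_tightening": 0.025, "spec_error_savings": 0.00625,
}

panel = roi_panel(20_000_000, defaults)   # $20M contractor
print(round(sum(panel.values())))
# → 2075000
```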