An AI-native field-operations platform for licensed landscape, turf, and water-management businesses.
VerdaKai is the connective tissue between an office that designs spray programs and a truck cab that runs them. It replaces the eight parallel systems a regulated lawn-and-ornamental operation typically duct-tapes together — paper route sheets, a compliance binder under the seat, a property file system, an HOA map texted from the office, a price list pinned to a corkboard, an inventory tracker on a clipboard, a scheduling spreadsheet, and a tech's improvised plant-ID search history — with one application that runs end-to-end from the road to the boardroom.
The product was designed against a real regulated business: a Florida licensed L&O operator running both a maintenance crew and a spray crew, treating residential lawns and HOA portfolios under FDACS applicator licensing. Every screen reflects a specific field-ops workflow rather than a generic CRM abstraction. The closer you look, the more workflow-specific it gets.
Three commitments set it apart from the off-the-shelf alternatives:
It is AI-native, not AI-bolted-on. A tech can photograph a plant, a pest, or a discolored patch of turf and get a species-level answer in under five seconds — phrased as an identification or as a diagnosis with treatment options drawn from products the company already stocks. Admin can teach the AI anything it should know about the business; the lesson is applied to every future answer, companywide. Every AI surface names which model produced its output and an honest accuracy band beside the value, so the operator knows when to trust and when to verify.
Compliance is a first-class citizen. FDACS applicator licenses, Certificates of Insurance, W-9s, workers-comp policies, business licenses, and continuing-education unit progress live in the same place as the crew roster. One click prints an HOA-ready compliance PDF with an AI-generated posture summary at the top.
It integrates with Program Designer Suite (PDS) instead of replacing it. PDS is the desktop administration surface — a full spray-program designer, employee manager, equipment tracker, and weekly scheduler. VerdaKai is the mobile field surface. They share state through Supabase, so a program designed at the desk is on the tech's phone by the time they start the truck.
The Site Survey is the single most opinionated module in VerdaKai and the most technically dense. It exists because the most expensive and least reliable line in any L&O quote is the operator's estimate of square footage — drive-bys, paper notes, and tape measurements on a 50,000-sqft HOA produce numbers that vary by 15 to 30 percent between estimators. VerdaKai replaces that with imagery + AI segmentation + operator review, surfaces every measurement's provenance, and refuses to silently re-author values the operator has already corrected.
Imagery sources, by preference. Esri World Imagery (ArcGIS Online) is the operator-preferred default — sharper than Google's downsampled hybrid in residential SW Florida, native zoom up to z22 in major metros, no Google Maps key dependency on the rendering side. Google Maps Satellite is the fallback when Esri's regional capture is older than Google's for a specific lot. Nearmap's sub-meter aerial is the third option when the org has a Nearmap license configured; coverage and recency are surfaced inline in the basemap dropdown so the operator sees what they're working with before they trust the tile. The basemap dropdown itself sits inside the map and survives full-screen mode so it never disappears at the moment the operator wants to switch.
Parcel boundaries from authoritative public data. Sarasota, Manatee, and Charlotte county GIS endpoints serve real parcel polygons with lot square footage, owner of record, parcel ID, year built, living square footage, just-value, and zoning. When the public endpoint is down or the parcel is in a county VerdaKai hasn't onboarded, a hull fallback boxes the lot from the geocoded point so measurement still proceeds. PAO health is reported back to the client so the operator knows whether they're looking at a canonical parcel or a fallback envelope.
Zone segmentation. Claude Sonnet 4.6 multimodal vision drafts turf, ornamental bed, hardscape, and exclusion polygons from the aerial overview at z20 to z21 native zoom, plus crop fields, native buffers, and "tree canopy" zones on properties where those are material to scope. Operators can edit any zone vertex by dragging, re-draw zones from scratch with a polygon-or-rectangle picker, and toggle "Use manual zones" to force the totals to come from operator polygons instead of the AI draft. Manual zones use geodesic union math per zone type so overlapping polygons don't double-count.
Tree detection. DeepForest, a PyTorch neural network trained on NEON aerial RGB imagery, runs alongside the vision pass to detect and classify every palm, oak, shade tree, and ornamental tree on the lot. Each detection returns a pin position, a species class (treatable palms only — Bismarckia, queen, foxtail, pygmy date, Canary Island date, areca clusters, coconut, Christmas, solitaire, triangle, Washingtonia), and a confidence score. Per Genesis L&O contract scope, Sabal palmetto (cabbage palm — Florida-native protected species) and Roystonea regia (Cuban royal — separate arborist contract) are excluded from the treatable count automatically.
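The contract-scope exclusion reduces to a set-membership filter over detections. A minimal sketch — the Detection shape, species keys, and confidence floor are illustrative, not the production schema:

```typescript
// Illustrative sketch: filtering DeepForest detections down to the
// treatable palm count described above.
interface Detection {
  species: string;     // classifier label for the detected tree
  confidence: number;  // 0..1 from the detector
}

// Excluded per contract scope: cabbage palm (protected native) and
// Cuban royal (separate arborist contract).
const EXCLUDED_SPECIES = new Set(["sabal_palmetto", "roystonea_regia"]);

function treatableCount(detections: Detection[], minConfidence = 0.5): number {
  return detections.filter(
    (d) => d.confidence >= minConfidence && !EXCLUDED_SPECIES.has(d.species)
  ).length;
}
```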
Pin refinement with SAM-2. Meta's Segment Anything 2 is wired behind a "Refine pins" drawer. The operator clicks a tree, SAM-2 returns a high-quality crown mask, and the system uses that mask to verify the pin position, trim the crown against rooflines and shadow, and reject pins that fall outside the parcel boundary. Refinement is fast enough to run interactively while the operator reviews the AI draft.
Per-zone source provenance. Every measurement carries an explicit source label — override (operator typed), measured (operator drew manual zones), ai (raw AI scan), or missing. Downstream surfaces — Quote Lab handoff, customer PDF, property page, freshness banner — render the canonical value alongside its provenance so admin can tell at a glance which numbers are operator-corrected and which are raw AI.
Effective-measurements lock-in. Operator-corrected measurements persist server-side in ai_measurements.effective_measurements. A re-loaded survey shows the corrected number, not the original AI scan. Re-running the AI doesn't silently overwrite the operator's work; it surfaces the new AI value next to the corrected one and asks before applying.
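The precedence behind these source labels can be sketched as a pure function — an operator override wins over a manual measurement, which wins over the raw AI value. Field names are illustrative, not the production schema:

```typescript
// Source-precedence rule for an effective measurement.
type Source = "override" | "measured" | "ai" | "missing";

interface MeasurementInputs {
  override?: number; // operator typed
  measured?: number; // operator drew manual zones
  ai?: number;       // raw AI scan
}

function effectiveMeasurement(m: MeasurementInputs): { value: number | null; source: Source } {
  if (m.override != null) return { value: m.override, source: "override" };
  if (m.measured != null) return { value: m.measured, source: "measured" };
  if (m.ai != null) return { value: m.ai, source: "ai" };
  return { value: null, source: "missing" };
}
```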
Multi-source exclusion pipeline (Detect AI Exclusions). When the operator scans a community / HOA, "Detect AI Exclusions" runs a five-source pipeline to produce the polygons that get deducted from billable turf. Each source contributes the polygon class it is actually authoritative for; nothing is asked to draw geometry it can't draw well.
(1) Vision LLM (Claude), identification only — counts of retention ponds, club facilities, and common-area square footage; the polygon output isn't trusted because vision LLMs can't draw pixel-precise outlines.
(2) Browser CV water detection — a canvas-API pixel classifier (low brightness with blue-channel dominance) flood-fills connected water components and traces contours via Moore-neighbour walk plus Douglas-Peucker simplification. Runs entirely client-side because Cloudflare's edge runtime can't decode PNGs (no pngjs stream / zlib).
(3) County GIS buildings — authoritative ArcGIS FeatureServer endpoints for Sarasota and Manatee; OSM building polygons are used only as a fallback for parcels outside the operator's territory.
(4) OSM road buffers — Overpass API highway centerlines, buffered by tag-derived width (motorway 12 m down to footway 1.5 m).
(5) CV driveway connectors — hardscape pixels that sit between a building footprint and a road buffer (within 6 m of both, inside neither). Driveways aren't in any free authoritative dataset; the topology constraint — a driveway must touch both a building and a road — is what makes CV reliable here, where bare hardscape detection would flood false positives.
Every source's output is clipped to the parcel hull via Sutherland-Hodgman. Gross turf is the shoelace area of the parcel rings — deterministic, replacing an earlier non-deterministic Claude estimate that varied by 0–180k sqft between scans of the same property.
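The gross-turf step is the classic shoelace (surveyor's) formula. A minimal sketch, assuming planar coordinates — the production pipeline would project lat/lng parcel rings first:

```typescript
// Shoelace formula for the area of a simple polygon ring.
// Deterministic: the same ring always yields the same area,
// regardless of winding order.
type Point = [number, number];

function shoelaceArea(ring: Point[]): number {
  let twice = 0;
  for (let i = 0; i < ring.length; i++) {
    const [x1, y1] = ring[i];
    const [x2, y2] = ring[(i + 1) % ring.length]; // wrap to close the ring
    twice += x1 * y2 - x2 * y1;
  }
  return Math.abs(twice) / 2; // abs: winding order doesn't matter
}
```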
Verbose measurement methodology, in app and in PDF. The collapsible "Measurement methodology" panel under the exclusion-zone list shows every step of the most recent Detect AI Exclusions run — timestamp, action, and a parameter dict listing the tool, upstream source, classifier rules, and result counts. The same trail ships in the customer-facing PDF as a "Measurement Methodology — Detail" section after the existing high-level Methodology table, so a customer asking "where did 92,300 sqft of turf come from?" sees the answer one click / page away. Persisted under community_results.scan_provenance, hydrated on saved-community load, carried forward through the explicit Save flow.
Tree inventory with editable measurements. The inventory panel shows every detected tree with ID, type, species, DBH, height, volume, canopy, source, and lat/lng. DBH, height, and volume cells are inline number inputs — the operator records field-tape readings directly into each row, the source label flips to "Field" for operator-recorded values, and edits ride the existing autosave straight to the survey row. A dedicated Palm Inventory subsection above the general table filters palms only and shows a derived injection tier (<15 ft / 15-30 ft / 30-45 ft / 45+ ft) the moment a height is typed in — matching the height-tiered injection pricing the operator is going to send to the customer.
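The injection-tier derivation is a simple banding function over the typed height. A sketch of the <15 / 15-30 / 30-45 / 45+ ft bands; the label formatting is illustrative:

```typescript
// Map a field-taped palm height to its height-tiered injection band.
function injectionTier(heightFt: number): string {
  if (heightFt < 15) return "<15 ft";
  if (heightFt < 30) return "15-30 ft";
  if (heightFt < 45) return "30-45 ft";
  return "45+ ft";
}
```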
Recenter and full-screen. Every map renders a "Recenter" pill bottom-left that snaps back to the property's lat/lng at the original prop zoom — useful after the operator pans across the neighborhood while measuring. The map supports browser full-screen via the Fullscreen API (not in-page resizing) so the satellite tile fills the entire monitor while drawing zones; the basemap dropdown, zone tools, and recenter button render inside the full-screen container.
Save to Properties. When a scan is finished, one tap saves it to the fieldops_properties table — creates a new property row when none exists at the address, updates the linked one when a property already exists, and writes the property id back to the survey so the property page's Apply Survey Measurements widget finds it next time. Closes the gap where scans never landed in the property list.
Recent surveys gallery. A horizontally scrolling gallery on the home view shows the most recent 50 surveys with thumbnail, address, created-at, confidence, and effective turf sqft. A featured "last property survey" tile pulls the most recent completed scan into a larger card with the same source-aware effective measurements the gallery uses. Recent typed addresses surface as one-tap pills under the search bar so operators don't retype properties they've recently scanned.
Imagery currency advisory. A pill near the Aerial dropdown notes that Google tiles are typically six months to three years old and links directly to Google Earth's historical imagery slider for verification. Nearmap and Esri sources show their own currency context when capture metadata is available.
Address search. Google Places autocomplete on the same API key the satellite map uses; the input progressively enhances and falls back to plain text when the script is unavailable.
HOA / community batch scanning. For multi-lot communities, the system batch-scans a sample of fifteen to twenty lots and extrapolates to the development total with sampling confidence reported alongside. A 332-lot community typically resolves in three to four minutes. Per-parcel results land in a sortable table, can be re-scanned individually, and roll up into the community-level totals. A per-street rollup table groups the parcels for crew routing and per-street pricing in Quote Lab.
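The sample-to-community extrapolation can be sketched with a standard finite-population estimate. The product's exact confidence math isn't specified here, so the normal-approximation 95% band below is an assumption:

```typescript
// Estimate a community total from a sampled subset of lots, with a
// standard-error confidence band (finite-population corrected).
function extrapolate(sampleSqft: number[], totalLots: number) {
  const n = sampleSqft.length;
  const mean = sampleSqft.reduce((a, b) => a + b, 0) / n;
  const variance =
    sampleSqft.reduce((a, b) => a + (b - mean) ** 2, 0) / (n - 1);
  // Finite-population correction: the sample is a meaningful fraction
  // of a bounded community, so the raw SE overstates uncertainty.
  const fpc = Math.sqrt((totalLots - n) / (totalLots - 1));
  const seOfTotal = totalLots * Math.sqrt(variance / n) * fpc;
  return {
    estimatedTotal: mean * totalLots,
    plusMinus95: 1.96 * seOfTotal, // normal-approximation 95% band
  };
}
```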
Per-property breakdown PDF. When a community survey hands off to Quote Lab, the customer-facing PDF includes a per-property breakdown table — every selected lot in the HOA with its individual turf, ornamental, palm, oak, and tree counts, phase tags inline, sorted by phase → street → numeric address prefix so the table reads in route order. Unscanned lots stay in the table with "—" so the customer sees the full membership, not just the scanned subset.
A tech photographs an unknown plant, an active pest, or a discolored patch of turf. The image goes to Claude Sonnet 4.6 multimodal vision with one of two prompt configurations:
- Identification — three distinguishing visual cues, one or two species it is not and why, and one sentence of care guidance grounded in the operator's real product catalog when one is loaded.
- Diagnosis — symptom description, primary diagnosis tied to the SW Florida monthly pressure profile, treatment recommendation drawn from the operator's stocked products, and a "what to check next visit" follow-up.
Treatment recommendations are constrained to products the company actually stocks; the AI cannot suggest chemistry the operator doesn't have. Field-verified corrections (operator says "this isn't that, it's X") prepend to every prompt as pa_ai_corrections so the model can't repeat a fix the operator has already taught it. Admin-authored Knowledge Notes prepend likewise as company-specific context, making the model accountable to the business's playbook rather than generic best-practice copy.
Every response files itself into a topical knowledge base — Plant, Pest, Disease, or Diagnosis — that the next field-id session searches before the AI runs. A photo gallery per property shows every identification tagged to that lot.
Plant Brain unifies the operator's species catalog, pest library, disease references, and treatment recipes into one searchable federation layer. Every diagnosis route — Bubby, Field ID, AI Garden Chat, the analyze pipeline — federates through it: a search for "chinch" returns the relevant turf pest plus its preferred treatment classes; a search for "live oak" returns the species record alongside known pests and diseases for that host.
The catalog is keyed in lowercase underscore form (live_oak, southern_chinch_bug) so AI prompts and operator-typed text resolve to the same canonical entity. Snake-case alias resolution runs as a fast first pass, then federated search across species + pests + diseases adapters, then fuzzy fallback. Each entry can carry: common names, botanical name, hosts, symptoms summary, treatment class hints, mode-of-action hints, application method, seasonal window, and whether the treatment is a Restricted Use Pesticide (RUP) requiring applicator certification.
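The three-pass lookup can be sketched as a cascade. The adapters and fuzzy metric below are stand-ins (a substring match substitutes for a real edit-distance scorer), and the Entry shape is illustrative:

```typescript
// Three-pass Plant Brain lookup: exact canonical key, then common-name
// match, then fuzzy fallback.
type Entry = { key: string; commonNames: string[] };

function normalize(q: string): string {
  return q.trim().toLowerCase().replace(/[\s-]+/g, "_");
}

function resolve(query: string, catalog: Entry[]): Entry | undefined {
  const key = normalize(query);
  // Pass 1: exact canonical-key match (fast path).
  const exact = catalog.find((e) => e.key === key);
  if (exact) return exact;
  // Pass 2: common-name match. In the real federation layer the
  // species / pests / diseases adapters would run in parallel.
  const byName = catalog.find((e) =>
    e.commonNames.some((n) => normalize(n) === key)
  );
  if (byName) return byName;
  // Pass 3: fuzzy fallback -- substring match stands in for a real
  // fuzzy scorer.
  return catalog.find((e) => e.key.includes(key) || key.includes(e.key));
}
```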
The Plant Brain entry lives in the desktop sidebar and the mobile bottom tabs (Tools section, sparkles icon) and is open to every field role. The seed dataset is sourced from UF/IFAS and is extensible by admins from the Plant Brain admin surface. Each detail page shows hosts, treatment classes, related photos, and links to relevant recent scouting reports.
Plant Brain endpoints (/api/field-ops/plant-brain/*) accept either the in-app session (same-origin) or a FIELDOPSAPI_KEY bearer token, so KAIR's other tools (Program Architect, Logos, downstream agents) can query the same knowledge base without inheriting VerdaKai's session auth.
Bubby is the floating AI assistant pinned to the bottom-right of every authenticated screen. The button itself is understated — an emerald "Ask Bubby" pill — but the chat panel that opens when tapped is the densest AI surface in the product.
Five context streams fire in parallel on every chat turn, so Bubby's answers are grounded in what the operator is actually looking at:
- Plant Brain federation — resolves snake_case aliases, runs species + pests + diseases adapters in parallel, falls back to fuzzy species search.
- Property context — on a property page, the property uuid is forwarded to the API and the route fetches scope (turf type, sqft, ornamental sqft, palm count), recent scouting reports, last application logs, and the latest soil test (pH, OM, CEC, Na with high-Na flag). The system prompt then references this lot instead of generic SW Florida advice.
- Page context — a registry of route patterns. Bubby gets a tailored snapshot for /field-ops/properties (recent properties), /field-ops/quotes (recent quotes), /field-ops/site-survey/<uuid> (that survey's measurements + tree count + AI notes), /field-ops/scouting (recent scouting reports), /field-ops/route (today's stops), /field-ops/soil-test (recent tests), /field-ops/programs (treatment programs + product mix), /field-ops/products (catalog), /field-ops/plant-brain/pests/<id> (pest detail), /field-ops/plant-brain/diseases/<id> (disease detail), /field-ops/plant-brain/treatments/<id> (treatment detail), /field-ops/species/<id> (species detail), /field-ops/help (help index nudge), and /field-ops/field-id (workflow nudge).
- Field-verified corrections — pa_ai_corrections, prepended as a "never repeat these mistakes" block.
- Knowledge Notes — scoped to the bubby-floating tool, prepended as institutional memory.
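The page-context registry can be sketched as ordered pattern matching over route templates. The pattern syntax (`<uuid>`-style placeholders) matches the routes listed above; the handler shape is illustrative:

```typescript
// Route-pattern registry: compile "<param>" templates to regexes and
// dispatch the first match to its context fetcher.
type ContextFetcher = (params: Record<string, string>) => string;

const registry: Array<{ rx: RegExp; keys: string[]; fetch: ContextFetcher }> = [];

function register(pattern: string, fetch: ContextFetcher): void {
  const keys: string[] = [];
  const rx = new RegExp(
    "^" +
      pattern.replace(/<(\w+)>/g, (_, k: string) => {
        keys.push(k);          // remember the placeholder name
        return "([^/]+)";      // one path segment per placeholder
      }) +
      "$"
  );
  registry.push({ rx, keys, fetch });
}

function contextFor(path: string): string | undefined {
  for (const { rx, keys, fetch } of registry) {
    const m = path.match(rx);
    if (m) {
      const params = Object.fromEntries(keys.map((k, i) => [k, m[i + 1]]));
      return fetch(params);
    }
  }
  return undefined; // no tailored snapshot for this page
}
```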
Genesis house style is baked into every answer — commercial and residential pricing rules, agronomic preferences (no calendar pre-emergent in SW FL, no Cuban Royal as native, real-time diagnosis instead of fixed scripts), and communication rules (commas not em dashes, "proposed timeline" not "guaranteed", no per-home/month, disciplined tone). Sourced from CLAUDE.md and maintained alongside the rest of the operator's standing rules.
App support mode. When the question is about the app itself — something not loading, an error, sign-in trouble, photo upload failing, slow performance — Bubby shifts roles. It gathers the page, role, browser, exact error text; walks through quick fixes (hard refresh, sign out and back in, different browser, retry on cellular); and when the quick fixes don't solve it, emits a structured "TECH SUPPORT NOTE — VerdaKai" the operator can read or text to the maintainer verbatim. No more guessing what's broken from a one-line text message.
Site Survey expert. Bubby knows the survey suite end-to-end — the geocode → vision → DeepForest → SAM-2 → manual override flow, the basemap dropdown trade-offs, the recenter and refine-pins controls, NDVI's resolution caveats, the confidence and reliability gates, and the five most common operator questions with the right answer. The operator on a survey page can ask "why is the Esri tile gray?" and get the actual answer ("coverage runs out past z22 in residential areas — switch basemaps or back off zoom") instead of a generic "try refreshing."
Photo diagnosis inline. Camera and file-picker buttons sit next to the textarea. Snap a leaf, a pest, or a discolored patch — the photo is downscaled client-side to 1024px max, posted to the same ai-diagnose route Field ID uses, and the diagnosis lands as the next assistant message. A violet "photo in scope" chip surfaces above the composer; follow-up text questions stay on the diagnose route so Claude keeps reading the image.
Thread persistence. Conversations survive widget close and tab refresh, keyed per page-context (one thread per property, one shared thread for the site-survey workspace, one for quotes, one global). 60-message cap, 7-day expiry, in-header Clear button. Photo data URLs are intentionally not persisted — too large for localStorage, and a photo's diagnose conversation doesn't make sense without the image still in scope. A "Continued from earlier" hint appears at the top of restored threads so the operator isn't surprised by messages they didn't just type.
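The cap-and-expiry policy reduces to a pure pruning step applied before a thread is written back to storage. The Message shape is an assumption:

```typescript
// Prune a persisted chat thread: drop messages past the 7-day expiry,
// then keep only the newest 60.
interface Message { role: "user" | "assistant"; text: string; at: number }

const MAX_MESSAGES = 60;
const MAX_AGE_MS = 7 * 24 * 60 * 60 * 1000; // 7 days

function pruneThread(messages: Message[], now = Date.now()): Message[] {
  return messages
    .filter((m) => now - m.at <= MAX_AGE_MS) // expiry
    .slice(-MAX_MESSAGES);                   // cap
}
```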
Open to every role. Field tech, maintenance, maintenance admin, and admin all get the same Bubby — same context streams, same photo path, same support mode. The only thing role-gated is the admin sidebar entry; the floating widget appears on every authenticated screen.
Each scheduled visit renders as a single Stop Card containing the property's program, the pre-built mix from the program's monthly schedule, the tech's applicator license, and the current field-conditions notes. Closing the visit:
- writes an application_log row per applied product, attached to the tech's license number and an applicator-name field;
- on the card;
- dashboard updates in real time;
- customer.
End-of-day data entry is eliminated; the stop card is the authoritative source of truth. Mutations enqueue locally when the truck loses signal and replay on reconnect, so a tech in a dead zone never loses a closed-loop log.
Quote Lab supports residential and commercial pricing modes, a configurable number of visits per year, per-service-line cost and profit tracking with target margins, and three customer-facing exports: a print-ready estimate PDF, a signable browser-rendered estimate, and a copy-paste estimate text block.
Margins driven by a real cost engine. Quote Lab in VerdaKai mirrors Quote Lab in Program Designer. PA's PACalc is the master estimate engine; VerdaKai ports it into a TypeScript twin (src/lib/pa-cost-engine.ts) that consumes the same per-service cost configurations. Per-category margin tags use real cost-of-goods, painted in severity colors (ok, warn, bad) relative to PA's target margins (turf 45%, ornamental 50%, palm 55%, specials 60%) with PA's tolerance bands (1 percentage point off target → warn, 7 points off → bad). ZERO_COST and NEGATIVE_MARGIN flags surface inline when a category has revenue but no cost config in PA.
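The severity-band rule reduces to a small pure function over the stated targets and tolerances; boundary handling (inclusive vs. exclusive) is an assumption:

```typescript
// Severity coloring for a category's realized margin, relative to the
// target margins and tolerance bands described above.
const TARGET_MARGIN: Record<string, number> = {
  turf: 45, ornamental: 50, palm: 55, specials: 60, // percent
};

function marginSeverity(category: string, actualPct: number): "ok" | "warn" | "bad" {
  const target = TARGET_MARGIN[category];
  const off = Math.abs(actualPct - target); // percentage points off target
  if (off >= 7) return "bad";
  if (off >= 1) return "warn";
  return "ok";
}
```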
Customer signature pad with persistence. The signature canvas captures pointer-captured strokes (dragging past the edge doesn't break the drawing state), guards against an empty save, and persists into survey_trace.customer_acceptance with signature data URL, signed-at timestamp, signed-by name, signed-for address, accepted total, and accepted annual. Admin sees an emerald "Customer accepted" banner with the signature thumbnail on each saved quote in /field-ops/quotes.
Apply Survey Measurements widget. When a property's manual record diverges from the linked site survey's ai_measurements (>1% drift or >2 units), an amber panel appears on the property page showing Manual vs. Survey side-by-side for turf, ornamental, and palms. Per-field "Apply" buttons or a single "Apply all survey measurements" button copy the survey numbers into fieldops_properties so quotes, schedules, and pricing stop drifting from the actual scan.
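One plausible reading of the >1% / >2-unit rule, as a pure function — a 2-unit noise floor under a 1% relative threshold, so small tree-count deltas and sub-percent sqft deltas both stay quiet. The exact combination logic is an assumption:

```typescript
// Divergence test between the manual property record and the linked
// survey value. Flags only when the gap clears both a 2-unit noise
// floor and a 1% relative threshold.
function diverges(manual: number, survey: number): boolean {
  const diff = Math.abs(manual - survey);
  return diff > Math.max(0.01 * Math.abs(survey), 2);
}
```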
Quote freshness banner. Each quote's underlying site survey or property record is checked for updates more than an hour after the quote was generated. Stale quotes show an amber banner with the hours-stale count and an "Open property" jump so the operator knows to re-run pricing before sending.
Read-time soil test merge. The property page Soil Tests panel pulls pa_soil_tests rows fuzzy-matched on customer and address, renders them under "Legacy (Program Architect)" with pH, OM, CEC, and Na flagged red over 100 ppm. Read-only; no PA-side mutations.
Site Survey handoff. A site survey result hands off to Quote Lab via a server-side handoff record (72-hour TTL) accessed by URL, not session storage, so the link is durable across devices and tabs. The handoff carries the effective measurements, customer info, the source-certainty profile, and a survey_trace snapshot saved to the resulting quote for later audit. The Send to Quote Lab action does not enforce hard verification gates — source-certainty, pre-flight checklist, packet-stale, and heavy-canopy advisories are all surfaced as banners but do not disable the action; operators decide when a scan is ready to quote and the system records the decision rather than blocking it.
Estimate PDF styled to match Program Designer. /api/estimate-pdf mirrors public/pa/customer-estimate.html exactly — emerald palette (--green: #059669), white card, 3px green underline header, emerald-on-white pricing accent. The VerdaKai estimate and the PA customer estimate read as the same document.
Customer-facing community PDFs are the most data-dense deliverable in the product. The structure deliberately reads as a property dossier rather than a quote — every section answers a specific question an HOA board member, a property manager, or a sales prospect would ask before approving the program.
Property Composition card. Three-card visual splitting the lot's square footage across treated turf vs. ornamental beds vs. non-treatment exclusions, with percentages. The polygon-level zone segmentation surfaces as a customer-readable composition.
Methodology & Data Sources table. Names the actual production stack — Esri World Imagery for the aerial layer, Claude Sonnet 4.6 multimodal vision for zone segmentation, DeepForest neural network (NEON aerial RGB training set) for tree detection, SAM-2 for pin refinement, public county GIS for parcel boundaries. Includes an explicit operator-review callout noting that any AI-derived value affecting chemistry rates or contract scope is reviewed on-site.
Service Calendar. Month-by-month agronomic focus reflecting SW Florida timing — pre-emergent windows, chinch bug peak, palm Mn/K drench timing, large patch onset, soil-temp triggers. Renders as a 12-visit/yr calendar by default with 8-visit and 4-visit fallback when fewer visits are selected. Liability disclaimer auto-attaches below the 8-visit minimum.
What's Included Each Visit. Plain-English breakdown of what every visit covers — walk-the-property inspection by a certified horticulturalist, scouting-driven chemistry not a fixed script, slow-release nutrition calibrated to soil temperature, palm-specific care when palms are in scope. Outcome language; no equipment-specific commitments that would create contractual exposure if a visit can't deliver.
Regional Risk Profile. Sarasota / Manatee / Charlotte / Lee specific pressure list. Always includes chinch bug pressure on St. Augustine and Large Patch fungal pressure October–April. Conditionally adds palm-specific risks (Lethal Bronzing, Ganoderma, Pestalotiopsis, K and Mn deficiency on alkaline coastal soils) when palms are in scope, oak protection when oaks are in scope, saltwater intrusion for coastal counties, and the active SWFWMD watering restrictions so the timing decisions read as informed.
Per-Property Breakdown. Every selected lot in the HOA rendered with its individual turf / ornamental / palm / oak / tree counts, phase tags inline, sorted by phase → street → numeric address prefix. Unscanned lots show "—" so the membership table stays complete instead of just the scanned subset.
Pricing Breakdown. Service-level pricing with billable square footage, rate per 1,000 square feet (commercial rates for HOA work), per-visit cost, visits per year, and annual total. Exclusion savings line surfaces when the lot has retention ponds, common areas, or other non-treatment zones removed from the billable turf.
The PDF imagery comes from Esri (sharper, no Google Maps key dependency on the rendering side). The customer-facing copy follows a strict no-equipment-promises rule: outcome and strategy language only, no specific brand names, no specific staff names, no specific cadence promises, no resolution claims. Liability exposure is intentionally minimized.
FDACS applicator licenses, Certificates of Insurance, W-9s, workers-comp policies, business licenses, and continuing-education units are tracked per crew member. Three printable PDF reports — Maintenance Crew Compliance, Spray Crew Compliance with FDACS license numbers, and Combined Compliance with an AI-generated posture summary — are produced from the same data. Expiring licenses surface day-counters on the dashboard; CEU shortfalls appear as progress bars on the crew detail page.
A Compliance Center surface aggregates the same data into a single view with filters by license type, expiry window, and crew member. HOA boards request the Combined Compliance PDF often enough that the dashboard surfaces a one-click download next to the crew member's name.
Audit log with AI summarization. Authentication events (signin, signup, rate-limit, admin action, IP block) are recorded with IP geolocation and user-agent metadata to an audit log behind an admin-only tab. A 0–100 risk score auto-loads on every dashboard view; natural-language queries against the log return cited answers via Claude. A weekly digest summarizes the prior seven days and emails to the admin.
Hard guardrails layered under the AI. IP auto-block at a tunable failure threshold (default 40 in 5 minutes for a 60-minute block), account lockout at 5 failures in 15 minutes, impossible-travel detection (alert at implied speeds above 900 km/h), and persistent rate limits surviving server restarts. A 90-day retention cron purges old events nightly. Thresholds are admin-editable from Security → Settings without redeployment.
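The impossible-travel check is a haversine distance over elapsed time. A minimal sketch; the LoginEvent shape is illustrative:

```typescript
// Impossible-travel detection: alert when two login geolocations imply
// a travel speed above the threshold (default 900 km/h).
interface LoginEvent { lat: number; lng: number; atMs: number }

function haversineKm(a: LoginEvent, b: LoginEvent): number {
  const R = 6371; // mean Earth radius, km
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLng = toRad(b.lng - a.lng);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

function impossibleTravel(prev: LoginEvent, next: LoginEvent, maxKmh = 900): boolean {
  const km = haversineKm(prev, next);
  const hours = (next.atMs - prev.atMs) / 3_600_000;
  if (hours <= 0) return km > 1; // simultaneous logins from two places
  return km / hours > maxKmh;
}
```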
Access Control matrix. Maps modules to roles with per-user overrides, time-bounded role elevation, and an AI advisor that suggests grants based on the user's role, title, and 30-day activity. Drift reports flag granted modules unused in the last 30 days. Permission snapshots export as CSV for board review.
Invite-only elevation. Admin and Maintenance Admin are invite-only; public signup is constrained to Spray Tech and Maintenance Crew. Admin invites carry an explicit role and a 72-hour expiry token; URL-typing is not a route to elevated content because every server-side gate re-checks the resolved role against the access matrix.
The Education Center tracks per-topic accuracy across pesticide safety, L&O, pest ID, turf chemistry, and ornamental. Topics falling below 70% accuracy escalate to amber on the dashboard; the AI Coach opens as a one-on-one tutor that drills weak topics with Florida-specific regulatory context (FDACS rules, restricted-use materials, Worker Protection Standard, the Manatee County fertilizer blackout window June 1 through September 30, the SWFWMD modified-phase irrigation rules).
The Coach is grounded in the same Plant Brain catalog the diagnosis routes use, so an answer to "what is large patch" cites the same pathogen record the operator's scouting report would attach to the property file.
Every Program Designer Suite feature has a corresponding native VerdaKai mobile surface. PDS itself is gated off mobile (a client-side guard redirects phone visits to /field-ops); on the phone, the admin sees:
- fully editable from the phone keyboard.
- rate, timing, and app-support questions the desktop assistants handle.
- notes, and a tile gallery filterable by property.
- editors — full read/write parity with the desktop versions.
- receive with line items from the Materials catalog), Operations (live cash pulse with YTD revenue / cost / profit), Estimate History (sent-quote browser with outcomes), Soil Test Interpreter (PDF upload + Claude Sonnet 4.6 PDF parsing of pH, OM, CEC, NPK, micronutrients, and salt indicators, with Sarasota-specific interpretation and per-property linking), Label Center (pesticide label reference), and Trends & Insights (year-over-year + cohort analytics).
Every interactive control is sized for at least a 44px tap target. Every client-only API (camera, clipboard, storage) has a graceful fallback so the application functions on a mobile browser over LAN HTTP as well as on Vercel over HTTPS. The application installs as a PWA on iOS and Android.
The Site Survey single-property view surfaces operator-meta in real time: handoff readiness score, source certainty per zone, draft auto-save status (Saving / Saved · Xs ago / Retrying / Save failed / Offline · queued), zone-overlap dedup (geodesic union per type so overlapping AI polygons don't double-count), and a manual override row for every measurement including individual tree-type counts (palms, oaks, shade, ornamental).
A sticky operator panel hosts the Measure Controls bar with a collapsible details pane covering pre-flight checklist, verify mission status, eco recommendations with explainability, timeline history with two-snapshot diff picker, ops telemetry, and operator metrics. The panel collapses by default to keep the satellite map on screen.
Eco intelligence and QA dashboard. The Site Survey QA dashboard at /field-ops/site-survey/qa-dashboard aggregates recent funnel events, flags data-sanity anomalies (lot counts, zero net turf with selections, missing scan type on large selections), and renders three persisted analytics surfaces:
Eco recommendations — pest-control intensity recommendations with the reason codes that drove the assignment, rendered as plain prose.
Timeline diffs — a two-snapshot picker that produces labeled before/after diffs (readiness, unresolved risks, effective measurements per zone, recommendation hash).
Aggregate metrics — readiness, risk count, and escalation frequency aggregated across recent surveys with bar visualizations.
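The data-sanity checks lend themselves to simple predicates over a survey row; this sketch assumes hypothetical field names and a made-up threshold for "large selections":

```typescript
// Illustrative sketch of the QA dashboard's data-sanity checks; field
// names and thresholds here are assumptions, not VerdaKai's actual schema.
interface SurveyRow {
  lotSqft: number;        // parcel area
  netTurfSqft: number;    // turf remaining after zone-overlap dedup
  selectionCount: number; // zones the operator selected
  scanType?: string;      // e.g. "full" | "turf-only"
}

const LARGE_SELECTION = 5; // assumed cutoff for a "large selection"

function sanityFlags(row: SurveyRow): string[] {
  const flags: string[] = [];
  // Lot-count sanity: a non-positive parcel area is never legitimate.
  if (row.lotSqft <= 0) flags.push("implausible lot size");
  // Selections exist but dedup produced zero turf — something's off.
  if (row.netTurfSqft === 0 && row.selectionCount > 0)
    flags.push("zero net turf with selections");
  // A big selection with no recorded scan type can't be audited later.
  if (!row.scanType && row.selectionCount >= LARGE_SELECTION)
    flags.push("missing scan type on large selection");
  return flags;
}
```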
Each completed scan exposes Download PDF, Email PDF, and Save to Drive actions in a dedicated panel below the Send to Quote Lab action grid. The buttons fetch the PDF as a Blob via JS so a failure surfaces inline (with a tailored "tap Save Survey first" hint when the row hasn't been saved yet) instead of dumping the operator into a 404 page in a new tab. Save to Drive triggers the existing PDF download and opens drive.google.com in a new tab; direct OAuth-based Drive upload is a planned follow-up.
A customer-facing read-only link is HMAC-signed with a 7-day expiry and renders a printable HTML report at /field-ops/site-survey/public-report.
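A signed, expiring link of this shape can be sketched with Node's built-in crypto; the payload, query parameters, and helper names here are assumptions, not the production implementation:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Hypothetical sketch of an HMAC-signed share link with a 7-day expiry.
const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;

function sign(surveyId: string, expiresAt: number, secret: string): string {
  // Signature covers both the id and the expiry so neither can be tampered with.
  return createHmac("sha256", secret)
    .update(`${surveyId}.${expiresAt}`)
    .digest("hex");
}

function makePublicLink(surveyId: string, secret: string, now = Date.now()): string {
  const exp = now + SEVEN_DAYS_MS;
  const sig = sign(surveyId, exp, secret);
  return `/field-ops/site-survey/public-report?id=${surveyId}&exp=${exp}&sig=${sig}`;
}

function verifyLink(surveyId: string, exp: number, sig: string, secret: string, now = Date.now()): boolean {
  if (now > exp) return false; // expired links fail before any crypto work
  const expected = sign(surveyId, exp, secret);
  const a = Buffer.from(sig, "hex");
  const b = Buffer.from(expected, "hex");
  return a.length === b.length && timingSafeEqual(a, b); // constant-time compare
}
```

Because the expiry rides inside the signed payload, extending a link requires re-signing rather than editing a query parameter.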
Every AI-derived value on screen names the specific model that produced it and an honest accuracy band. The reusable AiSourceBadge component surfaces:
DeepForest (tree detection on aerial RGB) — counts and locates individual trees from satellite imagery, ~85–92% recall on isolated crowns, drops in dense canopy.
SAM (promptable segmentation) — refines tree-pin masks and zone boundaries from a click or box prompt, high mask quality on well-defined edges.
Claude Sonnet 4.6 (vision) — identifies plant species and diagnoses pest/disease symptoms, strong on common SW Florida species.
Claude Sonnet 4.6 (PDF parsing) — extracts pH, OM, CEC, NPK, micronutrients, and salt indicators from soil test PDFs.
Claude Sonnet 4.6 (text) — drafts summaries, recommendations, and customer-facing language.
Sentinel-2 NDVI — vegetation greenness, 10 m pixel resolution, 5-day revisit, cloud cover gaps.
PA Cost Engine (deterministic) — computes service cost-of-goods from configured rates per 1,000 sqft / per palm.
Tap the badge to see the long form: full model name, what it does on this specific value, the accuracy band, and the standing "Not 100% accurate. Verify before treating, billing, or signing" caveat. The principle applies equally to operator-facing surfaces (so the tech knows what to re-check) and customer-facing surfaces (so the customer sees the methodology behind the measurements).
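One plausible data shape behind such a badge (the actual AiSourceBadge props are not documented here) is a small typed record per AI surface:

```typescript
// Hypothetical data shape behind an AI-source badge; the real
// AiSourceBadge component props may differ.
interface AiSource {
  model: string;        // full model name shown in the long form
  role: string;         // what the model does on this specific value
  accuracyBand: string; // honest accuracy band shown beside the value
  caveat: string;       // standing verification warning
}

const DEFAULT_CAVEAT =
  "Not 100% accurate. Verify before treating, billing, or signing.";

// Long-form text a tap on the badge would reveal.
function badgeLongForm(src: AiSource): string {
  return `${src.model} — ${src.role} (${src.accuracyBand}). ${src.caveat}`;
}

// Example entry, using the tree-count figures quoted above.
const treeCount: AiSource = {
  model: "DeepForest",
  role: "counts and locates individual trees from satellite imagery",
  accuracyBand: "~85–92% recall on isolated crowns",
  caveat: DEFAULT_CAVEAT,
};
```

Centralizing the caveat string keeps the "verify before acting" language identical on operator-facing and customer-facing surfaces.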
18.
Supabase (Postgres, Auth, Storage), Vercel Fluid Compute deployment.
Claude Sonnet 4.6 handles both vision (plant and pest identification, zone segmentation, soil-test parsing) and text (coaching, summary generation, reply drafting, agronomic narrative, security risk scorer). Field-verified corrections and Knowledge Notes are prompt-injected as institutional memory.
DeepForest (tree-crown detection trained on an aerial-imagery training set) running as a worker outside the Vercel runtime, reachable via DEEPFOREST_SERVICE_URL.
A promptable mask service for click- and box-prompted segmentation refinement.
Maps JS for parcel overlays and Street View, Nearmap (when configured) for sub-meter aerials, Google Places for address autocomplete, county GIS endpoints for parcel polygons.
PDS is an HTML/JavaScript SPA sharing state with VerdaKai through Supabase. It runs the program designer, route schedule, master price list, and HR-adjacent employee management. The PA Cost Engine inside PDS is the master estimate engine; VerdaKai's TypeScript port mirrors it.
Data residency. Single Supabase organization (KaiRos org slug genesis) with row-level security enabled on every table from day one. Service-role key bypasses RLS for the server-side API proxy only and is never exposed to the client. Multi-tenant separation inside the org is by org_id foreign key on every table; queries filter by the operator's resolved org at the route layer.
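The route-layer filter discipline can be illustrated with a tiny helper that never builds a query without an org scope; the URL shape mimics PostgREST (which Supabase exposes), and the helper itself is hypothetical:

```typescript
// Illustrative sketch: every route-layer query carries the resolved
// org_id filter. Helper name and URL shape are assumptions, modeled on
// PostgREST-style query strings, not VerdaKai's actual data layer.
function orgScopedQuery(
  table: string,
  orgId: string,
  extra: Record<string, string> = {},
): string {
  // org_id is appended unconditionally, so callers cannot forget it;
  // RLS remains the backstop if a query slips through unscoped.
  const params = new URLSearchParams({ ...extra, org_id: `eq.${orgId}` });
  return `/rest/v1/${table}?${params.toString()}`;
}
```

The belt-and-suspenders pattern — RLS at the database plus an unconditional filter at the route layer — means a missing filter degrades to an empty result set, never a cross-tenant leak.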
Realtime sync. Supabase Realtime streams Quote Lab edits, program changes, schedule mutations, and chat messages. PDS reads the same channels so a quote saved on a phone surfaces instantly on the desktop dashboard.
Offline resilience. Mutations enqueue locally on signal loss (stop-card closes, scout reports, quote saves) and replay on reconnect. The autosave indicator surfaces queue state explicitly.
VerdaKai writes customer-facing copy under a strict liability rule. Customer-readable surfaces (PDFs, customer-estimate page, proposal language, mailbox templates, public report) describe what the program achieves, not the specific tools used to get there.
The rule applies as concrete prohibitions:
Never name specific equipment in customer copy. By naming a specific piece of equipment, the business commits to using that equipment on every visit; if a visit can't deliver it, the customer has a written commitment to point at.
Never promise anything "on request" that an HOA board could reasonably expect to be produced on demand.
Never commit named staff to inspection cadences ("Richard Winn, FNGLA Certified Horticulture Professional inspects monthly"). Staff turnover is normal; cadence claims are a contract.
Never name active ingredients (Prodiamine, bifenthrin, azoxystrobin, oxytetracycline) in customer copy. Resistance management routinely rotates active ingredients; locking in a brand is a corner the program shouldn't paint itself into.
Never cite imagery-resolution specifics that a board could test against. Imagery providers vary resolution by region and capture date.
Internal crew packets, audit logs, and operator-only surfaces can reference equipment, brands, and cadences freely — they aren't contractual.
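One way to hold this line mechanically is a pre-publish lint over customer-facing copy; the banned-term lists below are illustrative, seeded only from the examples above, and the checker itself is hypothetical:

```typescript
// Toy pre-publish lint for customer-facing copy. Term lists are
// illustrative samples, not VerdaKai's actual rule set; a real checker
// would run alongside the PDF and proposal generators.
const BANNED_TERMS: Record<string, string[]> = {
  "active ingredient": ["prodiamine", "bifenthrin", "azoxystrobin", "oxytetracycline"],
  "cadence claim": ["inspects monthly", "every visit"],
};

function lintCustomerCopy(copy: string): string[] {
  const lower = copy.toLowerCase();
  const hits: string[] = [];
  for (const [category, terms] of Object.entries(BANNED_TERMS)) {
    for (const term of terms) {
      if (lower.includes(term)) hits.push(`${category}: "${term}"`);
    }
  }
  return hits;
}
```

A substring lint like this over-flags (e.g. "every visit" in internal notes), which is fine: the rule is that customer copy gets reviewed, not auto-rejected.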
| Role | Surface |
|---|---|
| Admin | Full surface including AI memory, mailbox, dispatch, fleet |
| Spray Tech | Today, Stop Cards, Field ID, Education, Quote Lab, Materials |
| Maintenance Crew | Today, Stop Cards, Field ID, Education |
| Maintenance Admin | Mow-crew roster, scoped Site Survey, maintenance compliance PDF |
Role gates are enforced server-side; URL-typing is not a route to elevated content. Spray pricing, AI memory, and the spray-crew roster are admin-only. Bubby is open to every role with the same context streams.
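Server-side enforcement reduces to an allow-list check before any route handler runs. This sketch uses the roles and surfaces from the table above; the surface keys and the helper are hypothetical:

```typescript
// Illustrative server-side role gate keyed off the role/surface table.
// Surface keys are assumptions; the real route map may differ.
type Role = "admin" | "spray-tech" | "maintenance-crew" | "maintenance-admin";

// Admin-only surfaces named in the text: AI memory, spray pricing,
// spray roster, plus mailbox/dispatch/fleet from the role table.
const ADMIN_ONLY = new Set([
  "ai-memory", "mailbox", "dispatch", "fleet", "spray-pricing", "spray-roster",
]);

const ROLE_SURFACES: Record<Role, Set<string>> = {
  "admin": new Set(), // checked separately: admin sees the full surface
  "spray-tech": new Set(["today", "stop-cards", "field-id", "education", "quote-lab", "materials"]),
  "maintenance-crew": new Set(["today", "stop-cards", "field-id", "education"]),
  "maintenance-admin": new Set(["mow-roster", "site-survey", "maintenance-compliance-pdf"]),
};

// Deny-by-default: typing a URL for an unlisted surface gets a refusal.
function canAccess(role: Role, surface: string): boolean {
  if (role === "admin") return true;
  if (ADMIN_ONLY.has(surface)) return false;
  return ROLE_SURFACES[role].has(surface) || surface === "bubby"; // Bubby is open to every role
}
```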
VerdaKai is built and maintained by Kai'Ros International Technologies (KAIR). All revenue funds the Kai'Ros International orphanage in Kumasi, Ghana. Every line of code serves something larger than the product.
Reference deployment: verdakai-sigma.vercel.app. Source: private repository under the kairos-international-tech GitHub organization. Technical reference: Operator's Manual. Contact: rick@kairosinternationaltechnologies.com.