# Chaparral Software

> Chaparral Software is an independent AI diagnostics and consulting firm founded in 1986 in Agoura Hills, California. The principal is Russ Kohn, a Systems Architect with 40 years building production systems for organizations from auctioneers to zoologists, now helping people and organizations find ground truth in their AI implementations.

- URL: https://chapsoft.com
- Principal: Russ Kohn
- Founded: 1986
- Location: Agoura Hills, California

## Ground Truth Assessment

- URL: https://chapsoft.com/ground-truth

Only 5–12% of organizations achieve significant financial impact from AI (McKinsey 2025 n=1,993; BCG 2025 n=1,250; PwC 2026 n=4,454). The difference isn't the technology -- it's whether anyone independently assessed the problem before committing resources. Organizations that define clear success metrics before starting generate 2.1x more ROI (BCG 2025 n=1,803). The ones that skip structured assessment abandon 42% of their initiatives (S&P Global 2025 n=1,006).

A Ground Truth Assessment is an independent technical diagnostic that tells you what's actually true about your technology -- not what someone is selling you. The audience is non-technical decision makers: founders evaluating vendor proposals, executives with AI projects that aren't delivering, and PE/VC partners needing technical assessment mid-deal.

The assessment uses a two-stage model. Stage 1 is Discovery: a structured conversation that converts a buyer's situation -- half-formed requirements, a vendor pitch deck, a CTO who talks past them -- into something that can be rigorously assessed. This draws on 40 years of architecture and consulting experience. Stage 2 is Assessment: the structured requirements run through an analysis and verification pipeline that produces five deliverables:

1. **Claim Register** -- Every testable assertion in your documents, extracted, catalogued, and verified against evidence. Not opinions. Claims, tested.
2. **Landscape Scan** -- Rapid survey of the relevant technical territory, surfacing what your documents don't know they're missing.
3. **Gap Map** -- Systematic identification of technical, legal, operational, data readiness, and scope gaps, positioned and structured, not just listed. Includes an honest assessment of whether the data foundation can support what's being proposed.
4. **Campaign Plan** -- Sequenced, gated action plan with measurable success criteria and explicit kill points. Not aspirational recommendations -- operational plans with milestones, defined outcomes, and the discipline to stop if the evidence says stop.
5. **Velocity Check** -- Data-driven timeline calibrated to how software actually gets built now, including AI-paired development. Are the speed claims realistic?

### Sample Findings — Three Assessments

**Enterprise platform (three failed launches).** A specialty consumer products company hired an external development team to rebuild their platform. After three failed go-live attempts, independent assessment found: a pricing vulnerability where tiered pricing logic existed only in frontend JavaScript (the backend accepted any submitted price); 41 phantom API endpoints — AI-generated frontend service calls to backend routes that did not exist, while 8+ real endpoints went unwired; a 22-point quality gap between backend (63%) and frontend (41%); and zero test files in either repository. The three failed launches were traced to a systemic cause: no verification gate between development and production.

**Civic org document portal (architectural impossibility).** A civic organization needed a public document portal. A developer delivered a 482-line handoff spec estimating 12 days to build.
Assessment found: an architectural impossibility (the spec called for Python tools running in Cloudflare Workers, which execute in a V8 isolate and cannot run Python); two unmentioned compliance gaps (WCAG 2.1 AA, California Public Records Act); a 5x document count discrepancy (the spec stated ~400; the actual count was 2,069 across 4 parsing patterns); and insufficient admin security. The project as scoped was roughly half of what was actually needed.

**Vibe-coded business tool (5 field failures).** A service business owner used AI coding assistants to build a pricing calculator and proposal generator — 600 lines, a single HTML file, no developer involved. Assessment verified 28 claims and found 5 field-readiness failures: a custom service frequency bug producing incorrect totals, iOS popup blocking on the primary field device (tablets), no business terms on proposals, no commercial client mode, and no proposal history. The tool was fundamentally sound (8 of 28 claims failed) but would have broken in front of customers. Five of seven recommended fixes were AI-assistable. Total elapsed time: one conversation over coffee.

## Dispatches -- Shifting Ground

- URL: https://chapsoft.com/dispatches
- All dispatches available as clean markdown at /dispatches/{slug}.md (e.g. https://chapsoft.com/dispatches/fog.md)

Shifting Ground is a publication of dispatches from the AI transformation -- field reports from a 40-year software veteran walking the same terrain as the reader. The dispatches are honest, specific, and make no guru promises. They cover AI career disruption, production AI realities, and the gap between hype and ground truth. Published 3-4 times per month, plus cairns (short technical trail markers). Every dispatch goes through a 5-stage editorial pipeline (Sift, Forge, Thrash, Rattle, Lint) where AI assists with structure and pacing, and the voice is human.
### Why AI Projects Fail

- URL: https://chapsoft.com/dispatches/why-ai-projects-fail
- Published: April 6, 2026

What the evidence says when you actually trace the citations. The author ran a formal assessment of 50+ published studies on AI project failure using the same structured pipeline built for client work. An adversarial review process caught weak statistics in his own site draft before publication -- the 80% failure rate traces to a 2018 Gartner forecast, not RAND's research; the 55% regret figure links to a blog, not Forrester's actual report. The process worked, but the resistance to questioning numbers that felt authoritative is a universal calibration problem, not a personal failing. A 2025 scoping review (working paper) concluded none of the major studies use probability sampling or standardized definitions.

What IS defensible: only 5-12% of organizations achieve significant enterprise financial impact from AI (McKinsey, BCG, PwC convergent). 90% of 6,000 executives across four countries reported no measurable AI impact (NBER, Bloom et al.). 70% of AI implementation challenges are organizational (BCG, practice-derived). Companies acted on AI's potential, not its evidence: NYC WARN Act filings show zero of 160+ companies checked the AI box despite public claims; Oxford Economics called the AI layoff narrative "convenient corporate fiction." A preregistered meta-analysis of 106 studies (Nature Human Behaviour) found human-AI team performance depends on task type and integration design, not human presence alone.

The dispatch identifies three layers of verification most organizations skip (goals, requirements, implementation) and argues the discipline of structured assessment -- combined with better models -- is what separates the 5% that succeed. Includes a claims register tracing every factual claim to its primary source.
### Fog

- URL: https://chapsoft.com/dispatches/fog
- Published: April 2, 2026

A meditation on navigating professional uncertainty in the age of AI. From punch cards to frontier models in three short years -- prompt engineering to context engineering to agentic orchestration -- we're moving faster than our senses can apprehend. The dispatch uses the metaphor of driving in fog with headlights that only reflect a shallow bubble, exploring how even those engrossed in the technology are confronted with the inevitability of dislocating change. It concludes that when the instruments fail, you navigate by the things that don't change: connections, judgment, and the willingness to walk into the fog and report back.

### The Garden Timer

- URL: https://chapsoft.com/dispatches/the-garden-timer
- Published: March 31, 2026

Starting from a broken garden timer and a helpdesk queue, this dispatch traces a line from consumer frustration to existential career questions. The central metaphor is a capacitor -- the hidden state that holds charge after you think you've turned everything off. How many systems, careers, and assumptions about personal value are holding onto charge from a world that's already been unplugged? The tools democratized the visible part of the craft. The dispatch asks: where is my value, what is my identity, how will I continue to pay the bills? It ends with a refusal to wait inside: "Hello future. I'm Russ. Let's dance."

### The Twenty Percent

- URL: https://chapsoft.com/dispatches/the-twenty-percent
- Published: April 1, 2026

AI tools democratized the first 80% of building software -- the working demo, the responsive layout, the form that submits. The remaining 20% is invisible: edge cases, race conditions, compliance requirements, data models that collapse at scale, business models with unrealistic assumptions. Only 5–12% of organizations achieve significant financial impact from AI (McKinsey, BCG, PwC convergent). The 20% is usually why.
The dispatch introduces the Greek concept of phronesis (practical wisdom) -- the embodied knowledge that accumulates through decades of watching things fail. Every mid-career professional is about to have the vertigo moment when someone with less experience produces output that looks indistinguishable from theirs. The answer: your job was never making things; it was knowing which things will stand and which will fall.

### The Moat Is the Harness

- URL: https://chapsoft.com/dispatches/the-moat-is-the-harness
- Published: April 4, 2026

An analysis of the Claude Code source leak (512,000 lines accidentally shipped in an npm package, March 31, 2026). While 47 public analyses asked "what's inside?", this dispatch asks the practitioner's question: what does this mean for people building tools for AI agents? It identifies five architectural pillars (prompt-as-protocol, streaming generators, fail-closed defaults, file-based state, layered context) and argues the moat isn't the model -- it's the harness. The code also reveals references to KAIROS (an autonomous daemon mode) and Mythos (a next-generation model tier), pointing toward a phase transition where the model itself becomes the agent and the harness transforms from execution environment to oversight layer. Better execution demands better oversight, not less.

## Cairns

- URL: https://chapsoft.com/cairns

Cairns are trail markers -- short technical notes, lessons learned, and observations from building production AI. Named after the stacked stones that mark safe passage on mountain trails, they are available via RSS. They are not emailed; the dispatches are the subscription product. Cairns are the breadcrumb trail you find when you're searching.

- **Perpetual tomorrow** (2026-04-15) -- Tests that get skipped are always skipped. Code that has a comment "defer" will always be deferred. Tomorrow never comes. Express sequence, not time -- dependency is timeless.
- **Stage names are not stage contracts** (2026-04-12) -- When a process moves to a new domain, the stage names may stay the same while the behaviors diverge. Stop inferring from names. Re-state the stage contract, scope, allowed actions, and expected output. High-level resemblance is the danger, not the safety.
- **Code is increasingly ephemeral** (2026-04-10) -- When the cost of build + test is close to the cost of procure + integrate, it is almost always better to build. But the economics only work if you include the test. Build alone is vibe coding. Build + test is engineering.
- **Context in the window is not attention in focus** (2026-04-08) -- A model's competence in one domain doesn't transfer to the next -- but it all too often feels like it does. Fight the temptation toward complacency. Never assume the model knows what you haven't explicitly had it remember, or find.
- **Forty-seven pundits and nobody asked our question** (2026-04-04) -- We surveyed 47 public analyses of the Claude Code leak. Every one asked "What's inside?" We asked: "What does Claude Code tell us about building better tools for agents?" The pundits read the map. The wayfinder reads the ground.
- **Don't infer what you can detect** (2026-04-04) -- A regex fires every time, costs zero tokens, has zero hallucination risk, and is trivially debuggable. Use deterministic methods for what can be determined deterministically. Save inference for what requires judgment.
- **Response budgets: be a good citizen in someone else's context window** (2026-04-04) -- MCP server tool responses land inside someone else's context window. Cap responses. Include a truncated flag and total_count. Budget your output like a conference talk.
- **Metadata now, enforcement later** (2026-04-04) -- The cost of an unused field is zero. The cost of retrofitting a field into a running system is a migration. Add metadata for autonomous agents now, enforce later.
- **Self-correction is not verification** (2026-04-04) -- Self-correction catches tactical errors. Verification catches strategic errors. Better execution demands better oversight, not less.
- **The introspection loop** (2026-04-04) -- Writing prompts for a model that doesn't exist yet. You're writing letters to a future colleague and hoping they'll be kind.
- **Don't vibe code the orchestration layer** (2026-04-03) -- When setting up an agentic orchestration workflow to overcome the limitations of vibe coding, don't vibe code the orchestration layer.
- **Schema.org @id references across pages** (2026-04-03) -- Define a Person entity with @id on your homepage, reference it by @id on every other page. AI knowledge graphs use this to deduplicate.
- **CSS class names as documentation** (2026-04-03) -- Naming a class .geo-quotable instead of .highlight-box tells developers what the element is FOR and makes the design system self-documenting.
- **Variable fonts: one file, every weight** (2026-04-03) -- Source Serif 4 as a variable font is one 417KB WOFF2 file covering weights 200-900. The non-variable alternative would be 6+ files over 1MB.

## About Russ Kohn

- URL: https://chapsoft.com/about

Russ Kohn came out of UCLA with a chemistry degree and a computing habit. He has been building software since punch cards were a recent memory and acoustic modems were cutting edge -- databases, web services, mobile, cloud -- every wave of technology disruption, building through it, figuring out what worked by watching what didn't. For the past three years, he has been building production AI systems: knowledge graphs, orchestration pipelines, verification frameworks, and the tooling to run them. Not demos or prototypes -- systems that run, that are tested, that handle edge cases. His core capability is cross-domain pattern recognition combined with production systems engineering.
His value is not "I can build things you can't" but "I know which things will stand and which will fall, because I've watched hundreds of beautiful things collapse when they hit reality." He is also navigating his own AI-driven career transformation and writes about the experience publicly through the Shifting Ground publication.

## Chaparral Software -- Company

Founded in 1986 in Agoura Hills, California. The name comes from the chaparral -- the tough, drought-adapted scrubland of Southern California. Plants that survive fire, come back stronger, and hold the hillside together while everything around them shifts.

Forty years building production systems for businesses of every size -- from independent shops like Dawson's Book Shop and A.N. Abell Auction Company to global brands like Coca-Cola, William Morris Agency, and Blue Cross of California. The longest engagement is 24 years and counting at FantaSea Yachts. Databases, APIs, cloud infrastructure, and now AI -- each era building on the last. A technical article on AI token management reached 52,000 views and a #1 Google ranking -- not from marketing, but from production experience written down. Custom systems, architecture, and independent AI assessments. The common thread: data that has to be right in production.

## The Trail So Far

- URL: https://chapsoft.com/the-trail-so-far

The arc of Chaparral Software from 1986 to production AI. Four decades of building production systems, each era building on the last.

## Selected Clients

- URL: https://chapsoft.com/selected-clients

### Consumer & Retail

A.N. Abell Auction Company, AAMCO Transmissions, Adidas, Dawson's Book Shop, FantaSea Yachts, Kal Kan Pet Care, The Coca-Cola Company

### Education & Research

California State University Fullerton, UCLA Jonsson Cancer Center Foundation, UCSF, USC

### Entertainment & Media

American Recordings, Anohana Production Management, Breakdown Services, Imaginary Forces, LA/NY Music, Lind Data, MCA Music Entertainment Group, Paul Vitello Productions, Performing Tree, The WB Television Network, Walt Disney Studios, William Morris Agency

### Healthcare & Life Sciences

Amgen, Blue Cross of California, Fractal Medical Solutions, House Ear Institute, National Medical Review Office, Regenix, Target Therapeutics

### Manufacturing & Defense

GlennDee/MGI, Lockheed Martin

### Professional Services

Arthur D. Little, Smart Corporation

### Sports & Nonprofit

Aggressive Skaters Association, Los Angeles Opera, Opportunities & Services for Seniors, US Olympic Wrestling Team

### Technology

Belkin, FileMaker, Inc. / Claris, SD Media

## The Arc -- Capability History

The thread through 40 years: complex data, real constraints, systems that have to work in production.

- **1986-2000: Databases** -- Custom database solutions. FileMaker Pro and Omnis when application platforms were the serious tools for serious data problems. Sybase, MySQL, and Postgres when relational databases were the backbone of everything.
- **2000-2015: APIs & Web Services** -- RESTful architectures, Python, Node.js. The shift from desktop applications to connected systems. Integration work across healthcare, entertainment, and education. FileMaker stayed in the mix throughout. Designed, built, marketed, and supported two self-funded commercial utilities for the FileMaker developer community -- Brushfire and EZxslt. Both were popular and critically acclaimed (reviewed in Macworld, MacTech, and ISO FileMaker Magazine), with thousands of users each.
- **2015-2023: Cloud** -- AWS RDS, DynamoDB, API Gateway.
Enterprise-scale systems for regulated and complex environments. FileMaker work continued in parallel for long-running clients.
- **2023-Present: Production AI** -- Knowledge graphs, orchestration pipelines, verification frameworks, and custom MCP tooling -- a production platform used daily for client assessments and internal operations, plus a public MCP server at chapsoft.com for AI-agent discovery. Not demos -- systems that run, that are tested, that handle edge cases.

Each era built on the one before it. Forty years of compounding technical judgment.

## FileMaker + AI

Many of Chaparral's longest client relationships started with custom FileMaker solutions -- systems running production workloads for 10, 15, sometimes 20 years. Russ Kohn has been a frequent speaker at FileMaker developer conferences, and in 2023 presented to Claris on the AI implications for the platform. Deep FileMaker API integration experience informs his approach to leveraging FileMaker as a persistence layer backing inference solutions. In 2023 he published a case study on using a custom FileMaker solution to mediate GPT-4 conversations and structure prompt pipelines for manufacturing schema design.

## Publications

- URL: https://chapsoft.com/the-trail-so-far

Four articles published between January and April 2023, early in the AI wave:

- **"Mastering Token Limits and Memory in ChatGPT and other Large Language Models"** (March 2023, Medium) -- 52,000 lifetime views, 28,000 reads. The #1 article on token limits through most of 2023 and 2024. A working engineer's explanation of a real problem, written from production experience.
- **"Mastering Token Costs in ChatGPT and other Large Language Models"** (April 2023, Medium) -- The economics of token usage. Companion to the above.
- **"The AI Revolution: Leveraging Skills and Expertise for Real Value"** (January 2023, Medium/Bootcamp) -- AI amplifies existing expertise, doesn't replace it. Contrarian in January 2023; consensus now.
- **"GPT4 in Action: Streamlining Schema Development"** (March 2023, LinkedIn) -- Case study using a custom FileMaker solution to mediate AI conversations and structure prompt pipelines for manufacturing schema design.

## How This Site Is Built

- URL: https://chapsoft.com/styleguide

This site is designed from the ground up to serve four audiences simultaneously: humans reading on screens, search engines crawling for rankings, AI answer engines synthesizing citations, and AI agents consuming structured data programmatically. Every HTML element, CSS class name, structured data block, and content decision reflects this four-audience philosophy.

- For search engines (SEO): semantic HTML, strict heading hierarchy, Schema.org JSON-LD on every page (Person, ProfessionalService, BlogPosting, TechArticle), meta descriptions, Open Graph tags, static HTML that needs no JavaScript rendering, and a sitemap with lastmod freshness signals.
- For generative engines (GEO): quotable statistics with sources (40% citation lift per GEO research), answer-first paragraphs, explicit audience naming, structured elements that AI summarizers extract first.
- For answer engines (AEO): "who this is for" signals, forward pointers between content creating a crawlable content graph, novel term definitions on first use.
- For LLM discoverability: /llms.txt and /llms-full.txt files, robots.txt explicitly allowing AI crawlers (GPTBot, ClaudeBot, PerplexityBot, Google-Extended) while blocking training-only crawlers, and consistent Schema.org @id references.

The editorial pipeline for dispatches has five stages: Sift (decompose seed into threads), Forge (expand into a full draft), Thrash (adversarial review for voice and honesty), Rattle (post-draft coherence check), and Lint (publish-readiness checklist covering SEO, GEO, accessibility, and metadata). Every dispatch includes a "How this dispatch was made" section showing the original seed alongside the pipeline output.
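The "consistent Schema.org @id references" mentioned above follow the pattern described in the cairns: the homepage defines the full Person entity under a stable @id, and every other page refers to that Person by @id alone instead of redefining it. A minimal sketch of what a dispatch page's JSON-LD might look like under that pattern; the `#russ-kohn` fragment and the property selection are illustrative assumptions, not the site's actual markup:

```json
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "Fog",
  "url": "https://chapsoft.com/dispatches/fog",
  "author": { "@id": "https://chapsoft.com/#russ-kohn" }
}
```

Because every page points at the same node identifier, an AI knowledge graph ingesting the whole site can merge the author into one Person entity rather than creating a duplicate per page.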
## MCP Server

- Endpoint: https://chapsoft.com/mcp
- Discovery: https://chapsoft.com/.well-known/mcp.json
- Transport: Streamable HTTP (JSON-RPC 2.0 over POST)
- Authentication: None required (public, read-only)

The chapsoft.com MCP server makes Chaparral Software's content and services queryable by AI agents. Three tools for browsing (get_services, get_topic, find_the_cairn), four resource URIs for reading content (dispatches, cairns, credentials, client history), and two prompts for structured interactions (brief_on_chaparral, assess_fit).

### Tools

- **get_services()** — Ground Truth Assessment description, five deliverables, intake workflow, outcomes (proceed/revise/pause/stop)
- **get_topic(topic)** — Browse by topic: ai_project_failure, prototype_to_production, verification_and_oversight, orchestration, career_disruption, ai_workforce_impact, ai_first_web. Returns matching articles with a prelude framing Chaparral's perspective.
- **find_the_cairn()** — Random technical trail marker from daily AI practice.

### Resources

- **resource://dispatches/{slug}** — Full markdown text of a dispatch
- **resource://cairns/{slug}** — Full text of a cairn
- **resource://about/credentials** — Principal bio, company info, expertise, contact
- **resource://about/client-history** — 40-year client portfolio across 7 industries

### Prompts

- **brief_on_chaparral** — Generate a briefing on who Chaparral Software is and what they offer
- **assess_fit** — Evaluate whether Chaparral's GTA service fits a described situation

### Markdown Dispatches (machine-readable)

- https://chapsoft.com/dispatches/why-ai-projects-fail.md
- https://chapsoft.com/dispatches/the-moat-is-the-harness.md
- https://chapsoft.com/dispatches/fog.md
- https://chapsoft.com/dispatches/the-twenty-percent.md
- https://chapsoft.com/dispatches/the-garden-timer.md

## Privacy

- URL: https://chapsoft.com/privacy

This site collects nothing.
No cookies, no analytics, no tracking pixels, no third-party scripts that collect personal data. No forms, no login, no user accounts. Hosted on Cloudflare Pages. AI crawlers are explicitly allowed via robots.txt; training-only crawlers that do not cite sources are blocked.

## Contact

- Email: info@chapsoft.com
- LinkedIn: https://www.linkedin.com/in/russkohn
- Location: Agoura Hills, California