Blog

How decoupling the intelligence layer from the state machine creates a hallucination-free, zero-trust enterprise architecture.

Monday, March 9, 2026
The Carbon LLM: Why Our AI Architecture Passes the "Fax Machine Test"

The AI industry is currently trapped in a maze of its own making.

Silicon Valley is pouring billions of dollars into complex "Multi-Agent Orchestration" frameworks to babysit omnipotent, "God-Mode" AI agents that have a terrifying tendency to hallucinate and break enterprise databases. They are treating AI as if it is the application itself, rather than just another user interacting with the application.

At Code On Time, we took a different approach. We are a legacy Rapid Application Development (RAD) platform with over a decade of experience building the boring-but-critical plumbing of enterprise software: Role-Based Access Control (RBAC), Declarative Security, Web Application Firewalls, and database referential integrity.

It turns out that having this battle-tested IP is like owning a perfectly weighted hammer in a highly competitive field, only to discover that with minor adjustments, your specific hammer is actually perfect for brain surgery. By simply adding a deterministic State Machine and an asynchronous Heartbeat to our core framework, we didn't build a fragile AI wrapper. We invited the AI inside our existing fortress to act as a standard user—a Digital Co-Worker serving as the exact alter-ego of a specific human employee.

It executes the employee's prompt by navigating the application's REST Level 3 API (HATEOAS), which is strictly projected to that specific user's identity.

Mechanically, the built-in state machine passes the LLM the prompt's original goal (as the first item in the state_to_keep array) along with the current resource. It asks the LLM to pick the next hypermedia link that brings the goal closer to resolution, specify the reason "why," provide any optional payload for the link, and dictate what new information must be appended to the state_to_keep ledger. The state machine then physically fetches the links, updates the array, and manages the execution over multiple iterations until the goal is achieved.
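The loop above can be sketched in a few lines. This is a minimal illustration, not the engine's actual API: llm_pick() is a deterministic stub standing in for the model call, and the link shapes are assumptions.

```python
# Minimal sketch of one goal resolution. llm_pick() stands in for the LLM,
# which would weigh the goal ledger against every offered hypermedia link.
def llm_pick(state_to_keep, resource):
    name = next(iter(resource["_links"]))      # "choose" a permitted link
    return {"next_link": name,
            "next_link_reason": "moves the goal closer to resolution",
            "payload": None,
            "state_to_keep": ["selected " + name]}

def iterate(goal, resource, fetch, max_turns=10):
    state_to_keep = [goal]                     # the goal is always item one
    for _ in range(max_turns):
        decision = llm_pick(state_to_keep, resource)
        state_to_keep += decision["state_to_keep"]   # append-only ledger
        link = resource["_links"][decision["next_link"]]
        if link.get("terminal"):               # goal resolved: stop iterating
            return state_to_keep
        # the state machine, not the LLM, performs the physical fetch
        resource = fetch(link, decision["payload"])
    return state_to_keep
```

Note that the intelligence only ever chooses among links it was handed; all I/O and state bookkeeping stay inside the deterministic loop.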

To prove that our architecture is structurally sound and ready for enterprise data, we run it through a mental exercise we call The Fax Machine Test.


The GPU2027 Bug and the "Carbon LLM"

Imagine a scenario where the "GPU2027 bug" takes down every AI data center on earth, or the major AI providers suddenly raise their API prices by 10,000%. We are forced to fall back on a "Carbon LLM"—a warehouse full of human clerks sitting at desks with fax machines.

If your AI application relies on a SOTA (State of the Art) model to hold the context, execute the business logic, and orchestrate the workflow, your software ceases to exist the moment the servers go down.

Here is exactly what happens to the Digital Co-Worker running on our Axiom Engine: Absolutely nothing breaks. The state machine simply routes the iterations to the warehouse instead of an LLM. Here is how the asynchronous workflow plays out physically, step-by-step:

  1. The Goal Ledger (state_to_keep): The Axiom Engine's heartbeat wakes up, processes the current goal, and prints a fax cover page. This page contains the state_to_keep array—a highly disciplined working memory ledger outlining the original objective and the exact steps taken so far.
  2. The Multiple-Choice Reality (HATEOAS): Following the cover page is the REST Level 3 HATEOAS resource. It shows the current state of the database record and a strict list of hypermedia links (controls). Crucially, it only includes the deterministic controls that this specific user's identity is authorized to click. Options that are illegal simply are not rendered.
  3. Entering the "Infer" State: The server prints the state_to_keep and the HATEOAS resource onto paper. The Axiom Engine marks this prompt iteration as "infer" (signaling that it is waiting for an intelligence to process the data and make a decision) and drops it from the server's heartbeat. While it waits for the fax to be returned, the system consumes zero active compute resources.
  4. Carbon Inference: The fax machine transmits the paper to the "Carbon LLM" (the warehouse clerk pool). A human clerk reviews the goal ledger, analyzes the HATEOAS resource, and circles the most logical link to move toward the goal. They write their next_link_reason on the page and fax it back.
  5. State Machine Resumption: The server receives the return fax. The human operator re-enters the clerk's written reason and specifies the selected link. This instantly updates the prompt iteration status from "infer" back to "next." The heartbeat instantly picks it back up, executes the deterministic transaction, and completes the loop.

The Infinite Context Window (Why We Don't Fax the Chat History)

Any AI engineer reading this is likely laughing. They are thinking: "If a user has been chatting with this AI for six months, are you going to fax a 10,000-page transcript of the conversation every time they ask a question?"

No. That is the "Context Window Trap" that is currently bleeding enterprise AI budgets dry. Most AI wrappers use a Transcript Model—they send the entire chat history to the LLM on every turn, requiring massive, expensive context windows.

Our architecture uses a State Model.

When a human user sends a new prompt to their Digital Co-Worker, the state machine does not append it to a massive chat log. Instead, it begins iterating the new prompt directly against the last HATEOAS resource produced by the previous prompt.

The "memory" of the conversation does not live in a bloated chat transcript; it lives in the physical state of the database. Because the LLM only ever receives the concise state_to_keep ledger and the current reality of the database record, the context window remains infinitely small and lightning-fast. The fax is never more than a few pages long, yet it creates the illusion of an infinite, multi-year conversation context.
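The contrast can be made concrete. In this hedged sketch (function names and payload shapes are illustrative), the transcript-style prompt grows with every turn, while the state-style prompt depends only on the ledger and the current resource:

```python
import json

# Transcript Model: the entire history rides along on every turn.
def transcript_prompt(history, new_msg):
    return "\n".join(history + [new_msg])          # grows without bound

# State Model: only the ledger and the current reality of the record.
def state_prompt(state_to_keep, current_resource, new_msg):
    return json.dumps({"ledger": state_to_keep,    # concise working memory
                       "resource": current_resource,
                       "prompt": new_msg})
```

After a thousand turns the transcript prompt is tens of thousands of characters, while the state prompt is still "a few pages of fax" regardless of how long the conversation has run.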

Why the Fax Machine Test Matters

If an AI system cannot pass this test without requiring massive code rewrites, it is fundamentally brittle. By proving that our system works flawlessly with a human and a fax machine, we guarantee three critical advantages for the enterprise:

  • Commoditized Intelligence: The LLM is not the "brain" of your application; it is simply a highly efficient processing unit. If models change, you swap the API key. The database rules, the security perimeter, and the workflow remain completely untouched.
  • The Death of "God Mode" Agents: Because the HATEOAS resource physically removes links that the user's alter-ego doesn't have permissions for, the Carbon LLM (or the Silicon LLM) cannot hallucinate a destructive action. The options simply aren't on the paper.
  • Native Asynchronous Reality: Enterprise work is slow. It requires approvals, context switching, and delays. Our architecture doesn't care if the inference takes 50 milliseconds from an AI or 15 minutes from a fax machine. The state machine handles both with zero idle compute waste.

The Invisible UI

Every database application created with Code On Time is already a multi-user system powered by a Visible UI (HTML). We realized that by replacing the HTML with Hypermedia JSON, we instantly created a multi-agent system powered by an Invisible UI.

You don't need to invent new, complex orchestration layers to keep AI in check. You don't need agents talking to other agents. You just need to hand the AI a standard user interface formatted for its eyes, and let your legacy database enforce the ACID rules just like it has for decades.
Labels: AI, HATEOAS
Monday, March 2, 2026
Context Noise: The Silent Killer of Enterprise AI (And How Identity Projection Solves It)

The enterprise AI industry is currently obsessed with "God Mode" orchestration. The prevailing strategy for building autonomous agents relies on decoupled middleware—API Gateways, LangGraph orchestrators, and centralized hubs like Semantic Kernel.

The architecture usually looks like this: Give a massive Large Language Model (LLM) a static OpenAPI schema of your entire enterprise, connect it via a highly privileged service account, and then rely on external firewalls to tackle the agent when it tries to do something it shouldn't.

This approach creates a massive, expensive problem that is suffocating State-of-the-Art (SOTA) agentic workflows: Context Noise.

When an LLM is forced to act as its own compliance officer, it burns compute and context tokens fighting the enterprise infrastructure rather than solving the user's problem.

At Code On Time, we engineered the Digital Co-Worker to reject this paradigm. By leveraging Identity Projection and a Federated Mesh, we eliminate context noise before the prompt is even processed. Here is how we move enterprise AI from probabilistic guessing to deterministic physics.


The "Guess My Permissions" Loop

To understand the cure, you must understand the disease. In a decoupled middleware architecture, the LLM is handed a universal instruction manual (the schema) that is completely divorced from the user's actual identity.

Imagine an entry-level warehouse worker asking an AI agent to check inventory. Because the agent's schema is static, it knows the Get_Executive_Financial_Summary tool exists.

  1. The Attempt: The agent formulates a plan and decides the output of a financial tool would provide useful context. It attempts to call the API.
  2. The Catch: The AI Gateway or middleware intercepts the call, checks the user's OAuth token, realizes the warehouse worker lacks clearance, and returns an HTTP 403 Forbidden error.
  3. The Noise: The LLM now has to spend its next cognitive cycle reading the error, apologizing, hallucinating a workaround, or trying a slightly different forbidden tool.

The context window rapidly inflates with failed attempts, error codes, and recovery reasoning. The agent is trapped in a high-latency game of trial-and-error, trying to discover its own boundaries.

The noise is even worse in Retrieval-Augmented Generation (RAG) setups. If a vector search pulls up a confidential document, middleware must retroactively redact the sensitive data before feeding it to the LLM. The agent receives a disjointed, fragmented text block full of [REDACTED] tags, forcing it to burn tokens trying to piece together a coherent thought.

The Antidote: Two-Tiered Noise Reduction

The distinction between catching an unauthorized action and eliminating the context noise is the defining architectural gap in enterprise AI right now.

The Digital Co-Worker solves this by wrapping the live database and projecting a dynamic REST Level 3 Hypermedia (HATEOAS) API based strictly on the user's identity. This creates a zero-noise environment across two distinct tiers.

Tier 1: Macro-Scope Reduction (The Federated Mesh)

Centralized enterprise hubs usually operate as an "Omni-Agent." When a user opens the chat, the agent must load the schemas for the entire enterprise—Warehouse, HR, IT Helpdesk, and Accounting—just to understand what tools are available. This creates immediate, overwhelming token noise.

We apply Domain-Driven Design (DDD) directly to LLM context management via the Federated Mesh.

By starting the prompt inside a specific domain app (e.g., the Warehouse app built with Code On Time), the initial HATEOAS payload is radically constrained. The LLM is only aware of warehouse-related entities. The agent does not spend cognitive effort distinguishing between a "Warehouse Inventory Record" and an "IT Asset Inventory Record," because the IT domain simply does not exist in its current reality.

Tier 2: Micro-Scope Reduction (Identity Projection)

Once inside the domain, the application dynamically filters the HATEOAS API based on the exact OAuth 2.0 token of the user making the request.

If a janitor and the CEO look at the exact same invoice record, the Digital Co-Worker projects two entirely different realities. If the janitor doesn't have the authority to delete the invoice, the delete link is physically omitted from the JSON payload.

The LLM never burns a single token deciding if it should open a forbidden door, because the door does not exist in its reality. The context window remains pristine, containing only what is mathematically possible and legally authorized for that specific user at that exact microsecond.
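Identity Projection can be sketched as a filter that runs before serialization. The permission names and link shapes below are illustrative assumptions, not the product's schema:

```python
# Links are checked against the caller's permissions BEFORE the payload is
# built, so unauthorized actions never reach the LLM's context window.
ALL_LINKS = {
    "view":   {"href": "/invoices/42", "requires": "invoice.read"},
    "edit":   {"href": "/invoices/42", "requires": "invoice.write"},
    "delete": {"href": "/invoices/42", "requires": "invoice.delete"},
}

def project(record, permissions):
    return {**record,
            "_links": {name: {"href": link["href"]}
                       for name, link in ALL_LINKS.items()
                       if link["requires"] in permissions}}
```

Two callers looking at the same record receive two different payloads; the forbidden link is not rejected at call time, it is simply never rendered.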

The "Domain Hop": Transparent Traversal

Enterprise processes rarely live in silos. A typical Return Merchandise Authorization (RMA) process might require hopping from Customer Service, to the Warehouse, to Accounting.

In decoupled middleware, crossing domains requires heavy orchestration: pausing the agent, negotiating a new token exchange, and injecting a massive new schema.

With the Digital Co-Worker, the "hop" is natively embedded in the web's foundational physics:

  1. The Bridge: While inspecting a shipment in the Warehouse domain, the state machine encounters a hypermedia link to the Accounting domain.
  2. The Traverse: The agent executes a standard loopback HTTP GET request against that URL.
  3. The Silent Handshake: Because the apps in the Federated Mesh share an in-house OAuth 2.0 Identity Provider (Entra ID, Google, Okta), the request arrives with a token the Accounting Web Application Firewall (WAF) already trusts.
  4. The Landing: The Accounting app evaluates the token, dynamically projects the permitted data (micro-scope reduction), and returns the pristine new state.

The LLM never orchestrates an authentication flow. It never reasons about network topography. It simply follows a link from one bounded context to another.
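A domain hop can be sketched as follows. The two apps are modeled as in-process handlers purely for illustration; in production each would be a separate web app behind its own WAF, and the token would be a real OAuth 2.0 access token from the shared identity provider:

```python
# Hedged sketch: "warehouse" and "accounting" stand in for two apps in the
# Federated Mesh that trust the same identity provider.
def warehouse_app(path, token):
    return {"shipment": "S-7",
            "_links": {"invoice": {"href": "accounting:/invoices/7"}}}

def accounting_app(path, token):
    if "accounting" not in token["audiences"]:   # WAF-style token check
        return {"error": 403}
    return {"invoice": 7, "_links": {}}          # identity-projected state

APPS = {"warehouse": warehouse_app, "accounting": accounting_app}

def follow(href, token):
    domain, path = href.split(":", 1)    # e.g. "accounting:/invoices/7"
    return APPS[domain](path, token)     # same token travels; no re-auth
```

The agent's only contribution is choosing to follow the link; the handshake and the projection on the far side are invisible to it.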

Stop Building Brains. Build Better Highways.

The industry is wasting millions trying to build "Smart Drivers" (custom LLMs with massive rulebooks) that inevitably crash on messy enterprise dirt roads.

The Digital Co-Worker provides a "Smart Highway." By offloading identity enforcement and state reduction to the HTTP and API layers, the LLM is freed from acting as a security guard. Its entire token budget is dedicated purely to achieving the user's goal.

You don't need a magical AGENTS.md file to keep your AI on track. You just need solid engineering.
Labels: AI, HATEOAS
Wednesday, February 4, 2026
Is It Crazy to Give a Soap Dispenser an AI Concierge?

At first glance, the idea sounds like the punchline to a bad Silicon Valley joke.

“We put an LLM in the restroom soap dispenser.”

It evokes images of over-engineered, $500 smart-home gadgets that require firmware updates just to wash your hands. But at Code On Time, we argue that giving a soap dispenser its own AI Concierge isn't a gimmick. In fact, it is the logical endgame of the Inverted AI architecture—and it might be the smartest operational investment a business can make.

Here is why "The Janitor-Bot" is the future of enterprise automation.


The Problem: Paying Humans to "Walk"

In Facility Management, the biggest line item on the P&L is labor. But what are you actually paying for?

If a janitor is paid $20/hour to patrol a building and check 50 restrooms, and only 5 of them actually need supplies, you have effectively paid for 90% waste. You paid a human to walk past 45 perfectly full dispensers just to verify they were full.

Traditional IoT (Internet of Things) tried to solve this with "Dashboards." The sensor sends data to a screen. A red light blinks. But dashboards are passive. If the manager is busy, the red light is ignored, the soap runs out, and the customer experience fails.

The Solution: Asset Agency

The Inverted AI approach changes the equation. We don't want the soap dispenser to just log data. We want it to act.

By utilizing the Device Authorization Flow built into every Code On Time application, we can treat a $10 microcontroller (like a Raspberry Pi Pico) as a secure user.

  1. The Janitor logs into the app on their phone.
  2. They generate a code and enter it into the dispenser.
  3. The dispenser is now authenticated as a "Digital Co-Worker" acting on behalf of that janitor.
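The pairing described in these steps can be sketched as a code exchange. This is an assumption-laden illustration (endpoint logic, token shapes, and single-use semantics are mine, not the product's); it only shows how a short code generated by a signed-in user lets a device obtain a token that acts on that user's behalf:

```python
import secrets

# Illustrative pairing store: code -> user, token -> device identity.
CODES, TOKENS = {}, {}

def generate_code(user):                 # step 2: signed-in user, phone app
    code = secrets.token_hex(3)
    CODES[code] = user
    return code

def redeem_code(code, device_id):        # step 3: device exchanges the code
    user = CODES.pop(code, None)         # single use: pop, don't get
    if user is None:
        return None
    token = secrets.token_hex(16)
    TOKENS[token] = {"acting_for": user, "device": device_id}
    return token
```

From then on the device presents its token on every request, and the server treats it exactly like the human it acts for.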

Now, the device doesn't just update a database row. It sends a Prompt to the AI Concierge via the universal “chat” endpoint.

While we might imagine the device chatting in plain English, the actual communication is far more structured. To ensure precision, the microcontroller transmits a structured JSON payload representing the "ground truth" of its physical state:

JSON
{
  "device_id": "Restroom-94-Dispenser",
  "sensor_value": 0.10,
  "battery_level": "good",
  "timestamp": "2026-02-04T14:30:00Z"
}

The AI Concierge ingests this raw data, combines it with the server-side context (e.g., “It’s only 2 PM and usage is already 90%”), and translates the structured signal into a clear intent:

Device Prompt: “Soap level is at 10%. Usage rate is high today.”

AI Concierge (Server-Side): “Acknowledged. Checking inventory… Stock is available in Closet B. Creating Task #902 for Janitor Steve: ‘Refill Restroom 4 immediately.’ Sending SMS alert.”

The dispenser didn't just flash a light. It managed the workflow. It checked inventory, assigned a task, and notified the human—all without a manager getting involved.
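The server-side step above can be sketched as deterministic workflow around the device's signal. The inventory lookup, task queue, and alert sink here are in-memory stand-ins, and the thresholds and names are illustrative:

```python
# Hedged sketch of the concierge's workflow step: translate a structured
# device signal into a task and an alert, with no manager in the loop.
def handle_device_prompt(signal, inventory, tasks, alerts):
    if signal["sensor_value"] > 0.10:        # e.g. "Soap level is at 10%"
        return None                          # nothing to do yet
    closet = inventory.get(signal["device_id"])
    if closet is None:
        alerts.append("SMS: no stock on site for " + signal["device_id"])
        return None
    task = {"id": 902 + len(tasks),          # e.g. "Task #902"
            "action": "Refill " + signal["device_id"] + " immediately",
            "pick_from": closet}
    tasks.append(task)                       # assign the work
    alerts.append("SMS: " + task["action"])  # notify the human
    return task
```

Everything the LLM contributed was the translation of raw signal into intent; the inventory check, task creation, and notification remain ordinary, auditable business logic.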

The Economics of "Inverted AI"

Why is this feasible now? Because we have flipped the architecture.

In Anthropic's exciting "Computer Use" vision of AI, you need a massive model to visually "look" at screens. That is too expensive for a janitor, let alone a bathroom appliance.

But in the Code On Time Inverted AI:

  • The Hardware is Dumb: A $14 sensor.
  • The Interface is Cheap: Tiny JSON text packets.
  • The Intelligence is Centralized: The "Brain" lives in your Code On Time app, running on a scalable server (and soon, .NET Core containers).

Because the marginal cost of processing a text prompt is a fraction of a penny, a new economic reality emerges: Intelligence is now cheap enough to deploy everywhere.

The "Sovereign" Thing

This architecture turns your physical assets into Sovereign Agents.

  • They carry an Identity (via OAuth).
  • They respect Permissions (they can't order supplies if they aren't allowed).
  • They speak Natural Language (or at least, structured prompts that the LLM understands).

When a soap dispenser can truthfully report its state, authenticate securely, and trigger business logic, it stops being a piece of plastic. It becomes a member of the workforce—one that never sleeps, never forgets, and costs almost nothing to employ.

The Verdict

So, is it crazy to give a soap dispenser an AI Concierge?

If you think AI is only for writing poems or code, then yes, it's crazy.

But if you see AI as the ultimate Friction Remover for business operations, it is inevitable.

We are building a future where your business data, your human employees, and your physical infrastructure all inhabit the same secure, conversational mesh. And it starts with a simple prompt: "I need a refill."

Labels: AI, HATEOAS, OAuth2, RESTful