How "Context Contamination" causes catastrophic hallucinations, and why the "Boring" Micro-Ontology is the safest path for Enterprise AI.

Friday, November 14, 2025
The "Mercury" Incident: Why the Global AI Brain is Dangerous

Imagine the scene. A top Sales Director at a major industrial firm opens their AI assistant. They are looking for a critical status update on a potential new client, "Mercury Logistics."

The Director types: "What is the current status of the Mercury Lead?"

The AI pauses for a moment, its "thinking" animation spinning, and then replies with supreme confidence:

"The Mercury Lead is currently unstable and highly toxic. Safety protocols indicate a high risk of contamination during the negotiation phase. Recommend immediate containment protocols."

The Sales Director stares at the screen in horror. Did the AI just tell them to treat a high-value client like a biohazard?

What happened?

The AI didn't break. It did exactly what it was designed to do. It acted as a "Global Brain," searching the company's entire centralized Data Lake for the keywords "Mercury" and "Lead."

The problem was that the company also has a Manufacturing Division that uses the chemical elements Mercury (Hg) and Lead (Pb) in production testing. The AI, lacking context, conflated a "Sales Lead" with a "Heavy Metal," resulting in a catastrophic hallucination.

This is the "Mercury" Incident—a perfect example of why the industry's obsession with monolithic, all-knowing AI systems is a dangerous dead end for the enterprise.

The Problem with the "Genius" Model (The Global Ontology)

The current trend in enterprise AI is to build a "Genius." The promise is seductive: "Dump all your data—from Salesforce, SAP, Jira, and SharePoint—into one massive Vector Database or Data Lake. The AI will figure it out."

This creates a Global Ontology—a unified, but deeply confused, view of the world.

The Failure Mode: Semantic Ambiguity

The root cause of the "Mercury" Incident is Semantic Ambiguity. In a global context, words lose their meaning.

  • In Sales, "Lead" means a potential customer.
  • In Manufacturing, "Lead" means a toxic metal.
  • In HR, "Lead" means a team manager.

When you force an AI to reason over all of these simultaneously, you are inviting disaster. The AI has to guess which definition applies based on subtle clues in your prompt. If it guesses wrong, it hallucinates.

The Hidden Cost: Token Bloat

To fix this, developers have to engage in "Prompt Engineering," feeding the model thousands of words of instructions: "You are a Sales Assistant. When I say 'Lead', I mean a customer, NOT a metal. Ignore data from the Manufacturing database..."

This is expensive. Every time you send that massive instruction block, you are paying for thousands of tokens, slowing down the response, and praying the model doesn't get confused anyway.
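
To make the hidden cost concrete, here is a minimal sketch of the "please don't get confused" preamble that a Global Brain forces you to pay for on every request. The prompt text, the helper, and the rough four-characters-per-token estimate are illustrative assumptions, not a billing formula.

  // A minimal sketch of "Token Bloat". The preamble text and the
  // four-characters-per-token estimate are illustrative assumptions.
  const GLOBAL_BRAIN_PREAMBLE = `
    You are a Sales Assistant. When the user says "Lead", they mean a
    potential customer, NOT the metal Pb. When the user says "Mercury",
    they mean the client "Mercury Logistics", NOT the element Hg.
    Ignore all rows that originate in the Manufacturing database.
    ...hundreds more lines of disambiguation rules...
  `;

  // Rough rule of thumb: ~4 characters per token for English text.
  const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

  // The preamble is re-sent with EVERY prompt, so you pay for it again
  // and again, on top of the user's actual question.
  const userPrompt = "What is the current status of the Mercury Lead?";
  console.log(estimateTokens(GLOBAL_BRAIN_PREAMBLE) + estimateTokens(userPrompt));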

The Solution: The "Employee" Model (The Micro-Ontology)

There is a better way. It’s boring, it’s safe, and it mimics how human organizations actually work.

When you walk into a Hospital, you don't ask the receptionist for a pizza quote. You know by the context of the building that you are there for medical issues.

Code On Time applies this same logic to AI through the concept of the Digital Co-Worker and the Micro-Ontology.

Standing in the Right Room

Instead of a single "Global Brain," Code On Time builds a Society of Apps.

  • You have a CRM App.
  • You have a Manufacturing App.
  • You have an HR App.

Each app defines its own universe through a Micro-Ontology, delivered automatically via its HATEOAS API.

Crucially, this isn't a cryptic technical schema. The API entry point faithfully reproduces the Navigation Menu of the visible UI, complete with the same human-friendly labels and tooltips. This places the Co-Worker on the exact same footing as the human user.

Because the AI reads the exact same map as the human, it doesn't need to be "trained" on how to use the app. It just looks at the menu and follows "Sales Leads" because the tooltip says "Manage potential customers."
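
As an illustration, here is roughly what such an entry point could look like for the CRM app. The field names and URLs below are hypothetical, not the literal response format, but the principle is the same: the navigation menu is the ontology.

  // Hypothetical shape of a Micro-Ontology entry point for the CRM app.
  // Field names and URLs are illustrative; the real response format may
  // differ, but the menu itself is the map the Co-Worker reads.
  const crmEntryPoint = {
    _links: {
      self:        { href: "/api/crm/" },
      sales_leads: { href: "/api/crm/leads", title: "Sales Leads",
                     hints: "Manage potential customers" },
      accounts:    { href: "/api/crm/accounts", title: "Accounts",
                     hints: "Companies you do business with" },
      // Note what is NOT here: no Safety Data Sheets, no production lots,
      // no Manufacturing resources of any kind. In this world, "Lead"
      // can only mean one thing.
    },
  };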

The Mercury Incident: Solved

Let's replay the scenario with a Code On Time Digital Co-Worker.

Scenario A: The User is in the CRM App. The user logs into the CRM. The Digital Co-Worker inherits their context. The "Manufacturing" database literally does not exist in this world.

The Prompt: “What is the current status of the Mercury Lead?”

The Action: The Co-Worker queries the only "Lead" table it can see—the Sales Leads table. There is zero ambiguity.

The Outcome:

"The Lead 'Mercury Logistics' is in the 'Proposal' stage. The closing probability is 60%."

Scenario B: The User is in the Manufacturing App. The user logs into the production floor system.

The Prompt: “What is the current status of the Mercury Lead?”

The Action: The Co-Worker queries the Safety Data Sheets.

The Outcome:

"Warning: Detected 'Lead' and 'Mercury' contamination in Lot #404. Status: Quarantine."

By restricting the context to the domain of the application, this hallucination becomes architecturally impossible. The Co-Worker cannot conflate data it cannot see.
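
A sketch of Scenario A as an API exchange, assuming a hypothetical endpoint and filter syntax; the exact URL and query format are placeholders.

  // Scenario A as an API exchange. The endpoint and filter syntax are
  // hypothetical; the point is that only one "Lead" collection exists here.
  const response = await fetch(
    "/api/crm/leads?filter=CompanyName eq 'Mercury Logistics'",
    { headers: { Authorization: "Bearer <user-access-token>" } });
  const page = await response.json();
  // page.items[0] might look like:
  //   { LeadId: 17, CompanyName: "Mercury Logistics",
  //     Stage: "Proposal", ClosingProbability: 0.6 }
  // The Co-Worker summarizes this row; there is no chemical "Lead (Pb)"
  // record anywhere in its world to confuse it with.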

The Best of Both Worlds: Federated Scalability

But what if you need data from both systems?

This is where Federated Identity Management (FIM) comes in. It acts as the trusted hallway between your apps.

If the Sales Director intentionally needs to know if "Mercury Logistics" has any outstanding safety violations that might block the deal, they can explicitly ask the Co-Worker to check.

The Co-Worker, using its FIM passport, "walks down the hall" to the Manufacturing App. It enters that new Micro-Ontology, performs the search in that context, and reports back.

This turns "Accidental Contamination" into "Intentional Discovery." It keeps the boundaries clear while still allowing for cross-domain intelligence.
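
A minimal sketch of that "walk down the hall", assuming a hypothetical exchangeToken helper and illustrative endpoints; the actual FIM/OAuth 2.0 token exchange depends on your identity provider.

  // "Intentional Discovery" across two apps. The exchangeToken helper,
  // endpoints, scopes, and filter syntax are illustrative assumptions.
  declare function exchangeToken(
    crmToken: string, opts: { audience: string }): Promise<string>;

  async function checkSafetyViolations(company: string, crmToken: string) {
    // 1. Trade the CRM session for a token the Manufacturing app trusts.
    const mfgToken = await exchangeToken(crmToken, { audience: "manufacturing-app" });

    // 2. Enter the Manufacturing Micro-Ontology and search in THAT context.
    const res = await fetch(
      `/api/mfg/violations?filter=Supplier eq '${company}'`,
      { headers: { Authorization: `Bearer ${mfgToken}` } });

    // 3. Report back to the Sales conversation. The two worlds only meet
    //    because the user explicitly asked them to.
    return res.json();
  }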

The Verdict: Boring is Safe

The promise of a "Genius AI" that knows everything is a marketing fantasy that leads to expensive, fragile, and dangerous systems.

Enterprises don't need an AI that knows everything. They need an AI that knows where it is.

  • Global Brain: High Cost, High Risk, Unpredictable.
  • Digital Co-Worker: Low Cost, Zero Risk, Deterministic.

By embracing the "boring" architecture of isolated Micro-Ontologies, you don't just save money on tokens. You save yourself from the nightmare of explaining to a client why your AI called them toxic.

Labels: AI, Micro Ontology
Wednesday, November 12, 2025
The Fractal Workflow: How AI Builds and Runs Your App

We have talked about the AI Builder for developers and the Digital Co-Worker for end-users. At first glance, these might seem like two different tools (one for coding, one for business).

But they are actually the same engine, running on the same logic.

At Code On Time, we have built a Fractal Architecture that repeats itself at design time and runtime. The way you build the app is exactly the way your users will use it.

The Developer's Loop (Design Time)

When you sit down with the AI Builder in App Studio, the workflow is clear:

  1. Prompt: You state a goal ("Create a Sales Dashboard").
  2. Plan: The Builder analyzes the metadata and presents a Cheat Sheet (a step-by-step plan of action).
  3. Approval: You review the plan. You are in the director's seat. You click "Apply All".
  4. Execution: The Builder operates the App Explorer (the "Invisible UI" of the Studio) to create pages, controllers, and views.
  5. Result: A new feature exists.

You are never locked into the AI. At any moment, you can take the wheel and work directly with the App Explorer. Because the AI Builder uses the exact same tools and follows the exact same tutorials as a human developer, its work is transparent and editable. You can use the Tutor to learn the ropes or the Builder to speed up the heavy lifting, but the manual controls are always at your fingertips.
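
As an illustration of step 2, a "Cheat Sheet" can be thought of as a plain list of reviewable steps. The structure below is a hypothetical sketch, not the actual App Studio format.

  // A hypothetical shape for a "Cheat Sheet". The structure is
  // illustrative only; the App Studio defines its own plan format.
  const cheatSheet = {
    goal: "Create a Sales Dashboard",
    steps: [
      { action: "Create Page",   target: "Sales Dashboard" },
      { action: "Add Data View", target: "Orders",    view: "chart1" },
      { action: "Add Data View", target: "Customers", view: "grid1" },
      { action: "Set Access",    roles: ["Sales"] },
    ],
    // Nothing runs until you click "Apply All"; each step maps to an
    // App Explorer operation you could also perform by hand.
  };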

The User's Loop (Runtime)

When your user sits down with their Digital Co-Worker, the workflow is identical:

  1. Prompt: They state a goal ("Approve all pending orders").
  2. Plan: The Co-Worker analyzes the HATEOAS API (the metadata) and formulates a sequence of actions.
  3. Approval: The Co-Worker presents an Interactive Link or summary. "I found 5 orders. Please review and approve."
  4. Execution: The user clicks "Approve," or the Co-Worker executes the API calls directly if permitted.
  5. Result: The business process advances.

End-users have the same flexibility. They can interact with the standard rich user interface, or you can build a custom front-end powered by the HATEOAS API. The Co-Worker prompt is available everywhere: docked inside the app for context, or switched to fullscreen mode for a pure, chat-first experience. You can even configure the app to be 'Headless,' where users interact exclusively via the prompt, or remotely via Email and SMS using secure Device Authorization.
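
A compact sketch of the runtime loop above, assuming hypothetical endpoints, link relations, and field names.

  // The runtime loop, sketched against a hypothetical HATEOAS API.
  // Endpoints, link relations, and field names are assumptions.
  async function getJson(url: string, token: string) {
    const res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
    return res.json();
  }

  async function coWorkerLoop(token: string) {
    // 1. Prompt: "Approve all pending orders."
    // 2. Plan: read the entry point and locate the relevant collection.
    const root = await getJson("/api/sales/", token);
    const orders = await getJson(
      root._links.orders.href + "?filter=Status eq 'Pending'", token);

    // 3. Approval: summarize and wait for the human to confirm.
    console.log(`I found ${orders.items.length} orders. Please review and approve.`);

    // 4. Execution (after approval): follow the "approve" link on each row.
    //    A row without an "approve" link simply cannot be approved.
    for (const order of orders.items) {
      if (order._links?.approve) {
        await fetch(order._links.approve.href, {
          method: "POST", headers: { Authorization: `Bearer ${token}` } });
      }
    }
    // 5. Result: the business process advances.
  }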

The Field Worker's Loop (Connection-Independent)

The fractal pattern extends to the very edge of the network. When your Field Workers operate in isolation, they aren't just viewing static pages; they are interacting with a complete, local instance of the application logic.

The Setup (Offline Sync): Before the loop begins, the Offline Sync component performs the heavy lifting. Upon login, it analyzes the pages marked as "Offline" and downloads their dependencies. It fetches the JSON Metadata (the compiled definitions of your Controllers) and the Data Rows (Suppliers, Products, Categories), storing them in the device's IndexedDB.

The Runtime Loop:

  1. Prompt: The user taps "New Supplier".
  2. Plan: The Touch UI framework detects the offline context. Instead of calling the server, it activates the Offline Data Processor (ODP). The ODP consults the Local Metadata (the cached JSON controller definition) to understand the form structure.
  3. Approval: The ODP generates the UI instantly. It alters the standard behavior to fit the local context: unlike online forms which require a server round-trip to establish IDs, the ODP renders the createForm1 view with the Products DataView immediately visible.
  4. Execution: The user enters the supplier name and adds five products to the child grid. The ODP simulates these operations in memory, enforcing integrity by validating the entire "Master + 5 Details" graph as a single unit before allowing the save.
  5. Result: The ODP bundles the master record and child items into a single Transaction Log sequence. It then updates the local state registry (OfflineSync.json) and persists the new data files to IndexedDB. This "checkpoint" ensures that even if the device loses power, the pending work is safe until the user taps Synchronize.

This proves that "Offline" is not just a storage feature; it is a full-fidelity Transactional Workflow powered by the exact same metadata that drives your AI and your Web UI.
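
To make the "checkpoint" idea tangible, here is a minimal browser-side sketch using IndexedDB directly. The database name, store name, and record shape are assumptions; the Offline Data Processor uses its own internal format.

  // A sketch of the offline "checkpoint" idea with raw IndexedDB. The
  // database name, store name, and record shape are assumptions.
  const open = indexedDB.open("offline-demo", 1);
  open.onupgradeneeded = () => {
    open.result.createObjectStore("transactionLog", { autoIncrement: true });
  };
  open.onsuccess = () => {
    const db = open.result;
    const tx = db.transaction("transactionLog", "readwrite");
    // The master record and its child rows are persisted as ONE log
    // entry, so the pending work survives a power loss intact.
    tx.objectStore("transactionLog").add({
      controller: "Suppliers",
      action: "Insert",
      master: { SupplierName: "Acme Fasteners" },
      details: [{ ProductName: "M3 Bolt" } /* ...four more products... */],
      createdAt: new Date().toISOString(),
    });
    tx.oncomplete = () => db.close();
  };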

The Creator's Loop (Runtime Build)

The fractal pattern goes one step deeper. In the Digital Workforce, the line between "User" and "Developer" blurs.

With Dynamic Data Collection, your business users can define new data structures (Surveys, Audits, Inspections) directly inside the running application, using the same logic you used to build it.

  1. Prompt: The user tells the Co-Worker: "Create a daily fire safety checklist for the warehouse."
  2. Plan: The Co-Worker (acting as a runtime Builder) generates the JSON definition for the survey, effectively "coding" a new form on the fly.
  3. Approval: The user reviews the structure in the Runtime App Explorer - a simplified version of the tool you use in App Studio.
  4. Execution: The definition is saved to the database (not code), instantly deploying the new form to thousands of offline users.
  5. Result: A new business process is materialized without a software deployment.

This proves that the Axiom Engine isn't just a developer tool; it is a ubiquitous creation engine available to everyone in your organization.
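
As a sketch of step 2, the generated definition might look something like the following. The schema is purely illustrative; the actual runtime format is defined by the platform.

  // A hypothetical definition for the "daily fire safety checklist".
  // The schema shown here is illustrative only.
  const fireSafetyChecklist = {
    name: "Daily Fire Safety Checklist",
    scope: "Warehouse",
    recurrence: "Daily",
    fields: [
      { name: "ExtinguishersCharged", type: "boolean", label: "Extinguishers charged?" },
      { name: "ExitsClear",           type: "boolean", label: "Emergency exits clear?" },
      { name: "AlarmTested",          type: "boolean", label: "Alarm panel tested?" },
      { name: "Notes",                type: "text",    label: "Notes" },
    ],
  };
  // Saving this JSON to the database (not to source code) is what
  // deploys the new form to every connected and offline user.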

Powered by the Axiom Engine

This symmetry is not an accident. It is the Axiom Engine in action.

  • For the Developer: The Axiom Engine navigates the App Structure (Controllers, Pages) to build the software.
  • For the User: The Axiom Engine navigates the App Data (Orders, Customers) to run the business.

By learning to build with the AI, you are simultaneously learning how to deploy it. You aren't just coding; you are training the workforce of the future using the exact same patterns you use to do your job.

You Are the Director

In this fractal architecture, the role of the human (whether developer or end-user) shifts from "Operator" to "Director."

You are not being replaced; you are being promoted. The AI cannot do anything that isn't defined in the platform's "physics."

  • On the Build Side: The App Explorer is the boundary. The AI Builder cannot invent features that don't exist in the App Studio. It can only manipulate the explorer nodes that you can manipulate yourself.
  • On the Run Side: The HATEOAS API is the boundary. The AI Co-Worker cannot invent business actions that aren't defined in your Data Controllers. It can only click the links that you have authorized.
    • However, within that boundary, you have 100% Data Utility. Because the Agent sees exactly what you see, it can answer specific questions like "What is Rob's number?" immediately, provided you have permission to view that data.

The AI provides the labor, but you provide the intent. You direct the show, confident that the actors can only perform the script you wrote.

Labels: AI
Tuesday, November 11, 2025
Digital Co-Worker or Genius?

For over a decade, Code On Time has been the fastest way to build secure, database-driven applications for humans. The industry calls this Rapid Application Development (RAD). But recently, we realized that the rigorous, metadata-driven architecture we built for humans is also the perfect foundation for something much more powerful.

Today, we are announcing a shift in our vision. We are not just building interfaces for people anymore. We are evolving from a RAD tool for web apps into a RAD platform for the Digital Workforce. The same blueprints that drive your user interface are now the key to unlocking the next generation of autonomous, secure Artificial Intelligence.

The Digital Co-Worker (The "Glass Box")

Imagine an app that looks like ChatGPT. This app executes every prompt as if it were operating the "invisible UI" of your own database. Just like the human user, it inspects the menu options, selects data items, presses buttons, and makes notes as it goes. Then it reports back by arranging the notes into an easy-to-understand summary.

This is possible because a developer has designed the app with a real UI for your database. Both the Digital "Co-Worker" and the human UI are built from the exact same "blueprints" (called data controllers). These blueprints define the data, actions, and business logic for your application. When you log in (using your organization's existing security), the AI "digital employee" inherits your exact identity, meaning it sees only what you see and can only perform the actions available to you.

The AI "navigates" a system that has already been "security-trimmed" by user roles and simple, declarative SQL-based rules. This means if you aren't allowed to see "Salary" data, the AI is never shown the "Salary" option - it doesn't exist for that session. A "heartbeat" process allows these tasks to run 24/7, and the AI's "notes" (its step-by-step log) create a perfect, unchangeable audit trail of every decision it has made.

The Genius (The "Black Box")

Imagine another app that also looks like ChatGPT. To understand your database, this app employs a powerful, sophisticated AI model as its "brain". It operates by first consulting a comprehensive "manifest" - a detailed catalog of every "tool" and data entity it can access. This allows the AI to have a full, upfront understanding of its capabilities, so when you submit a prompt, it can process this entire catalog to create a complete, multi-step plan in a single "one-shot" operation.

This architecture is often built as a flexible, component-based system, which involves deploying several specialized services: one for the chat UI, another for the AI's "brain" (the orchestrator), and a dedicated "server" for each tool. Security is an explicit and granular consideration, requiring careful, deliberate configuration. Each tool-server's permissions must be managed, and the AI "brain" is trusted to orchestrate these tools correctly. This design allows for fine-tuning access (like "read/write all customer data") but means that security and prompt-based access must be actively managed and secured.

This "one-shot" planning model has a clear cost structure: the primary charge is for the single, complex "planning" call to the sophisticated "brain" model, which is required for every prompt. The success of the entire operation relies on the quality of this initial plan. If the AI's plan contains an error (for example, using incorrect database filter syntax), the operation may not complete as intended, and the cost of the "planning" call is still incurred. This model prioritizes a powerful, upfront planning phase and depends on the AI's reasoning to be correct the first time.
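
For comparison, a fragment of the kind of upfront tool catalog a one-shot planner must ingest might look like this. The names are illustrative; the point is that the whole catalog rides along with every planning call.

  // A hypothetical fragment of a "Genius" tool manifest. Names are
  // illustrative; real catalogs enumerate every tool and entity, and the
  // entire catalog is processed for every prompt.
  const manifest = {
    tools: [
      { name: "query_customers", description: "Read/write all customer data",
        parameters: { filter: "string", top: "number" } },
      { name: "query_orders", description: "Read/write all order data",
        parameters: { filter: "string", top: "number" } },
      // ...hundreds more entries, one per table, report, and integration...
    ],
  };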

How to Choose: The Auditable Co-Worker or the "Black Box" Genius

Your choice between the "Digital Co-Worker" and the "Genius" architecture is a strategic decision about what you value most: trust and durability or raw, unconstrained reasoning. The "Digital Co-Worker," built on the Code On Time framework, is an "invisible UI" operator. Its primary strength is its security-by-design. Because it inherits the user's exact, security-trimmed permissions, it is impossible for it to access data or perform actions it isn't allowed to. It operates within a "fenced-in yard" defined by your business rules. This makes it the perfect, auditable solution for the real-world workflows that require a quick response or need to run reliably for days or even months.

The "Genius" model, built on LLM+MCP, is a "one-shot" planner. Its primary strength is its power to reason over a massive, pre-defined database "map". It's designed for highly complex, one-time questions where the "planning" is the hardest part. This power comes at the cost of security and predictability; you are trusting a "black box" with a full set of tools, and its complex plans can be brittle, expensive, and difficult to audit. This model is best suited for scenarios where the sheer "intelligence" of the answer is more important than the security and durability of the process.

For a business, the choice is clear. The "Digital Co-Worker" is a platform you can build your entire company on. This is where it has a huge advantage: it can operate with a smart model for deep reasoning, but it also works perfectly with a fast, lightweight, and cheap model for 99% of tasks. The "Genius" model, by contrast, requires the most expensive model just to parse its complex manifest. Furthermore, the "Genius" model requires a massive upfront investment, potentially costing hundreds of thousands of dollars in custom development, integration, and security engineering before the first prompt is ever entered. The "Digital Co-Worker" platform, with its "BYOK" model and 100 free digital co-workers, makes it a risk-free, frictionless way to adopt a true workforce multiplier.

The Digital Co-Worker is Not a Chatbot

It is easy to mistake the "Digital Co-Worker" for a chatbot because they both speak your language. However, the difference is fundamental. As industry experts note, standard chatbots are "all talk and no action." They are engines of prediction, trained to guess the next word in a sentence based on frozen knowledge from the past. They can summarize a meeting or write a poem, but they are fundamentally passive observers that cannot touch your business operations.

The Digital Co-Worker is different because it is agentic. It is defined not by what it says, but by its ability to take actions autonomously on a person's behalf. When you give a chatbot a task, it tells you how to do it. When you give a Digital Co-Worker a task, it does it. It acts as an "autonomous teammate," capable of breaking down a high-level goal (like "review all pending orders and expedite shipping for anything delayed by more than two days") into a series of concrete steps and executing them without needing you to hold its hand.

This distinction changes the return on investment entirely. A chatbot is a tool for drafting text; a Digital Co-Worker is a tool for finishing jobs. It doesn't just help you draft an email to a client; it finds the client in the database, checks their order status, drafts the response, and with your permission, sends it. It moves beyond conversation into orchestration, bridging the gap between your intent and the complex reality of your database transactions.

The Co-Worker's "Glass Box": A Look Inside the HATEOAS State Machine

The "AI Co-Worker" operates by acting as a "digital human," using the application's REST Level 3 (HATEOAS) API as its "invisible UI." The entire process is driven by a built-in State Machine (SM). When a prompt is submitted, the SM's "heartbeat" processor wakes up. Its only "worldview" is the HATEOAS API response. It uses a fast, lightweight LLM (like Gemini Flash) to read the _links (the "buttons") and hints (the "tooltips") to decide the next logical step. As it works, it "makes notes" in its state_array, which serves as both its "memory" and a perfect, unchangeable audit log. This is how it auto-corrects: if an API call fails, the API returns the error with the _schema, which is just the next "note" in the log, allowing the AI to build a correct query in the next iteration.
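
A simplified sketch of one heartbeat iteration is shown below. The state_array name comes from the description above; the types and helper functions are illustrative assumptions, not the actual engine.

  // One "heartbeat" iteration, simplified. The helpers and types are
  // illustrative assumptions; only state_array is named in the text above.
  type Note = { step: string; href: string; outcome: unknown };
  type Task = { prompt: string; state_array: Note[] };

  declare function getJson(url: string, token: string): Promise<any>;
  declare function chooseNextLink(goal: string, links: unknown, notes: Note[]):
    Promise<{ rel: string; href: string }>;

  async function heartbeat(task: Task, token: string) {
    // 1. The only "worldview" is the latest HATEOAS response.
    const here = task.state_array.at(-1)?.href ?? "/api/";
    const view = await getJson(here, token);

    // 2. A fast, lightweight model reads the _links ("buttons") and hints
    //    ("tooltips") and picks the single next step toward the goal.
    const next = await chooseNextLink(task.prompt, view._links, task.state_array);

    // 3. Follow the link; success or error, the outcome is appended as the
    //    next "note". The memory and the audit trail are the same array.
    try {
      task.state_array.push({ step: next.rel, href: next.href,
        outcome: await getJson(next.href, token) });
    } catch (error) {
      task.state_array.push({ step: next.rel, href: next.href, outcome: { error } });
    }
  }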

This "glass box" model is inherently secure. The HATEOAS API is not a static catalog; it is "security-trimmed" by the server before the AI ever sees it. The app's engine uses declarative rules (like SACR) to filter the data and remove links to any actions the user isn't allowed to perform. If you don't have permission to "Approve" an order, the Digital Co-Worker will not see an "approve" link. The guardrails are not a suggestion; they are an architectural-level boundary, making it impossible for the AI to go rogue.
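
To illustrate security trimming, compare how the same order might be presented to two different roles. The shapes and URLs are hypothetical.

  // The same order, as two roles might see it. Shapes and URLs are
  // hypothetical; the point is that authorization removes the link itself.
  const orderSeenByManager = {
    OrderId: 10248, Status: "Pending",
    _links: {
      self:    { href: "/api/sales/orders/10248" },
      edit:    { href: "/api/sales/orders/10248" },
      approve: { href: "/api/sales/orders/10248/approve" },
    },
  };

  const orderSeenByClerk = {
    OrderId: 10248, Status: "Pending",
    _links: {
      self: { href: "/api/sales/orders/10248" },
      edit: { href: "/api/sales/orders/10248" },
      // No "approve" link: a Co-Worker acting for the clerk cannot even
      // discover the action, let alone invoke it.
    },
  };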

This architecture also enables true, durable autonomy. The "heartbeat" that runs the SM is designed to handle tasks that last for months. A user can "pause" or "resume" an agent simply by issuing a new prompt, as the AI can see and follow the pause link on its own "task" resource. Because the AI can also discover links to create new prompts (e.g., rel: "create_new_prompt" in the menu), a "smart" agent can decompose a complex prompt ("review 500 contracts") into 500 "child" tasks, which the heartbeat then patiently executes in parallel.

Beyond the Database: The Universal Interface

The power of the Digital Co-Worker extends far beyond the SQL database. The same "blueprints" (data controllers) that define your customer tables can also define "API Entities" (virtual tables that connect to external systems like SharePoint, Google Drive, or third-party CRMs).

To the AI, these external sources look exactly like the rest of the "invisible UI." It doesn't need to learn a new API, manage complex keys, or navigate different security protocols. It simply follows a link to "Documents" or "Spreadsheets" in its menu, and the application's engine handles the complex connection logic behind the scenes, presenting the external data as just another set of rows and actions.

This solves the single hardest problem in enterprise AI: secure access to unstructured data. Just like with the database, the system applies declarative security rules to these external sources. If a user is only allowed to see SharePoint files they created, the Digital Co-Worker will only discover those specific files. It enables a secure, federated search and action capability (allowing the AI to "read" a contract PDF and "update" a database record in one smooth motion) without ever exposing the organization's entire document repository to a "black box."
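
A brief sketch of what that looks like from the agent's side, assuming a hypothetical "Documents" entity and filter syntax.

  // An "API Entity" from the agent's point of view: a SharePoint library
  // is just another collection behind a link. URL, filter syntax, and
  // field names are illustrative; the connection logic lives server-side.
  async function listMyContracts(token: string) {
    const res = await fetch("/api/sales/documents?filter=CreatedBy eq 'me'", {
      headers: { Authorization: `Bearer ${token}` },
    });
    const documents = await res.json();
    // Each row looks like any database row: fields plus links to the
    // actions this user is allowed to perform on it, for example:
    //   { Name: "Mercury Logistics MSA.pdf",
    //     _links: { download: { href: "..." }, self: { href: "..." } } }
    return documents.items.map((d: { Name: string }) => d.Name);
  }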

The Future is Built-In: Rapid Agent Development

The age of the expensive, brittle "Genius" AI is ending. The age of the secure, durable "Digital Co-Worker" has arrived. We believe that building a Digital Workforce shouldn't require a team of data scientists and six months of integration; it should be a standard feature of your application platform.

In our upcoming releases, we are delivering the tools to make this a reality. By simply building your application as you always have, you will be simultaneously architecting the secure, HATEOAS-driven environment where your Digital Co-Workers will live and work, powered by the Axiom Engine. Your database is ready to talk. Stay tuned for our updated roadmap - the workforce is coming, and it will operate under the full control and permission of the human user.

Labels: AI, RESTful