Blog: Posts from November, 2025

How "Context Contamination" causes catastrophic hallucinations, and why the "Boring" Micro-Ontology is the safest path for Enterprise AI.

Friday, November 14, 2025
The "Mercury" Incident: Why the Global AI Brain is Dangerous

Imagine the scene. A top Sales Director at a major industrial firm opens their AI assistant. They are looking for a critical status update on a potential new client, "Mercury Logistics."

The Director types: "What is the current status of the Mercury Lead?"

The AI pauses for a moment, its "thinking" animation spinning, and then replies with supreme confidence:

"The Mercury Lead is currently unstable and highly toxic. Safety protocols indicate a high risk of contamination during the negotiation phase. Recommend immediate containment protocols."

The Sales Director stares at the screen in horror. Did the AI just recommend treating a high-value client like a biohazard?

What happened?

The AI didn't break. It did exactly what it was designed to do. It acted as a "Global Brain," searching the company's entire centralized Data Lake for the keywords "Mercury" and "Lead."

The problem was that the company also has a Manufacturing Division that uses the chemical elements Mercury (Hg) and Lead (Pb) in production testing. The AI, lacking context, conflated a "Sales Lead" with a "Heavy Metal," resulting in a catastrophic hallucination.

This is the "Mercury" Incident—a perfect example of why the industry's obsession with monolithic, all-knowing AI systems is a dangerous dead end for the enterprise.

The Problem with the "Genius" Model (The Global Ontology)

The current trend in enterprise AI is to build a "Genius." The promise is seductive: "Dump all your data—from Salesforce, SAP, Jira, and SharePoint—into one massive Vector Database or Data Lake. The AI will figure it out."

This creates a Global Ontology—a unified, but deeply confused, view of the world.

The Failure Mode: Semantic Ambiguity

The root cause of the "Mercury" Incident is Semantic Ambiguity. In a global context, words lose their meaning.

  • In Sales, "Lead" means a potential customer.
  • In Manufacturing, "Lead" means a toxic metal.
  • In HR, "Lead" means a team manager.

When you force an AI to reason over all of these simultaneously, you are inviting disaster. The AI has to guess which definition applies based on subtle clues in your prompt. If it guesses wrong, it hallucinates.
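The conflation is easy to reproduce with a naive keyword search. The following sketch uses a made-up data lake and hypothetical records to show how a global match mixes domains, while a query scoped to one app cannot:

```python
# A minimal sketch of the failure mode: a naive keyword search over a
# "global" corpus mixes records from unrelated domains. All records and
# source names here are hypothetical.

GLOBAL_DATA_LAKE = [
    {"source": "crm",           "text": "Lead 'Mercury Logistics' moved to Proposal stage."},
    {"source": "manufacturing", "text": "Lead (Pb) and Mercury (Hg) contamination detected in Lot #404."},
    {"source": "hr",            "text": "Team lead assigned to onboarding project."},
]

def global_search(query_terms):
    """Return every record containing all query terms - no domain awareness."""
    terms = [t.lower() for t in query_terms]
    return [r for r in GLOBAL_DATA_LAKE if all(t in r["text"].lower() for t in terms)]

def scoped_search(query_terms, domain):
    """Return matches only from the app the user is standing in."""
    return [r for r in global_search(query_terms) if r["source"] == domain]

# The global brain sees both worlds at once and must guess which one you mean.
print([r["source"] for r in global_search(["mercury", "lead"])])
# The scoped query has exactly one world to search.
print([r["source"] for r in scoped_search(["mercury", "lead"], "crm")])
```

The guess the AI makes in the global case is exactly where the hallucination enters.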

The Hidden Cost: Token Bloat

To fix this, developers have to engage in "Prompt Engineering," feeding the model thousands of words of instructions: "You are a Sales Assistant. When I say 'Lead', I mean a customer, NOT a metal. Ignore data from the Manufacturing database..."

This is expensive. Every time you send that massive instruction block, you are paying for thousands of tokens, slowing down the response, and praying the model doesn't get confused anyway.
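A quick back-of-the-envelope calculation shows why. The token counts and prices below are purely illustrative assumptions, not quotes from any provider:

```python
# Rough cost of shipping a disambiguation preamble with every request.
# All numbers are hypothetical, chosen only to make the shape of the
# problem visible.

PREAMBLE_TOKENS = 3000       # "You are a Sales Assistant. 'Lead' means..."
SCOPED_TOKENS = 150          # micro-ontology: only the app's own vocabulary
PRICE_PER_1K_INPUT = 0.005   # dollars per 1,000 input tokens (assumed)
REQUESTS_PER_DAY = 10_000

def daily_cost(tokens_per_request):
    """Input-token cost of repeating the same context on every request."""
    return tokens_per_request / 1000 * PRICE_PER_1K_INPUT * REQUESTS_PER_DAY

print(f"global brain:   ${daily_cost(PREAMBLE_TOKENS):,.2f}/day")
print(f"micro-ontology: ${daily_cost(SCOPED_TOKENS):,.2f}/day")
```

Whatever the real prices are, the ratio is what matters: the preamble is paid for on every single prompt.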

The Solution: The "Employee" Model (The Micro-Ontology)

There is a better way. It’s boring, it’s safe, and it mimics how human organizations actually work.

When you walk into a Hospital, you don't ask the receptionist for a pizza quote. You know by the context of the building that you are there for medical issues.

Code On Time applies this same logic to AI through the concept of the Digital Co-Worker and the Micro-Ontology.

Standing in the Right Room

Instead of a single "Global Brain," Code On Time builds a Society of Apps.

  • You have a CRM App.
  • You have a Manufacturing App.
  • You have an HR App.

Each app defines its own universe through a Micro-Ontology, delivered automatically via its HATEOAS API.

Crucially, this isn't a cryptic technical schema. The API entry point faithfully reproduces the Navigation Menu of the visible UI, complete with the same human-friendly labels and tooltips. This places the Co-Worker on the exact same footing as the human user.

Because the AI reads the exact same map as the human, it doesn't need to be "trained" on how to use the app. It just looks at the menu and follows "Sales Leads" because the tooltip says "Manage potential customers."
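To make this concrete, here is one hypothetical shape such an entry point might take, with field names and URLs invented for illustration (the actual API format may differ):

```python
# A hypothetical rendering of the API entry point described above: the
# navigation menu, reproduced as HATEOAS links with the same labels and
# tooltips the human sees. Field names and URLs are assumptions.

CRM_ENTRY_POINT = {
    "links": [
        {"rel": "collection", "label": "Sales Leads",
         "tooltip": "Manage potential customers", "href": "/v2/leads"},
        {"rel": "collection", "label": "Opportunities",
         "tooltip": "Track deals in progress", "href": "/v2/opportunities"},
    ]
}

def follow_by_label(entry_point, label):
    """Navigate the way a Co-Worker would: pick the link whose
    human-friendly label matches, rather than guessing table names."""
    for link in entry_point["links"]:
        if link["label"] == label:
            return link["href"]
    raise LookupError(f"No menu item labeled {label!r} in this app")

print(follow_by_label(CRM_ENTRY_POINT, "Sales Leads"))
```

Note that asking this entry point for "Safety Data Sheets" simply fails: the Manufacturing world is not in the map.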

The Mercury Incident: Solved

Let's replay the scenario with a Code On Time Digital Co-Worker.

Scenario A: The User is in the CRM App. The user logs into the CRM. The Digital Co-Worker inherits their context. The "Manufacturing" database literally does not exist in this world.

The Prompt: “What is the current status of the Mercury Lead?”

The Action: The Co-Worker queries the only "Lead" table it can see—the Sales Leads table. There is zero ambiguity.

The Outcome:

"The Lead 'Mercury Logistics' is in the 'Proposal' stage. The closing probability is 60%."

Scenario B: The User is in the Manufacturing App. The user logs into the production floor system.

The Prompt: “What is the current status of the Mercury Lead?”

The Action: The Co-Worker queries the Safety Data Sheets.

The Outcome:

"Warning: Detected 'Lead' and 'Mercury' contamination in Lot #404. Status: Quarantine."

By restricting the context to the domain of the application, this class of hallucination becomes structurally impossible. The Co-Worker cannot conflate data it cannot see.

The Best of Both Worlds: Federated Scalability

But what if you need data from both systems?

This is where Federated Identity Management (FIM) comes in. It acts as the trusted hallway between your apps.

If the Sales Director intentionally needs to know if "Mercury Logistics" has any outstanding safety violations that might block the deal, they can explicitly ask the Co-Worker to check.

The Co-Worker, using its FIM passport, "walks down the hall" to the Manufacturing App. It enters that new Micro-Ontology, performs the search in that context, and reports back.

This turns "Accidental Contamination" into "Intentional Discovery." It keeps the boundaries clear while still allowing for cross-domain intelligence.

The Verdict: Boring is Safe

The promise of a "Genius AI" that knows everything is a marketing fantasy that leads to expensive, fragile, and dangerous systems.

Enterprises don't need an AI that knows everything. They need an AI that knows where it is.

  • Global Brain: High Cost, High Risk, Unpredictable.
  • Digital Co-Worker: Low Cost, Zero Risk, Deterministic.

By embracing the "boring" architecture of isolated Micro-Ontologies, you don't just save money on tokens. You save yourself from the nightmare of explaining to a client why your AI called them toxic.

Labels: AI, Micro Ontology
Friday, November 14, 2025
The Shortest Path to Agentic AI is the App You Already Know How to Build

Every CIO is currently asking the same question: "Which of our processes are addressable by AI?"

The answer is usually: "All of them, but it will cost millions to build the custom agents to do it."

The industry has convinced us that building AI Agents requires a completely new stack: Vector Databases, Python orchestration, RAG pipelines, and complex prompt engineering. For the average developer, this looks like a mountain of new skills to learn and a minefield of security risks to manage.

But what if the perfect environment for an AI Agent wasn't a vector database? What if it was the Traditional Web Application you have been building for decades?

The Digital Workforce Platform makes it possible.

Good for Humans = Good for AI

At Code On Time, our philosophy is simple: If an application is easy for a human to understand, it is easy for an AI to operate.

Think about what makes a "Traditional App" good for a human user:

  1. Clear Navigation: A menu structure that groups related tasks.
  2. Structured Data: Grids that allow filtering, sorting, and finding specific records.
  3. Safety Rails: Forms that validate input and buttons that only appear when an action is allowed.

It turns out, these are the exact same things an AI Agent needs to function reliably.

How the Platform Works: The "Invisible" Translation

When you build a standard database application with the Builder Edition, you aren't just drawing screens for people. You are constructing the Axiom Engine's map.

  1. You Build for Humans: You connect your database (SQL Server, Oracle, etc.). You arrange the navigation. You define "Friendly" labels for your cryptic columns. You set up the security roles.
  2. We Build for Agents: The Axiom Engine automatically projects your human UI into a HATEOAS API (the "Invisible UI").
    • Your Menu becomes the Agent's "Map."
    • Your Grids become the Agent's "Search Engine."
    • Your Forms become the Agent's "Toolbox."
    • Your Security Roles become the Agent's "Guardrails."

You don't need to "teach" the AI how to approve an order. You just build the "Approve" button for your human manager. The Axiom Engine automatically teaches the Digital Co-Worker how to press it.
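The projection can be sketched in a few lines. The order fields, roles, and URLs below are hypothetical; the point is that one rule drives both the human's button and the agent's link:

```python
# A sketch of the "Invisible UI" projection: the same rule that shows a
# human the "Approve" button decides whether the agent sees an "approve"
# link. States, roles, and paths are invented for illustration.

def visible_actions(order, roles):
    """Actions rendered on the human form - and projected as HATEOAS rels."""
    actions = []
    if order["status"] == "Submitted" and "Manager" in roles:
        actions.append({"rel": "approve", "href": f"/orders/{order['id']}/approve"})
    if order["status"] in ("Draft", "Submitted"):
        actions.append({"rel": "cancel", "href": f"/orders/{order['id']}/cancel"})
    return actions

order = {"id": 7, "status": "Submitted"}
print([a["rel"] for a in visible_actions(order, roles={"Manager"})])
print([a["rel"] for a in visible_actions(order, roles={"Sales"})])
```

Build the button once; both audiences get it, governed by the same rule.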

Inherently Secure by Design

In most "Agentic" architectures, the AI is given a "Master Key" (a service account) to the database, and the developer has to write complex prompts to tell it what not to do. This is a recipe for disaster.

Code On Time flips this model. The Digital Co-Worker doesn't run as a "System Administrator"; it runs as the user.

  • Identity Inheritance: If a Sales Representative logs in and asks the Co-Worker for help, the Agent inherits that Rep’s exact identity, roles, and access tokens.
  • Zero Trust Navigation: The Agent cannot "hallucinate" a path to sensitive HR data because the HATEOAS API simply does not render those links for that user. It isn't just that the Agent shouldn't click them; it's that the Agent cannot see them.

This means your existing security investment (your roles, your permissions, and your Static Access Control Rules) instantly applies to your AI workforce. You don't need to build a separate "AI Firewall"; your application is the firewall.
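The following sketch illustrates the idea with invented menu items and roles: links a user cannot follow are never emitted, so the agent has nothing to click:

```python
# A sketch of "Zero Trust Navigation" under hypothetical menu items and
# roles: the entry point is rendered per user, so forbidden links are
# simply absent rather than merely discouraged.

MENU = [
    {"label": "Orders",    "roles": {"Sales", "Manager"}},
    {"label": "Customers", "roles": {"Sales", "Manager"}},
    {"label": "Salaries",  "roles": {"HR"}},
]

def render_entry_point(user_roles):
    """Emit only the links the signed-in user is allowed to follow."""
    return [item["label"] for item in MENU if item["roles"] & user_roles]

print(render_entry_point({"Sales"}))  # 'Salaries' is never rendered at all
```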

No New Skills Required

This approach solves the "AI Skills Gap" overnight.

  • You don't need to be a Prompt Engineer. You have the AI Tutor and AI Builder inside the App Studio to help you configure the app using natural language. If you are an experienced Code On Time developer, you can operate the brand new App Explorer on day one, as you already know your way around.
  • You don't need to be a Security Expert. Since the Agent runs as the user, it inherits the exact same security rules you defined for the human. It physically cannot see data or perform actions that the user couldn't do themselves.
  • You don't need a massive budget. You are running on your own infrastructure ("localhost" or on-prem server), connecting to your own database, using your own LLM key. There is no "middleman tax."

Scalable by Default

The "Traditional App" model is inherently scalable because it is stateless. Whether you have one human user or 100 Digital Co-Workers hitting your database, the application server handles the load using standard, proven web technologies.

Because the Agent is "browsing" your app just like a human (but faster), you can debug it just like a human. If the Agent gets stuck, you don't need to inspect a neural network; you just switch to API Mode in the App Studio and see exactly which link was missing.

Any Model, Any Budget

Because the Axiom Engine (a built-in component of your app) provides such a structured, deterministic environment, you are not locked into using the most expensive "Genius" models for every task.

In a typical "Genius" architecture, you need a massive model (like GPT-4o or Gemini 2.5 Pro) just to understand the chaotic environment. In the Digital Workforce Platform, the environment is already organized.

  • Use Cheap Models for Speed: Because the HATEOAS API provides clear "Next Step" links, smaller, faster, and cheaper models (like Gemini Flash) can navigate your app with incredible accuracy and speed.
  • Use Smart Models for Reasoning: Save the expensive models for complex analysis, while letting the cheap models handle the navigation and data entry.
  • Bring Your Own Key (BYOK): You choose the model provider. Whether it’s OpenAI, Google, or Anthropic, the platform adapts. You are never locked into a vendor, and you benefit immediately as model prices drop.
  • Role-Based Governance: You define the rules of engagement. In the app settings, you can assign premium models to high-value roles (like "Managers") while restricting broader roles to cost-effective models. You can even set hard limits on cost-per-prompt and execution time to ensure your AI budget never breaks.
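A governance table of this kind might look like the following sketch. The setting names, model names, and limits are hypothetical, not actual Code On Time configuration:

```python
# An illustrative role-to-model governance table. Model names, costs,
# and the lookup rule are assumptions made for this sketch.

GOVERNANCE = {
    "Managers": {"model": "premium-reasoning", "max_cost_per_prompt": 0.50, "timeout_s": 120},
    "*":        {"model": "fast-lightweight",  "max_cost_per_prompt": 0.02, "timeout_s": 30},
}

def policy_for(roles):
    """Pick the first matching role policy, falling back to the default."""
    for role in roles:
        if role in GOVERNANCE:
            return GOVERNANCE[role]
    return GOVERNANCE["*"]

print(policy_for({"Managers"})["model"])
print(policy_for({"Sales"})["model"])
```

The hard limits (cost per prompt, execution time) become plain configuration data instead of hopeful prompt instructions.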

Conclusion

The industry is trying to reinvent the wheel by building complex "Agentic Frameworks" from scratch. We believe the wheel was already working fine.

By treating your Traditional Application as the source of truth, Code On Time turns your existing development team into AI Architects. You don't need to rebuild your business for AI. You just need to give your database a voice.

Download the Builder Edition today, connect your database, and meet your new Co-Workers.

Thursday, November 13, 2025
The Three Modes of Rapid Agent Development

Building a Digital Workforce requires more than just a form designer. It requires a platform that allows you to see your application through multiple lenses: the eyes of the human user, the perspective of the frontend developer consuming your API, and the logic of the system architect.

In our upcoming release, the App Studio, the heart of the Code On Time platform, is evolving. Already a state-of-the-art Rapid Application Development environment featuring live app UI inspection and the powerful App Explorer, it is now supercharged by the AI Tutor and Builder. We are introducing three distinct operational modes that make Rapid Agent Development (RAD) a reality.

Whether you are "vibe coding" a UI, debugging a state machine, or writing complex business logic, the Studio now adapts to your workflow.

1. App Mode: The Visual Builder

For the Human Interface.

This is the "Live Mode" our customers first experienced in the December 2024 release. You interact with your running application exactly as your end-users will.

  • Live Inspection: Click on any field, button, or grid in the running app to instantly locate it in the configuration hierarchy.
  • "Vibe Coding": Use the AI Builder to prompt changes ("Move the status field to the top," "Color code high-value orders") and watch them apply instantly to the live UI.
  • Purpose: This mode ensures that the "Invisible UI" (your Data Controllers) produces a beautiful, functional Visible UI for your human users.

2. API Mode: The Agent's View

For the AI Architect.

This is the killer feature for the agentic era. When you switch to API Mode, the "grids and forms" disappear. In their place, the App Studio renders a live, interactive "Documentation View" of your HATEOAS resources.

  • Visualizing the State Machine: Instead of guessing what the AI sees, you see it. The HATEOAS links (rel: "approve", rel: "cancel") are rendered as clickable actions.
  • Live Inspection: Just as you inspect UI elements in App Mode, you can inspect the API documentation. Click on any resource link or action in the documentation view to instantly locate its definition in the App Explorer.
  • Live Debugging: You can click through the API just like an agent (or a custom frontend developer) would. If a link is missing, you know immediately that a security rule or business logic filter has hidden it.
  • Zero-Code Interaction: You can follow the POST and PATCH links directly from the documentation with auto-complete support for JSON bodies, allowing you to test complex transactions without writing scripts.

A Note on Postman: Trust but Verify

API Mode is optimized for exploration and design. It does not replace industry-standard tools like Postman. In fact, it is the perfect companion.

While API Mode lets you see the "Agent's World" from the inside, Postman lets you verify it from the point of view of a frontend developer. Every Code On Time app comes with a built-in OAuth 2.0 Authorization Server. You can easily connect Postman (or any similar tool) using the Authorization Code flow with PKCE to build custom front-ends, simulate a truly external mobile app, or test a third-party integration. Use API Mode to build the logic; use Postman to prove it works for the outside world.
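For reference, the PKCE values that a client such as Postman computes are defined by RFC 7636. This sketch derives a code_verifier and its S256 code_challenge; endpoint URLs and client registration details are app-specific and omitted here:

```python
import base64
import hashlib
import secrets

# PKCE (RFC 7636): the client invents a secret verifier, sends only its
# hash with the authorization request, then proves possession of the
# verifier at the token endpoint.

def make_pkce_pair():
    # The verifier must be 43-128 URL-safe characters; token_urlsafe
    # produces a safe subset of the allowed alphabet.
    verifier = secrets.token_urlsafe(64)
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    # code_challenge = BASE64URL(SHA256(verifier)) with padding stripped.
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# Send code_challenge + code_challenge_method=S256 in the authorization
# request; send code_verifier in the token request.
print(len(verifier), challenge[:8] + "...")
```

Tools like Postman handle this handshake for you; seeing the derivation makes it easier to diagnose a failed token exchange.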

3. Workspace Mode: The Logic Lab

For the Deep Work.

Sometimes things break and you need to fix them. Or sometimes, you just need a focused environment to write code.

In the standard App and API modes, business rules and text properties open as modal popups over the live interface. This is great for quick edits, but for deep work, you need more space.

Workspace Mode decouples the development environment from the live application. It runs as a separate, stable web app on your local machine.

  • Permanent Context: The App Explorer and Business Rules are displayed as permanent tabs that remain open until you close them, rather than temporary modals.
  • The "Safe Haven": You can perform massive architectural changes that might temporarily break the runtime, all while having full access to your configuration tools.

Digital Workforce

No matter which mode you choose (App, API, or Workspace), the Digital Workforce is always available. You can invoke the Tutor for free help or the Builder for credit-powered automation at any time.

In App and API modes, the Tutor and Builder are a click away. In Workspace Mode, they get their own permanent tab, allowing you to keep a long-running "chat with your development workforce" open alongside your business logic and configuration trees.

Conclusion

Rapid Agent Development isn't just about generating code; it's about having the right visibility at the right time. With App Mode, you build for humans. With API Mode, you build for agents. With Workspace Mode, you get a traditional coding experience for deep logic that powers them both.

Welcome to the new standard of development.

image1.png

This is the"Switch Application View”.menu. The App Studio remembers your selection and will activate your preferred mode of application development. If the app is broken,then the “Workspace” mode is the only way to revive it.

image2.png

The screenshot displays the App Explorer following a live inspection of the "Supplier Company Name" column header. The hierarchy and properties appear side-by-side, with the "Label" property of the "ProductName" field selected. A brief description explains the property's purpose. Tabs within the title grant quick access to "Settings", "Data", "Models", "Controllers", and "Pages". The right side of the title contains buttons for "Search", "Ask Builder", "Display Hierarchy as Table", "Split Vertically", and "Close".