Micro Ontology

Why the fastest way to teach AI your business is to build an app.

Monday, December 1, 2025
Building the Enterprise Ontology for AI

If you ask a consultant how to make your enterprise data "AI-Ready," they will likely give you a quote for $500,000 and a 12-month timeline. Their plan will involve:

  1. Extracting petabytes of data into a centralized Data Lake.
  2. Training or Fine-Tuning a custom model (at massive GPU cost).
  3. Building Robots (RPA) to click buttons on your legacy screens.
  4. Hiring Teams of data scientists to clean and tag the mess.

There is a faster, safer, and less expensive way. It creates an ontology that is instantly ready for both AI and human consumption, without moving a single byte of data.

The secret? Build a Database Web App.

The App Is The Ontology

In the Code On Time ecosystem, an application is not just a collection of screens. It is a Micro-Ontology—a self-contained, structured definition of a specific business domain (e.g., Sales, HR, Inventory).

When you use the App Studio to visually drag-and-drop fields, define lookup lists, and configure business rules, you are doing something profound:

  • For Humans: You are building a high-quality, Touch UI application to manage the data.
  • For AI: You are defining the Entities, Relationships, and Physics of that domain.

The platform automatically compiles your visual design into a HATEOAS API—the "Invisible UI" for Artificial Intelligence. This API doesn't just serve data; it serves meaning. It tells the AI exactly what an object is, how it relates to others, and—crucially—what actions are legally allowed at this exact second.
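
To make this concrete, here is a minimal sketch of the kind of hypermedia response such an API could return, written as a Python dict. The field names, paths, and link relations are assumptions made for illustration, not the platform's actual wire format.

```python
# Hypothetical hypermedia response for a single order, shown as a Python dict.
# Field names, URLs, and link relations are invented for this sketch.
order_response = {
    "id": 10248,
    "customer": "Mercury Logistics",
    "status": "Pending Approval",
    "total": 4250.00,
    "_links": {
        # What the object is and how it relates to others.
        "self":     {"href": "/api/orders/10248"},
        "customer": {"href": "/api/customers/MERCL"},
        "details":  {"href": "/api/orders/10248/details"},
        # What is allowed right now. If business rules forbid an action,
        # its link is simply absent from the response.
        "approve":  {"href": "/api/orders/10248/approve", "method": "POST"},
        "cancel":   {"href": "/api/orders/10248/cancel",  "method": "POST"},
    },
}
```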

Infrastructure Zero

The beauty of this approach is simplicity. You do not need a Vector Database cluster, a GPU farm, or an expensive SaaS integration platform.

The deployed web app is the only infrastructure you need.

Once you hit "Publish," your app functions as a Micro-Ontology Host. It is instantly live.

  • Humans log in via the browser to verify data and approve tasks.
  • Digital Co-Workers connect via the API to analyze data and execute workflows.

They share the same brain, the same rules, and the same database connection.

From Micro to Macro: The Federated Mesh

The biggest mistake companies make is trying to build a "Monolith"—one giant brain that knows everything. This creates security risks and hallucination loops (e.g., confusing "Sales Leads" with "Lead Poisoning").

We propose a Gradual Architecture.

  1. Start Small: Build a "Sales App." It is the Micro-Ontology for customers and deals.
  2. Expand: Build an "Inventory App." It is the Micro-Ontology for products and stock.
  3. Federate: Use our built-in Federated Identity Management (FIM) to link them.

With FIM, a Digital Co-Worker in the Sales App can seamlessly "hop" to the Inventory App to check stock levels. It carries the User's Identity across the boundary, ensuring it only sees what that specific user is allowed to see. You build a Unified Enterprise Ontology not by centralizing data, but by connecting it.
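
As a rough sketch of the requesting side, a Co-Worker's hop might look like the code below. The endpoints, field names, and token-exchange call are assumptions invented for the example; the actual FIM handshake may differ.

```python
import requests

SALES_API = "https://sales.example.com/api"         # assumed URLs, for illustration
INVENTORY_API = "https://inventory.example.com/api"

def check_stock_for_deal(user_token: str, deal_id: int) -> dict:
    """Hop from the Sales app to the Inventory app, carrying the user's identity
    so that the Inventory app can enforce its own access rules."""
    auth = {"Authorization": f"Bearer {user_token}"}

    # 1. Read the deal in the Sales Micro-Ontology as the signed-in user.
    deal = requests.get(f"{SALES_API}/deals/{deal_id}", headers=auth, timeout=10)
    deal.raise_for_status()
    product_id = deal.json()["product_id"]          # hypothetical field name

    # 2. Exchange the user's token for one the Inventory app trusts
    #    (hypothetical token-exchange endpoint standing in for the FIM handshake).
    hop = requests.post(
        f"{INVENTORY_API}/oauth/token-exchange",
        data={"subject_token": user_token},
        timeout=10,
    )
    hop.raise_for_status()
    inventory_auth = {"Authorization": f"Bearer {hop.json()['access_token']}"}

    # 3. Query stock in the Inventory Micro-Ontology under the same identity.
    stock = requests.get(
        f"{INVENTORY_API}/products/{product_id}/stock",
        headers=inventory_auth,
        timeout=10,
    )
    stock.raise_for_status()
    return stock.json()
```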

Safety and Cost Control

This architecture solves the two biggest fears of the CIO: Runaway Costs and Data Gravity.

  • Identity-Based Constraints: The AI runs as the user. It is limited by the user's role, ensuring it cannot access sensitive HR files or approve unbudgeted expenses.
  • Cost Containment: You control the loop. You define how many iterations the Co-Worker can run, how much time it can spend, and which LLM flavor (GPT-4o, Gemini, Claude) it uses.
  • Zero Data Gravity: With our BYOK (Bring Your Own Key) model, you pay your AI provider directly for consumption. Your data stays in your database. It is never trained into a public model, and you are never locked into our platform.
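
Expressed as a sketch, the guardrails above could look like the configuration below. The property names and values are invented for illustration and are not the platform's actual settings.

```python
# Illustrative guardrails for a single Digital Co-Worker.
# Property names and limits are invented for this sketch.
co_worker_config = {
    "identity": "runs-as-signed-in-user",   # inherits the user's roles and data access
    "max_iterations": 8,                    # hard cap on reasoning/tool-call loops
    "max_runtime_seconds": 120,             # stop the task if it runs too long
    "model": "gpt-4o",                      # or "gemini-1.5-pro", "claude-3-5-sonnet", ...
    "api_key_source": "customer-owned",     # BYOK: you pay your AI provider directly
}
```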

Stop Training. Start Building.

You don't need expensive consultants to interpret your data. You know your business.

Use App Studio to define it. Use the Micro-Ontology Factory to deploy it. And let your Digital Workforce run it.

Learn more about the Micro-Ontology Factory.
Sunday, November 23, 2025
Stop Building Data Lakes. Start Building a Knowledge Mesh.

For the last decade, the standard advice for Enterprise Intelligence was simple: "Put everything in one place." We spent millions building Data Warehouses and Data Lakes. Now, in the AI era, we are trying to dump those lakes into Vector Databases to create a "Global Ontology" for our LLMs.

It isn't working.

Centralizing data strips it of its context. To a Data Lake, a "Lead" in Sales looks exactly like a "Lead" in Manufacturing. To an AI, that ambiguity is a hallucination waiting to happen. Furthermore, a passive database cannot enforce rules. It can tell an AI what the budget is, but it cannot stop the AI from spending it.

The future of Enterprise AI is not Monolithic; it is Federated.

1. The Unit of Intelligence: The Micro-Ontology

At Code On Time, we believe the best way to model the enterprise is to respect its natural boundaries. Do not mash HR and Inventory data together.

Instead, build Micro-Ontologies.

A Micro-Ontology is a self-contained unit of Data, Logic, and Security. In the Code On Time platform, every application you build is automatically a Micro-Ontology.

  • It Speaks "Machine": The Axiom Engine automatically generates a HATEOAS API (The Invisible UI) that describes the data structure to the AI in real-time.
  • It Enforces Physics: Unlike a passive database, a Micro-Ontology enforces business logic. If an invoice cannot be approved, the API removes the approve link, so the AI cannot act on an option that does not exist (see the sketch after this list).
  • It Enforces Security: It carries its own ACLs and Static Access Control Rules (SACR). It doesn't rely on a central guardrail; it protects itself.
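
Here is a minimal sketch of that behavior from the Co-Worker's point of view. The field names and link relations are invented for the example.

```python
# Two hypothetical responses for the same invoice, before and after it becomes
# approvable. Field names and link relations are invented for this sketch.
invoice_draft = {
    "id": 7001, "status": "Draft",
    "_links": {"self": {"href": "/api/invoices/7001"}},          # no "approve" link
}
invoice_ready = {
    "id": 7001, "status": "Submitted",
    "_links": {
        "self":    {"href": "/api/invoices/7001"},
        "approve": {"href": "/api/invoices/7001/approve", "method": "POST"},
    },
}

def can_approve(invoice: dict) -> bool:
    # The Co-Worker never guesses: it acts only on links the API actually offers.
    return "approve" in invoice.get("_links", {})

assert not can_approve(invoice_draft)
assert can_approve(invoice_ready)
```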

2. From Micro to Macro: The Federated Mesh

So, how do you get a Full Enterprise Ontology without building a monolith? You connect the nodes.

We utilize Federated Identity Management (FIM) to stitch these Micro-Ontologies together into a Knowledge Mesh.

  • The Link: A "Sales App" (Micro-Ontology A) can define a virtual link to the "Inventory App" (Micro-Ontology B).
  • The Traversal: When your Digital Co-Worker needs to check stock levels for a customer, it seamlessly "hops" from the Sales API to the Inventory API.
  • The Identity: Crucially, it carries the User's Identity across the gap. The Inventory app knows exactly who is asking and enforces its local security rules.
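
To illustrate that last point, here is a simplified sketch of enforcement on the receiving side of a hop. The role names, claims, and record shapes are invented for the example.

```python
# Simplified sketch of local enforcement in the Inventory Micro-Ontology.
# Role names, claims, and record shapes are invented for illustration.
STOCK = [
    {"product": "Widget A",    "on_hand": 120, "restricted": False},
    {"product": "Prototype X", "on_hand": 3,   "restricted": True},
]

def visible_stock(identity: dict) -> list[dict]:
    """Return only the records this federated identity is allowed to see."""
    if "InventoryAuditor" in identity.get("roles", []):
        return STOCK                                    # full visibility
    return [row for row in STOCK if not row["restricted"]]

# A Sales user hopping over via FIM sees only unrestricted items.
sales_user = {"sub": "maria@example.com", "roles": ["Sales"]}
print(visible_stock(sales_user))   # -> [{'product': 'Widget A', ...}]
```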

3. Control is the Missing Link

The definition of an "AI Ontology" usually stops at inference—helping the machine understand. We go one step further: Control.

A Full Ontology built with Code On Time is an Executable system. It allows you to deploy a fleet of thousands of Digital Co-Workers who don't just analyze the enterprise—they operate it. They can read the Sales Ontology to find a deal, cross-reference the Legal Ontology to check compliance, and execute a transaction in the Finance Ontology to book the revenue.

And they do it all without you ever moving a single byte of data into a central lake.

Build your first Micro-Ontology today. Your Digital Workforce is waiting.
Labels: AI, Micro Ontology
Friday, November 14, 2025
The "Mercury" Incident: Why the Global AI Brain is Dangerous

Imagine the scene. A top Sales Director at a major industrial firm opens their AI assistant. They are looking for a critical status update on a potential new client, "Mercury Logistics."

The Director types: "What is the current status of the Mercury Lead?"

The AI pauses for a moment, its "thinking" animation spinning, and then replies with supreme confidence:

"The Mercury Lead is currently unstable and highly toxic. Safety protocols indicate a high risk of contamination during the negotiation phase. Recommend immediate containment protocols."

The Sales Director stares at the screen in horror. Did the AI just tell them to treat a high-value client like a biohazard?

What happened?

The AI didn't break. It did exactly what it was designed to do. It acted as a "Global Brain," searching the company's entire centralized Data Lake for the keywords "Mercury" and "Lead."

The problem was that the company also has a Manufacturing Division that uses the chemical elements Mercury (Hg) and Lead (Pb) in production testing. The AI, lacking context, conflated a "Sales Lead" with a "Heavy Metal," resulting in a catastrophic hallucination.

This is the "Mercury" Incident—a perfect example of why the industry's obsession with monolithic, all-knowing AI systems is a dangerous dead end for the enterprise.

The Problem with the "Genius" Model (The Global Ontology)

The current trend in enterprise AI is to build a "Genius." The promise is seductive: "Dump all your data—from Salesforce, SAP, Jira, and SharePoint—into one massive Vector Database or Data Lake. The AI will figure it out."

This creates a Global Ontology—a unified, but deeply confused, view of the world.

The Failure Mode: Semantic Ambiguity

The root cause of the "Mercury" Incident is Semantic Ambiguity. In a global context, words lose their meaning.

  • In Sales, "Lead" means a potential customer.
  • In Manufacturing, "Lead" means a toxic metal.
  • In HR, "Lead" means a team manager.

When you force an AI to reason over all of these simultaneously, you are inviting disaster. The AI has to guess which definition applies based on subtle clues in your prompt. If it guesses wrong, it hallucinates.

The Hidden Cost: Token Bloat

To fix this, developers have to engage in "Prompt Engineering," feeding the model thousands of words of instructions: "You are a Sales Assistant. When I say 'Lead', I mean a customer, NOT a metal. Ignore data from the Manufacturing database..."

This is expensive. Every time you send that massive instruction block, you are paying for thousands of tokens, slowing down the response, and praying the model doesn't get confused anyway.

The Solution: The "Employee" Model (The Micro-Ontology)

There is a better way. It’s boring, it’s safe, and it mimics how human organizations actually work.

When you walk into a Hospital, you don't ask the receptionist for a pizza quote. You know by the context of the building that you are there for medical issues.

Code On Time applies this same logic to AI through the concept of the Digital Co-Worker and the Micro-Ontology.

Standing in the Right Room

Instead of a single "Global Brain," Code On Time builds a Society of Apps.

  • You have a CRM App.
  • You have a Manufacturing App.
  • You have an HR App.

Each app defines its own universe through a Micro-Ontology, delivered automatically via its HATEOAS API.

Crucially, this isn't a cryptic technical schema. The API entry point faithfully reproduces the Navigation Menu of the visible UI, complete with the same human-friendly labels and tooltips. This places the Co-Worker on the exact same footing as the human user.

Because the AI reads the exact same map as the human, it doesn't need to be "trained" on how to use the app. It just looks at the menu and follows "Sales Leads" because the tooltip says "Manage potential customers."
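
As an illustration, such an entry point could look like the sketch below. The labels, tooltips, and paths are invented for the example.

```python
# Hypothetical API entry point mirroring the app's navigation menu.
# Labels, tooltips, and paths are invented for this sketch.
entry_point = {
    "menu": [
        {"label": "Sales Leads",   "tooltip": "Manage potential customers",
         "href": "/api/leads"},
        {"label": "Opportunities", "tooltip": "Track deals in progress",
         "href": "/api/opportunities"},
        {"label": "Customers",     "tooltip": "Companies and contacts",
         "href": "/api/customers"},
    ]
}

def find_resource(entry: dict, goal: str) -> str:
    """Pick a menu item the same way a human would: by reading its label and tooltip."""
    for item in entry["menu"]:
        if goal.lower() in (item["label"] + " " + item["tooltip"]).lower():
            return item["href"]
    raise LookupError(f"No menu item matches {goal!r}")

print(find_resource(entry_point, "potential customers"))   # -> /api/leads
```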

The Mercury Incident: Solved

Let's replay the scenario with a Code On Time Digital Co-Worker.

Scenario A: The User is in the CRM App. The user logs into the CRM. The Digital Co-Worker inherits their context. The "Manufacturing" database literally does not exist in this world.

The Prompt: “What is the current status of the Mercury Lead?”

The Action: The Co-Worker queries the only "Lead" table it can see—the Sales Leads table. There is zero ambiguity.

The Outcome:

"The Lead 'Mercury Logistics' is in the 'Proposal' stage. The closing probability is 60%."

Scenario B: The User is in the Manufacturing App. The user logs into the production floor system.

The Prompt: “What is the current status of the Mercury Lead?”

The Action: The Co-Worker queries the Safety Data Sheets.

The Outcome:

"Warning: Detected 'Lead' and 'Mercury' contamination in Lot #404. Status: Quarantine."

By restricting the context to the domain of the application, the hallucination becomes mathematically impossible. The Co-Worker cannot conflate data it cannot see.

The Best of Both Worlds: Federated Scalability

But what if you need data from both systems?

This is where Federated Identity Management (FIM) comes in. It acts as the trusted hallway between your apps.

If the Sales Director deliberately wants to know whether "Mercury Logistics" has any outstanding safety violations that might block the deal, they can explicitly ask the Co-Worker to check.

The Co-Worker, using its FIM passport, "walks down the hall" to the Manufacturing App. It enters that new Micro-Ontology, performs the search in that context, and reports back.

This turns "Accidental Contamination" into "Intentional Discovery." It keeps the boundaries clear while still allowing for cross-domain intelligence.

The Verdict: Boring is Safe

The promise of a "Genius AI" that knows everything is a marketing fantasy that leads to expensive, fragile, and dangerous systems.

Enterprises don't need an AI that knows everything. They need an AI that knows where it is.

  • Global Brain: High Cost, High Risk, Unpredictable.
  • Digital Co-Worker: Low Cost, Zero Risk, Deterministic.

By embracing the "boring" architecture of isolated Micro-Ontologies, you don't just save money on tokens. You save yourself from the nightmare of explaining to a client why your AI called them toxic.

Labels: AI, Micro Ontology