
Introducing Scribe Mode: The Digital Consultant that listens, thinks, and builds your application at the speed of conversation.

Monday, November 17, 2025
Feature Spotlight: Meet the Scribe

The "Digital Consultant" That Listens, Thinks, and Builds.

We are thrilled to introduce Scribe Mode, the newest persona in the App Studio. If the Tutor is your teacher and the Builder is your engineer, the Scribe is your silent partner in the room.

For years, consultants and architects have faced the same friction: the "Translation Gap." You spend an hour brilliantly brainstorming requirements with a client, but when the meeting ends, you are left with messy notes and a blank screen.

The Scribe eliminates the blank page. It acts as a Prompt Compiler, listening to your conversation, filtering out the noise, and constructing the application in the background while you talk.

How It Works: The "Clarity Gauge"

The Scribe isn't just a voice recorder; it is a Real-Time Requirements Engine.

  1. Ambient Listening: Switch to "Scribe Mode" and hit Record. The Scribe uses your browser’s native speech engine to transcribe the meeting in real-time.
  2. The Director’s Remark: Need to steer the AI? You can type "corrections" or "technical specifics" directly into the chat buffer while the recording continues. The Scribe treats your typed notes as high-priority instructions to override or clarify the spoken text.
  3. The Ephemeral Cheat Sheet: When you pause to think (or hit Stop), the Scribe analyzes the conversation against your app’s live metadata. It generates a Cheat Sheet—a proposed plan of action.
    • The Magic: If you keep talking, the Cheat Sheet vanishes and rebuilds. It is a living "Clarity Gauge." If the Cheat Sheet looks right, you know the AI (and the room) is aligned. If it looks wrong, you just keep talking to fix it.
  4. Instant Materialization: Click "Apply All," and the App Studio executes the plan. By the time your client returns from a coffee break, the features you discussed are live in the Realistic Model App (RMA).

The Scribe turns "Talk" into "Software" at the speed of conversation.
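Under the hood, step 1 relies on nothing more exotic than the browser's own speech engine. Here is a minimal sketch of that kind of ambient capture using the standard Web Speech API; the onTranscript callback is a hypothetical stand-in for whatever the Scribe actually does with the text, not the product's internal code:

```typescript
// Minimal ambient-transcription sketch using the browser's Web Speech API.
// SpeechRecognition is exposed as webkitSpeechRecognition in Chromium browsers.
// The onTranscript callback is a hypothetical stand-in for the Scribe's own handling.
type TranscriptHandler = (text: string, isFinal: boolean) => void;

function startAmbientListening(onTranscript: TranscriptHandler) {
  const Recognition =
    (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;
  if (!Recognition) {
    throw new Error("Speech recognition is not supported in this browser.");
  }

  const recognizer = new Recognition();
  recognizer.continuous = true;      // keep listening across pauses
  recognizer.interimResults = true;  // stream partial phrases as they form

  recognizer.onresult = (event: any) => {
    for (let i = event.resultIndex; i < event.results.length; i++) {
      const result = event.results[i];
      onTranscript(result[0].transcript, result.isFinal);
    }
  };

  recognizer.start();
  return () => recognizer.stop(); // caller invokes this when the user hits "Stop"
}
```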

For Our Consultants and Partners: Your New Superpower

If you build apps for clients, Scribe Mode is your new competitive advantage. It transforms you from a note-taker into an Architect who delivers results in the room.

  • Win the Room: Don't take notes; take action. Use Scribe Mode during the discovery meeting to generate a working prototype in real-time. Show your client the software they asked for before the meeting ends.
  • Instant Proposals: Use the Builder to generate a technical SRS (Software Requirements Specification) and LOE (Level of Effort) estimation based on the meeting transcript. Turn a 30-minute chat into a professional proposal instantly.
  • The "Wizard" Effect: You remain the expert. The Scribe handles the typing, configuration, and schema design, freeing you to focus on strategy and client relationships.

Choose Your Partner: Tutor vs. Builder vs. Scribe

Role
  • The Tutor: The Mentor (Teacher)
  • The Builder: The Engineer (Maker)
  • The Scribe: The Silent Partner (Listener)

Best For...
  • The Tutor: Learning "how-to," navigating the studio, and troubleshooting errors.
  • The Builder: Executing complex tasks, generating schemas, and building specific features instantly.
  • The Scribe: Stakeholder meetings, "rubber duck" brainstorming, and capturing requirements in real time.

Interaction
  • The Tutor: Conversational. Ask questions like "How do I filter a grid?"
  • The Builder: Directive. Give commands like "Create a dashboard for Sales."
  • The Scribe: Ambient. Runs in the background and listens to voice (mic) or accepts unstructured notes.

Input Type
  • The Tutor: Natural language questions.
  • The Builder: Precise instructions and prompts.
  • The Scribe: Stream of consciousness (voice or text) plus "Director's Remarks."

Output
  • The Tutor: Explanations and navigation pointers (guides you to the screen).
  • The Builder: Cheat Sheet with executable steps and an "Apply All" button.
  • The Scribe: Ephemeral Cheat Sheet (a self-correcting plan) and an "Apply All" button.

Context
  • The Tutor: Knows the documentation (Service Manual) and your current screen location.
  • The Builder: Knows your entire project structure (schema, controllers, pages) to generate valid code.
  • The Scribe: Knows your project structure and synthesizes the entire conversation history into a final plan.

Cost
  • The Tutor: Free (included in all editions).
  • The Builder: Paid (consumes Builder Credits).
  • The Scribe: Paid (consumes Builder Credits for synthesis).

Summary: Which Mode Do I Need?

  • Use Tutor when you want to do it yourself or do not want to consume credits, but need a map.
  • Use Builder when you know what you want and want the AI to do it for you. Requires Builder Credits.
  • Use Scribe when you are figuring it out with a client or team and want the app to materialize as you speak. Requires Builder Credits.
Friday, November 14, 2025
The "Mercury" Incident: Why the Global AI Brain is Dangerous

Imagine the scene. A top Sales Director at a major industrial firm opens their AI assistant. They are looking for a critical status update on a potential new client, "Mercury Logistics."

The Director types: "What is the current status of the Mercury Lead?"

The AI pauses for a moment, its "thinking" animation spinning, and then replies with supreme confidence:

"The Mercury Lead is currently unstable and highly toxic. Safety protocols indicate a high risk of contamination during the negotiation phase. Recommend immediate containment protocols."

The Sales Director stares at the screen in horror. Did they just tell the AI to treat a high-value client like a biohazard?

What happened?

The AI didn't break. It did exactly what it was designed to do. It acted as a "Global Brain," searching the company's entire centralized Data Lake for the keywords "Mercury" and "Lead."

The problem was that the company also has a Manufacturing Division that uses the chemical elements Mercury (Hg) and Lead (Pb) in production testing. The AI, lacking context, conflated a "Sales Lead" with a "Heavy Metal," resulting in a catastrophic hallucination.

This is the "Mercury" Incident—a perfect example of why the industry's obsession with monolithic, all-knowing AI systems is a dangerous dead end for the enterprise.

The Problem with the "Genius" Model (The Global Ontology)

The current trend in enterprise AI is to build a "Genius." The promise is seductive: "Dump all your data—from Salesforce, SAP, Jira, and SharePoint—into one massive Vector Database or Data Lake. The AI will figure it out."

This creates a Global Ontology—a unified, but deeply confused, view of the world.

The Failure Mode: Semantic Ambiguity

The root cause of the "Mercury" Incident is Semantic Ambiguity. In a global context, words lose their meaning.

  • In Sales, "Lead" means a potential customer.
  • In Manufacturing, "Lead" means a toxic metal.
  • In HR, "Lead" means a team manager.

When you force an AI to reason over all of these simultaneously, you are inviting disaster. The AI has to guess which definition applies based on subtle clues in your prompt. If it guesses wrong, it hallucinates.

The Hidden Cost: Token Bloat

To fix this, developers have to engage in "Prompt Engineering," feeding the model thousands of words of instructions: "You are a Sales Assistant. When I say 'Lead', I mean a customer, NOT a metal. Ignore data from the Manufacturing database..."

This is expensive. Every time you send that massive instruction block, you are paying for thousands of tokens, slowing down the response, and praying the model doesn't get confused anyway.

The Solution: The "Employee" Model (The Micro-Ontology)

There is a better way. It’s boring, it’s safe, and it mimics how human organizations actually work.

When you walk into a Hospital, you don't ask the receptionist for a pizza quote. You know by the context of the building that you are there for medical issues.

Code On Time applies this same logic to AI through the concept of the Digital Co-Worker and the Micro-Ontology.

Standing in the Right Room

Instead of a single "Global Brain," Code On Time builds a Society of Apps.

  • You have a CRM App.
  • You have a Manufacturing App.
  • You have an HR App.

Each app defines its own universe through a Micro-Ontology, delivered automatically via its HATEOAS API.

Crucially, this isn't a cryptic technical schema. The API entry point faithfully reproduces the Navigation Menu of the visible UI, complete with the same human-friendly labels and tooltips. This places the Co-Worker on the exact same footing as the human user.

Because the AI reads the exact same map as the human, it doesn't need to be "trained" on how to use the app. It just looks at the menu and follows "Sales Leads" because the tooltip says "Manage potential customers."
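To make this concrete, here is a hypothetical sketch of what such an entry point might look like. The field names and URLs are illustrative only, not the actual Code On Time wire format; the point is that the labels and tooltips match the human menu exactly:

```typescript
// Hypothetical shape of a Micro-Ontology entry point that mirrors the app's
// navigation menu. Field names and URLs are illustrative, not the actual wire format.
const crmEntryPoint = {
  app: "CRM",
  links: [
    {
      rel: "sales-leads",
      label: "Sales Leads",                  // same label the human sees in the menu
      tooltip: "Manage potential customers", // same tooltip as the visible UI
      href: "/api/crm/sales-leads"
    },
    {
      rel: "opportunities",
      label: "Opportunities",
      tooltip: "Track deals in progress",
      href: "/api/crm/opportunities"
    }
    // No link to Manufacturing data exists in this context, so a Co-Worker
    // operating here cannot even discover it.
  ]
};
```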

The Mercury Incident: Solved

Let's replay the scenario with a Code On Time Digital Co-Worker.

Scenario A: The User is in the CRM App. The user logs into the CRM. The Digital Co-Worker inherits their context. The "Manufacturing" database literally does not exist in this world.

The Prompt: “What is the current status of the Mercury Lead?”

The Action: The Co-Worker queries the only "Lead" table it can see—the Sales Leads table. There is zero ambiguity.

The Outcome:

"The Lead 'Mercury Logistics' is in the 'Proposal' stage. The closing probability is 60%."

Scenario B: The User is in the Manufacturing App. The user logs into the production floor system.

The Prompt: “What is the current status of the Mercury Lead?”

The Action: The Co-Worker queries the Safety Data Sheets.

The Outcome:

"Warning: Detected 'Lead' and 'Mercury' contamination in Lot #404. Status: Quarantine."

By restricting the context to the domain of the application, the hallucination becomes mathematically impossible. The Co-Worker cannot conflate data it cannot see.

The Best of Both Worlds: Federated Scalability

But what if you need data from both systems?

This is where Federated Identity Management (FIM) comes in. It acts as the trusted hallway between your apps.

If the Sales Director intentionally needs to know if "Mercury Logistics" has any outstanding safety violations that might block the deal, they can explicitly ask the Co-Worker to check.

The Co-Worker, using its FIM passport, "walks down the hall" to the Manufacturing App. It enters that new Micro-Ontology, performs the search in that context, and reports back.

This turns "Accidental Contamination" into "Intentional Discovery." It keeps the boundaries clear while still allowing for cross-domain intelligence.
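As a rough illustration, an intentional cross-domain check might look something like the sketch below. The endpoints, labels, and the token-exchange helper are all hypothetical; the real mechanics depend on your identity provider and app configuration:

```typescript
// Hypothetical sketch of an intentional cross-domain check via federated identity.
// Endpoints, labels, and the token-exchange helper are illustrative only.
async function checkSafetyViolations(company: string, crmToken: string) {
  // 1. Exchange the CRM session for a token the Manufacturing app will accept.
  const mfgToken = await exchangeToken(crmToken, "manufacturing-app");

  // 2. Enter the Manufacturing Micro-Ontology through its own entry point.
  const entry = await fetch("https://mfg.example.com/api/", {
    headers: { Authorization: `Bearer ${mfgToken}` }
  }).then(r => r.json());

  // 3. Follow the "Safety Data Sheets" link published in that context.
  const sds = entry.links.find((l: any) => l.label === "Safety Data Sheets");
  if (!sds) return "No access to safety records."; // a guardrail, not an error

  // 4. Perform the search in the Manufacturing context and report back.
  return fetch(`${sds.href}?customer=${encodeURIComponent(company)}`, {
    headers: { Authorization: `Bearer ${mfgToken}` }
  }).then(r => r.json());
}

// Hypothetical helper standing in for whatever token exchange your IdP provides.
declare function exchangeToken(token: string, audience: string): Promise<string>;
```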

The Verdict: Boring is Safe

The promise of a "Genius AI" that knows everything is a marketing fantasy that leads to expensive, fragile, and dangerous systems.

Enterprises don't need an AI that knows everything. They need an AI that knows where it is.

  • Global Brain: High Cost, High Risk, Unpredictable.
  • Digital Co-Worker: Low Cost, Zero Risk, Deterministic.

By embracing the "boring" architecture of isolated Micro-Ontologies, you don't just save money on tokens. You save yourself from the nightmare of explaining to a client why your AI called them toxic.

Labels: AI, Micro Ontology
Friday, November 14, 2025
The Shortest Path to Agentic AI is the App You Already Know How to Build

Every CIO is currently asking the same question: "Which of our processes are addressable by AI?"

The answer is usually: "All of them, but it will cost millions to build the custom agents to do it."

The industry has convinced us that building AI Agents requires a completely new stack: Vector Databases, Python orchestration, RAG pipelines, and complex prompt engineering. For the average developer, this looks like a mountain of new skills to learn and a minefield of security risks to manage.

But what if the perfect environment for an AI Agent wasn't a vector database? What if it was the Traditional Web Application you have been building for decades?

The Digital Workforce Platform makes it possible.

Good for Humans = Good for AI

At Code On Time, our philosophy is simple: If an application is easy for a human to understand, it is easy for an AI to operate.

Think about what makes a "Traditional App" good for a human user:

  1. Clear Navigation: A menu structure that groups related tasks.
  2. Structured Data: Grids that allow filtering, sorting, and finding specific records.
  3. Safety Rails: Forms that validate input and buttons that only appear when an action is allowed.

It turns out, these are the exact same things an AI Agent needs to function reliably.

How the Platform Works: The "Invisible" Translation

When you build a standard database application with the Builder Edition, you aren't just drawing screens for people. You are constructing the Axiom Engine's map.

  1. You Build for Humans: You connect your database (SQL Server, Oracle, etc.). You arrange the navigation. You define "Friendly" labels for your cryptic columns. You set up the security roles.
  2. We Build for Agents: The Axiom Engine automatically projects your human UI into a HATEOAS API (the "Invisible UI").
    • Your Menu becomes the Agent's "Map."
    • Your Grids become the Agent's "Search Engine."
    • Your Forms become the Agent's "Toolbox."
    • Your Security Roles become the Agent's "Guardrails."

You don't need to "teach" the AI how to approve an order. You just build the "Approve" button for your human manager. The Axiom Engine automatically teaches the Digital Co-Worker how to press it.
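To picture what "pressing the button" looks like for an agent, consider the following rough sketch. The URLs, labels, and field names are hypothetical stand-ins for whatever the projected API actually emits; the shape of the walk (menu, grid, form action) is the point:

```typescript
// Rough sketch of an agent "pressing the Approve button" by following links.
// URLs, labels, and field names are hypothetical stand-ins for the projected API.
async function approveOrder(orderId: number, userToken: string) {
  const headers = { Authorization: `Bearer ${userToken}` };

  // The menu becomes the map: start at the entry point the human user would see.
  const entry = await fetch("/api/", { headers }).then(r => r.json());
  const orders = entry.links.find((l: any) => l.label === "Orders");
  if (!orders) throw new Error("This user cannot see Orders at all.");

  // The grid becomes the search engine: locate the specific record.
  const order = await fetch(`${orders.href}/${orderId}`, { headers }).then(r => r.json());

  // The form becomes the toolbox: the "Approve" action is only present
  // when the human user would also be allowed to press it.
  const approve = order.actions?.find((a: any) => a.label === "Approve");
  if (!approve) throw new Error("Approve is not available to this user.");

  return fetch(approve.href, { method: "POST", headers });
}
```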

Inherently Secure by Design

In most "Agentic" architectures, the AI is given a "Master Key" (a service account) to the database, and the developer has to write complex prompts to tell it what not to do. This is a recipe for disaster.

Code On Time flips this model. The Digital Co-Worker doesn't run as a "System Administrator"; it runs as the user.

  • Identity Inheritance: If a Sales Representative logs in and asks the Co-Worker for help, the Agent inherits that Rep’s exact identity, roles, and access tokens.
  • Zero Trust Navigation: The Agent cannot "hallucinate" a path to sensitive HR data because the HATEOAS API simply does not render those links for that user. It isn't just that the Agent shouldn't click them; it's that the Agent cannot see them.

This means your existing security investment (your roles, your permissions, and your Static Access Control Rules) instantly applies to your AI workforce. You don't need to build a separate "AI Firewall"; your application is the firewall.
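A small illustrative sketch of why the application itself is the firewall: the guardrail is the absence of a link, not a rule buried in a prompt. The endpoint and the example outcomes below are hypothetical:

```typescript
// Illustrative sketch: a Co-Worker acting for a Sales Rep never receives an
// HR link in the entry point, so there is nothing to click and no path to
// hallucinate. The endpoint and example outcomes are hypothetical.
async function visibleAreas(userToken: string): Promise<string[]> {
  const entry = await fetch("/api/", {
    headers: { Authorization: `Bearer ${userToken}` } // the agent runs as the user
  }).then(r => r.json());
  return entry.links.map((l: any) => l.label);
}

// Hypothetical outcomes:
//   await visibleAreas(salesRepToken)  -> ["Customers", "Sales Leads", "Orders"]
//   await visibleAreas(hrManagerToken) -> ["Employees", "Compensation", "Reviews"]
```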

No New Skills Required

This approach solves the "AI Skills Gap" overnight.

  • You don't need to be a Prompt Engineer. You have the AI Tutor and AI Builder inside the App Studio to help you configure the app using natural language. If you are an experienced Code On Time developer, you can operate the brand new App Explorer on day one, as you already know your way around.
  • You don't need to be a Security Expert. Since the Agent runs as the user, it inherits the exact same security rules you defined for the human. It physically cannot see data or perform actions that the user couldn't do themselves.
  • You don't need a massive budget. You are running on your own infrastructure ("localhost" or on-prem server), connecting to your own database, using your own LLM key. There is no "middleman tax."

Scalable by Default

The "Traditional App" model is inherently scalable because it is stateless. Whether you have one human user or 100 Digital Co-Workers hitting your database, the application server handles the load using standard, proven web technologies.

Because the Agent is "browsing" your app just like a human (but faster), you can debug it just like a human. If the Agent gets stuck, you don't need to inspect a neural network; you just switch to API Mode in the App Studio and see exactly which link was missing.

Any Model, Any Budget

Because the Axiom Engine (a built-in component of your app) provides such a structured, deterministic environment, you are not locked into using the most expensive "Genius" models for every task.

In a typical "Genius" architecture, you need a massive model (like GPT-4o or Gemini 2.5 Pro) just to understand the chaotic environment. In the Digital Workforce Platform, the environment is already organized.

  • Use Cheap Models for Speed: Because the HATEOAS API provides clear "Next Step" links, smaller, faster, and cheaper models (like Gemini Flash) can navigate your app with incredible accuracy and speed.
  • Use Smart Models for Reasoning: Save the expensive models for complex analysis, while letting the cheap models handle the navigation and data entry.
  • Bring Your Own Key (BYOK): You choose the model provider. Whether it’s OpenAI, Google, or Anthropic, the platform adapts. You are never locked into a vendor, and you benefit immediately as model prices drop.
  • Role-Based Governance: You define the rules of engagement. In the app settings, you can assign premium models to high-value roles (like "Managers") while restricting broader roles to cost-effective models. You can even set hard limits on cost-per-prompt and execution time so your AI budget never breaks the bank.
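For illustration, role-based governance might be expressed as configuration along these lines. The property names and values are hypothetical, not the platform's actual settings schema:

```typescript
// Hypothetical role-to-model governance settings. Property names and values
// are illustrative, not the actual app settings schema.
const aiGovernance = {
  provider: "byok",                      // bring your own key: OpenAI, Google, Anthropic, ...
  roles: {
    Managers: {
      model: "premium-reasoning-model",  // reserve expensive models for high-value roles
      maxCostPerPrompt: 0.25,            // USD, hard cap per request
      maxExecutionSeconds: 120
    },
    Everyone: {
      model: "fast-lightweight-model",   // cheap models handle navigation and data entry
      maxCostPerPrompt: 0.02,
      maxExecutionSeconds: 30
    }
  }
};
```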

Conclusion

The industry is trying to reinvent the wheel by building complex "Agentic Frameworks" from scratch. We believe the wheel was already working fine.

By treating your Traditional Application as the source of truth, Code On Time turns your existing development team into AI Architects. You don't need to rebuild your business for AI. You just need to give your database a voice.

Download the Builder Edition today, connect your database, and meet your new Co-Workers.