Blog

How we unified the Visual Builder, the HATEOAS Debugger, and the Code Editor into a single interface for the Digital Workforce.

Thursday, November 13, 2025
The Three Modes of Rapid Agent Development

Building a Digital Workforce requires more than just a form designer. It requires a platform that allows you to see your application through multiple lenses: the eyes of the human user, the perspective of the frontend developer consuming your API, and the logic of the system architect.

In our upcoming release, the App Studio (the heart of the Code On Time platform) is evolving. The App Studio is already a state-of-the-art Rapid Application Development environment featuring live app UI inspection and the powerful App Explorer, now supercharged by the AI Tutor and Builder. We are introducing three distinct operational modes that make Rapid Agent Development (RAD) a reality.

Whether you are "vibe coding" a UI, debugging a state machine, or writing complex business logic, the Studio now adapts to your workflow.

1. App Mode: The Visual Builder

For the Human Interface.

This is the "Live Mode" our customers first experienced in the December 2024 release. You interact with your running application exactly as your end-users will.

  • Live Inspection: Click on any field, button, or grid in the running app to instantly locate it in the configuration hierarchy.
  • "Vibe Coding": Use the AI Builder to prompt changes ("Move the status field to the top," "Color code high-value orders") and watch them apply instantly to the live UI.
  • Purpose: This mode ensures that the "Invisible UI" (your Data Controllers) produces a beautiful, functional Visible UI for your human users.

2. API Mode: The Agent's View

For the AI Architect.

This is the killer feature for the agentic era. When you switch to API Mode, the "grids and forms" disappear. In their place, the App Studio renders a live, interactive "Documentation View" of your HATEOAS resources.

  • Visualizing the State Machine: Instead of guessing what the AI sees, you see it. The HATEOAS links (rel: "approve", rel: "cancel") are rendered as clickable actions.
  • Live Inspection: Just as you inspect UI elements in App Mode, you can inspect the API documentation. Click on any resource link or action in the documentation view to instantly locate its definition in the App Explorer.
  • Live Debugging: You can click through the API just like an agent (or a custom frontend developer) would. If a link is missing, you know immediately that a security rule or business logic filter has hidden it.
  • Zero-Code Interaction: You can follow the POST and PATCH links directly from the documentation with auto-complete support for JSON bodies, allowing you to test complex transactions without writing scripts.
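To make the "Documentation View" concrete, here is a hedged sketch of what a HATEOAS order resource might look like in API Mode. The `_links`/`rel` shape follows common HATEOAS conventions; the exact field names in a Code On Time response may differ.

```python
# Illustrative HATEOAS resource: action links rendered as clickable buttons.
order = {
    "orderId": 1042,
    "status": "Pending",
    "total": 18500.00,
    "_links": {
        "self":    {"href": "/v2/orders/1042", "method": "GET"},
        "approve": {"href": "/v2/orders/1042/approve", "method": "POST"},
        "cancel":  {"href": "/v2/orders/1042/cancel", "method": "POST"},
    },
}

def available_actions(resource):
    """Return the action links API Mode (or an agent) can render as buttons."""
    return [rel for rel in resource.get("_links", {}) if rel != "self"]

print(available_actions(order))  # ['approve', 'cancel']
```

If a security rule or business logic filter hides an action, its link is simply absent from `_links`, so `available_actions` never returns it; this is what makes a missing button immediately visible during live debugging.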

A Note on Postman: Trust but Verify

API Mode is optimized for exploration and design. It does not replace industry-standard tools like Postman. In fact, it is the perfect companion.

While API Mode lets you see the "Agent's World" from the inside, Postman lets you verify it from the point of view of a frontend developer. Every Code On Time app comes with a built-in OAuth 2.0 Authorization Server. You can easily connect Postman (or any similar tool) using the Authorization Code flow with PKCE to build custom front-ends, simulate a truly external mobile app, or test a third-party integration. Use API Mode to build the logic; use Postman to prove it works for the outside world.
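For readers unfamiliar with PKCE, here is a minimal sketch of the two values a client like Postman computes before starting the Authorization Code flow, per RFC 7636. Endpoint URLs are omitted; they depend on your app's Authorization Server configuration.

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # code_verifier: high-entropy random string, 43-128 URL-safe characters
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # code_challenge: base64url(SHA-256(verifier)) with padding stripped
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The client sends `challenge` (with code_challenge_method=S256) in the
# authorization request, then proves possession by sending `verifier`
# in the token request.
```

Tools like Postman generate these values for you when PKCE is enabled, but seeing the computation clarifies why a public client needs no stored secret.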

3. Workspace Mode: The Logic Lab

For the Deep Work.

Sometimes things break and you need to fix them. Or sometimes, you just need a focused environment to write code.

In the standard App and API modes, business rules and text properties open as modal popups over the live interface. This is great for quick edits, but for deep work, you need more space.

Workspace Mode decouples the development environment from the live application. It runs as a separate, stable web app on your local machine.

  • Permanent Context: The App Explorer and Business Rules are displayed as permanent tabs that remain open until you close them, rather than temporary modals.
  • The "Safe Haven": You can perform massive architectural changes that might temporarily break the runtime, all while having full access to your configuration tools.

Digital Workforce

No matter which mode you choose (App, API, or Workspace), the Digital Workforce is always available. You can invoke the Tutor for free help or the Builder for credit-powered automation at any time.

In App and API modes, the Tutor and Builder are a click away. In Workspace Mode, they get their own permanent tab, allowing you to keep a long-running "chat with your development workforce" open alongside your business logic and configuration trees.

Conclusion

Rapid Agent Development isn't just about generating code; it's about having the right visibility at the right time. With App Mode, you build for humans. With API Mode, you build for agents. With Workspace Mode, you get a traditional coding experience for deep logic that powers them both.

Welcome to the new standard of development.

image1.png

This is the "Switch Application View" menu. The App Studio remembers your selection and will activate your preferred mode of application development. If the app is broken, then the "Workspace" mode is the only way to revive it.

image2.png

The screenshot displays the App Explorer following a live inspection of the "Supplier Company Name" column header. The image shows the attached hierarchy and properties side-by-side, with the "Label" property of the "ProductName" field selected. A brief description explains the property's purpose. Tabs within the title grant quick access to "Settings", "Data", "Models", "Controllers", and "Pages". The right side of the title contains buttons for "Search", "Ask Builder", "Display Hierarchy as Table", "Split Vertically", and "Close".

Wednesday, November 12, 2025
The Fractal Workflow: How AI Builds and Runs Your App

We have talked about the AI Builder for developers and the Digital Co-Worker for end-users. At first glance, these might seem like two different tools (one for coding, one for business).

But they are actually the same engine, running on the same logic.

At Code On Time, we have built a Fractal Architecture that repeats itself at design time and runtime. The way you build the app is exactly the way your users will use it.

The Developer's Loop (Design Time)

When you sit down with the AI Builder in App Studio, the workflow is clear:

  1. Prompt: You state a goal ("Create a Sales Dashboard").
  2. Plan: The Builder analyzes the metadata and presents a Cheat Sheet (a step-by-step plan of action).
  3. Approval: You review the plan. You are in the director's seat. You click "Apply All".
  4. Execution: The Builder operates the App Explorer (the "Invisible UI" of the Studio) to create pages, controllers, and views.
  5. Result: A new feature exists.

You are never locked into the AI. At any moment, you can take the wheel and work directly with the App Explorer. Because the AI Builder uses the exact same tools and follows the exact same tutorials as a human developer, its work is transparent and editable. You can use the Tutor to learn the ropes or the Builder to speed up the heavy lifting, but the manual controls are always at your fingertips.

The User's Loop (Runtime)

When your user sits down with their Digital Co-Worker, the workflow is identical:

  1. Prompt: They state a goal ("Approve all pending orders").
  2. Plan: The Co-Worker analyzes the HATEOAS API (the metadata) and formulates a sequence of actions.
  3. Approval: The Co-Worker presents an Interactive Link or summary. "I found 5 orders. Please review and approve."
  4. Execution: The user clicks "Approve," or the Co-Worker executes the API calls directly if permitted.
  5. Result: The business process advances.

End-users have the same flexibility. They can interact with the standard rich user interface, or you can build a custom front-end powered by the HATEOAS API. The Co-Worker prompt is available everywhere: docked inside the app for context, or switched to fullscreen mode for a pure, chat-first experience. You can even configure the app to be 'Headless,' where users interact exclusively via the prompt, or remotely via Email and SMS using secure Device Authorization.

The Field Worker's Loop (Connection-Independent)

The fractal pattern extends to the very edge of the network. When your Field Workers operate in isolation, they aren't just viewing static pages; they are interacting with a complete, local instance of the application logic.

The Setup (Offline Sync): Before the loop begins, the Offline Sync component performs the heavy lifting. Upon login, it analyzes the pages marked as "Offline" and downloads their dependencies. It fetches the JSON Metadata (the compiled definitions of your Controllers) and the Data Rows (Suppliers, Products, Categories), storing them in the device's IndexedDB.

The Runtime Loop:

  1. Prompt: The user taps "New Supplier".
  2. Plan: The Touch UI framework detects the offline context. Instead of calling the server, it activates the Offline Data Processor (ODP). The ODP consults the Local Metadata (the cached JSON controller definition) to understand the form structure.
  3. Approval: The ODP generates the UI instantly. It alters the standard behavior to fit the local context: unlike online forms which require a server round-trip to establish IDs, the ODP renders the createForm1 view with the Products DataView immediately visible.
  4. Execution: The user enters the supplier name and adds five products to the child grid. The ODP simulates these operations in memory, enforcing integrity by validating the entire "Master + 5 Details" graph as a single unit before allowing the save.
  5. Result: The ODP bundles the master record and child items into a single Transaction Log sequence. It then updates the local state registry (OfflineSync.json) and persists the new data files to IndexedDB. This "checkpoint" ensures that even if the device loses power, the pending work is safe until the user taps Synchronize.
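The bundling step above can be sketched as follows. This is an illustrative model, not the actual Code On Time offline API: the function and field names (`validate_graph`, `commit_offline`, `"details"`) are invented, and the real ODP persists its log to IndexedDB rather than an in-memory list.

```python
def validate_graph(supplier, products):
    """Validate the entire Master + Details graph as a single unit."""
    if not supplier.get("CompanyName"):
        raise ValueError("Supplier name is required")
    if not products:
        raise ValueError("At least one product is required")

def commit_offline(log, supplier, products):
    """Bundle master record and child rows into one transaction-log entry."""
    validate_graph(supplier, products)  # nothing persists if validation fails
    entry = {
        "operation": "insert",
        "master": {"controller": "Suppliers", "values": supplier},
        "details": [{"controller": "Products", "values": p} for p in products],
    }
    log.append(entry)  # in the real app, a checkpoint written to IndexedDB
    return entry

log = []
commit_offline(log, {"CompanyName": "Acme"},
               [{"ProductName": f"Item {i}"} for i in range(5)])
print(len(log), len(log[0]["details"]))  # 1 5
```

Because the whole graph lands in one log entry, a power loss either preserves the complete pending transaction or none of it; there is no half-saved master without its details.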

This proves that "Offline" is not just a storage feature; it is a full-fidelity Transactional Workflow powered by the exact same metadata that drives your AI and your Web UI.

The Creator's Loop (Runtime Build)

The fractal pattern goes one step deeper. In the Digital Workforce, the line between "User" and "Developer" blurs.

With Dynamic Data Collection, your business users can define new data structures (Surveys, Audits, Inspections) directly inside the running application, using the same logic you used to build it.

  1. Prompt: The user tells the Co-Worker: "Create a daily fire safety checklist for the warehouse."
  2. Plan: The Co-Worker (acting as a runtime Builder) generates the JSON definition for the survey, effectively "coding" a new form on the fly.
  3. Approval: The user reviews the structure in the Runtime App Explorer - a simplified version of the tool you use in App Studio.
  4. Execution: The definition is saved to the database (not code), instantly deploying the new form to thousands of offline users.
  5. Result: A new business process is materialized without a software deployment.
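The "coding a new form on the fly" step might produce a definition along these lines. This is a hedged illustration: the actual Dynamic Data Collection schema in Code On Time may use different property names and a richer structure.

```python
import json

# Illustrative survey definition the runtime Builder could generate
# from the prompt "Create a daily fire safety checklist for the warehouse."
checklist = {
    "name": "Daily Fire Safety Checklist",
    "scope": "Warehouse",
    "fields": [
        {"name": "ExtinguishersAccessible", "type": "boolean", "required": True},
        {"name": "ExitsClear", "type": "boolean", "required": True},
        {"name": "Notes", "type": "text", "required": False},
    ],
}

# Deployment is a database write, not a code release: the definition is
# stored as data and becomes available to every (online or offline) user.
definition_row = {
    "SurveyName": checklist["name"],
    "Definition": json.dumps(checklist),
}
```

Storing the form as data rather than code is what lets a new business process reach thousands of users without a software deployment.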

This proves that the Axiom Engine isn't just a developer tool; it is a ubiquitous creation engine available to everyone in your organization.

Powered by the Axiom Engine

This symmetry is not an accident. It is the Axiom Engine in action.

  • For the Developer: The Axiom Engine navigates the App Structure (Controllers, Pages) to build the software.
  • For the User: The Axiom Engine navigates the App Data (Orders, Customers) to run the business.

By learning to build with the AI, you are simultaneously learning how to deploy it. You aren't just coding; you are training the workforce of the future using the exact same patterns you use to do your job.

You Are the Director

In this fractal architecture, the role of the human (whether developer or end-user) shifts from "Operator" to "Director."

You are not being replaced; you are being promoted. The AI cannot do anything that isn't defined in the platform's "physics."

  • On the Build Side: The App Explorer is the boundary. The AI Builder cannot invent features that don't exist in the App Studio. It can only manipulate the explorer nodes that you can manipulate yourself.
  • On the Run Side: The HATEOAS API is the boundary. The AI Co-Worker cannot invent business actions that aren't defined in your Data Controllers. It can only click the links that you have authorized.
    • However, within that boundary, you have 100% Data Utility. Because the Agent sees exactly what you see, it can answer specific questions like "What is Rob's number?" immediately, provided you have permission to view that data.

The AI provides the labor, but you provide the intent. You direct the show, confident that the actors can only perform the script you wrote.

Labels: AI
Tuesday, November 11, 2025
Digital Co-Worker or Genius?

For over a decade, Code On Time has been the fastest way to build secure, database-driven applications for humans. The industry calls this Rapid Application Development (RAD). But recently, we realized that the rigorous, metadata-driven architecture we built for humans is also the perfect foundation for something much more powerful.

Today, we are announcing a shift in our vision. We are not just building interfaces for people anymore. We are evolving from a RAD tool for web apps into a RAD for the Digital Workforce. The same blueprints that drive your user interface are now the key to unlocking the next generation of autonomous, secure Artificial Intelligence.

The Digital Co-Worker (The "Glass Box")

Imagine an app that looks like ChatGPT. This app executes every prompt as if it is operating the "invisible UI" of your own database. Just like the human user, it inspects the menu options, selects data items, presses buttons, and makes notes as it goes. Then it reports back by arranging the notes into an easy-to-understand summary.

This is possible because a developer has designed the app with a real UI for your database. Both the Digital "Co-Worker" and the human UI are built from the exact same "blueprints" (called data controllers). These blueprints define the data, actions, and business logic for your application. When a user logs in (using their organization's existing security), the AI "digital employee" inherits your exact identity, meaning it sees only what you see and can only perform the actions available to you.

The AI "navigates" a system that has already been "security-trimmed" by user roles and simple, declarative SQL-based rules. This means if you aren't allowed to see "Salary" data, the AI is never shown the "Salary" option - it doesn't exist for that session. A "heartbeat" process allows these tasks to run 24/7, and the AI's "notes" (its step-by-step log) create a perfect, unchangeable audit trail of every decision it has made.
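The "security-trimmed" navigation described above can be sketched as a link filter that runs on the server before the response is built. The role and permission names here are invented for illustration; the real engine derives them from user roles and declarative SQL-based rules.

```python
# Hypothetical mapping of action links to the roles allowed to see them.
PERMISSIONS = {
    "approve": {"Manager"},
    "cancel": {"Manager", "Clerk"},
    "self": set(),  # empty set = always visible
}

def trim_links(links, user_roles):
    """Remove links the user's roles don't permit, before the AI sees them."""
    visible = {}
    for rel, link in links.items():
        required = PERMISSIONS.get(rel, set())
        if not required or required & user_roles:
            visible[rel] = link
    return visible

links = {
    "self": {"href": "/orders/7"},
    "approve": {"href": "/orders/7/approve"},
    "cancel": {"href": "/orders/7/cancel"},
}
print(sorted(trim_links(links, {"Clerk"})))    # ['cancel', 'self']
print(sorted(trim_links(links, {"Manager"})))  # ['approve', 'cancel', 'self']
```

Because trimming happens before the response leaves the server, a forbidden action is not merely disabled for the AI; for that session it simply does not exist.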

The Genius (The "Black Box")

Imagine another app that also looks like ChatGPT. To understand your database, this app employs a powerful, sophisticated AI model as its "brain". It operates by first consulting a comprehensive "manifest" - a detailed catalog of every "tool" and data entity it can access. This allows the AI to have a full, upfront understanding of its capabilities, so when you submit a prompt, it can process this entire catalog to create a complete, multi-step plan in a single "one-shot" operation.

This architecture is often built as a flexible, component-based system, which involves deploying several specialized services: one for the chat UI, another for the AI's "brain" (the orchestrator), and a dedicated "server" for each tool. Security is an explicit and granular consideration, requiring careful, deliberate configuration. Each tool-server's permissions must be managed, and the AI "brain" is trusted to orchestrate these tools correctly. This design allows for fine-tuning access (like "read/write all customer data") but means that security and prompt-based access must be actively managed and secured.

This "one-shot" planning model has a clear cost structure: the primary charge is for the single, complex "planning" call to the sophisticated "brain" model, which is required for every prompt. The success of the entire operation relies on the quality of this initial plan. If the AI's plan contains an error (for example, using incorrect database filter syntax) the operation may not complete as intended, and the cost of the "planning" call is incurred. This model prioritizes a powerful, upfront planning phase and depends on the AI's reasoning to be correct the first time.

How to Choose: The Auditable Co-Worker or the "Black Box" Genius

Your choice between the "Digital Co-Worker" and the "Genius" architecture is a strategic decision about what you value most: trust and durability or raw, unconstrained reasoning. The "Digital Co-Worker," built on the Code On Time framework, is an "invisible UI" operator. Its primary strength is its security-by-design. Because it inherits the user's exact, security-trimmed permissions, it is impossible for it to access data or perform actions it isn't allowed to. It operates within a "fenced-in yard" defined by your business rules. This makes it the perfect, auditable solution for the real-world workflows that require a quick response or need to run reliably for days or even months.

The "Genius" model, built on LLM+MCP, is a "one-shot" planner. Its primary strength is its power to reason over a massive, pre-defined database "map". It's designed for highly complex, one-time questions where the "planning" is the hardest part. This power comes at the cost of security and predictability; you are trusting a "black box" with a full set of tools, and its complex plans can be brittle, expensive, and difficult to audit. This model is best suited for scenarios where the sheer "intelligence" of the answer is more important than the security and durability of the process.

For a business, the choice is clear. The "Digital Co-Worker" is a platform you can build your entire company on. This is where it has a huge advantage: it can operate with a smart model for deep reasoning, but it also works perfectly with a fast, lightweight, and cheap model for 99% of tasks. The "Genius" model, by contrast, requires the most expensive model just to parse its complex manifest. Furthermore, the "Genius" model requires a massive upfront investment, potentially costing hundreds of thousands of dollars in custom development, integration, and security engineering before the first prompt is ever entered. The "Digital Co-Worker" platform, with its "BYOK" model and 100 free digital co-workers, makes it a risk-free, frictionless way to adopt a true workforce multiplier.

The Digital Co-Worker is Not a Chatbot

It is easy to mistake the "Digital Co-Worker" for a chatbot because they both speak your language. However, the difference is fundamental. As industry experts note, standard chatbots are "all talk and no action." They are engines of prediction, trained to guess the next word in a sentence based on frozen knowledge from the past. They can summarize a meeting or write a poem, but they are fundamentally passive observers that cannot touch your business operations.

The Digital Co-Worker is different because it is agentic. It is defined not by what it says, but by its ability to take actions autonomously on a person's behalf. When you give a chatbot a task, it tells you how to do it. When you give a Digital Co-Worker a task, it does it. It acts as an "autonomous teammate," capable of breaking down a high-level goal (like "review all pending orders and expedite shipping for anything delayed by more than two days") into a series of concrete steps and executing them without needing you to hold its hand.

This distinction changes the return on investment entirely. A chatbot is a tool for drafting text; a Digital Co-Worker is a tool for finishing jobs. It doesn't just help you draft an email to a client; it finds the client in the database, checks their order status, drafts the response, and with your permission, sends it. It moves beyond conversation into orchestration, bridging the gap between your intent and the complex reality of your database transactions.

The Co-Worker's "Glass Box": A Look Inside the HATEOAS State Machine

The "AI Co-Worker" operates by acting as a "digital human," using the application's REST Level 3 (HATEOAS) API as its "invisible UI." The entire process is driven by a built-in State Machine (SM). When a prompt is submitted, the SM's "heartbeat" processor wakes up. Its only "worldview" is the HATEOAS API response. It uses a fast, lightweight LLM (like Gemini Flash) to read the _links (the "buttons") and hints (the "tooltips") to decide the next logical step. As it works, it "makes notes" in its state_array, which serves as both its "memory" and a perfect, unchangeable audit log. This is how it auto-corrects: if an API call fails, the API returns the error with the _schema, which is just the next "note" in the log, allowing the AI to build a correct query in the next iteration.
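The heartbeat loop described above can be sketched as follows. The HATEOAS client (`fetch`) and the lightweight LLM (`choose_next_step`) are stubbed out here; the function names and exact state shape are illustrative, not the actual State Machine implementation.

```python
def heartbeat(task, fetch, choose_next_step, max_iterations=10):
    """One agent run: follow links until done, noting every step."""
    state_array = []  # the agent's "notes": working memory and audit log
    resource = fetch(task["start_url"])
    for _ in range(max_iterations):
        state_array.append({"seen": resource})
        # The lightweight LLM reads _links (the "buttons") and hints
        # (the "tooltips") and picks the next link, or decides it is done.
        step = choose_next_step(resource, state_array)
        if step is None:
            break
        result = fetch(step["href"])
        if "error" in result:
            # A failed call returns the error with its _schema; it becomes
            # just the next note, so the model can build a corrected
            # request on the following iteration (auto-correction).
            state_array.append({"error": result["error"],
                                "schema": result.get("_schema")})
        resource = result
    return state_array
```

Note that the loop never consults anything except the latest response and its own notes: the HATEOAS payload is the agent's entire worldview, which is why the `state_array` doubles as a complete, replayable audit trail.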

This "glass box" model is inherently secure. The HATEOAS API is not a static catalog; it is "security-trimmed" by the server before the AI ever sees it. The app's engine uses declarative rules (like SACR) to filter the data and remove links to any actions the user isn't allowed to perform. If you don't have permission to "Approve" an order, the Digital Co-Worker will not see an "approve" link. The guardrails are not a suggestion; they are an architectural-level boundary, making it impossible for the AI to go rogue.

This architecture also enables true, durable autonomy. The "heartbeat" that runs the SM is designed to handle tasks that last for months. A user can "pause" or "resume" an agent simply by issuing a new prompt, as the AI can see and follow the pause link on its own "task" resource. Because the AI can also discover links to create new prompts (e.g., rel: "create_new_prompt" in the menu), a "smart" agent can decompose a complex prompt ("review 500 contracts") into 500 "child" tasks, which the heartbeat then patiently executes in parallel.

Beyond the Database: The Universal Interface

The power of the Digital Co-Worker extends far beyond the SQL database. The same "blueprints" (data controllers) that define your customer tables can also define "API Entities" (virtual tables that connect to external systems like SharePoint, Google Drive, or third-party CRMs).

To the AI, these external sources look exactly like the rest of the "invisible UI." It doesn't need to learn a new API, manage complex keys, or navigate different security protocols. It simply follows a link to "Documents" or "Spreadsheets" in its menu, and the application's engine handles the complex connection logic behind the scenes, presenting the external data as just another set of rows and actions.

This solves the single hardest problem in enterprise AI: secure access to unstructured data. Just like with the database, the system applies declarative security rules to these external sources. If a user is only allowed to see SharePoint files they created, the Digital Co-Worker will only discover those specific files. It enables a secure, federated search and action capability (allowing the AI to "read" a contract PDF and "update" a database record in one smooth motion) without ever exposing the organization's entire document repository to a "black box."

The Future is Built-In: Rapid Agent Development

The age of the expensive, brittle "Genius" AI is ending. The age of the secure, durable "Digital Co-Worker" has arrived. We believe that building a Digital Workforce shouldn't require a team of data scientists and six months of integration; it should be a standard feature of your application platform.

In our upcoming releases, we are delivering the tools to make this a reality. By simply building your application as you always have, you will be simultaneously architecting the secure, HATEOAS-driven environment where your Digital Co-Workers will live and work, powered by the Axiom Engine. Your database is ready to talk. Stay tuned for our updated roadmap - the workforce is coming under the full control and permission of the human user.

Labels: AI, RESTful