RUAL Documentation

Block Execution

Understand how blocks execute within blueprint flows: dataflow between blocks, parallel execution of branches, error handling patterns, and debugging techniques.

Dataflow Between Blocks

In RUAL, data moves between blocks through pin connections. Each block can have input pins (in-pins) and output pins (out-pins). When you connect an out-pin of one block to an in-pin of another, you establish a data pathway. The system uses two distinct types of connections:

  • Flow pins: These control when a block executes. A block with a flow in-pin will only execute when it receives the flow signal from a preceding block.
  • Data pins: These carry typed data (value, number, condition, object, array, date, file, query, mutations, etc.) between blocks. Data pins define what information a block receives or produces.

Blocks that do not have a flow in-pin — such as value_default, condition_is_true, or query_bool_filter — are evaluated when the block they connect to needs their data. This means data-only blocks are resolved on demand as part of the execution chain.
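Since RUAL flows are built visually, the on-demand resolution of data-only blocks can only be illustrated by analogy. The following Python sketch (the `DataBlock` class and all names are illustrative, not RUAL APIs) shows the idea: a data-only block behaves like a deferred computation that is evaluated only when a connected block needs its value.

```python
# Conceptual sketch (Python analogy, not RUAL code): data-only blocks
# behave like lazily evaluated expressions, resolved on demand.

class DataBlock:
    """Illustrative stand-in for a data-only block such as value_default."""
    def __init__(self, compute):
        self._compute = compute  # deferred computation, not yet evaluated

    def resolve(self):
        # Evaluated only when a connected block asks for the value.
        return self._compute()

# A flow block pulls its in-pin values at execution time:
default_name = DataBlock(lambda: "untitled")

def flow_block_execute(name_pin: DataBlock) -> str:
    value = name_pin.resolve()  # data pin resolved here, not earlier
    return f"document: {value}"

print(flow_block_execute(default_name))  # document: untitled
```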

Execution Order

Execution follows the flow pin connections. When a flow starts — for example from a trigger_custom_function block — it sends a flow signal to the next connected block. That block executes, then passes the flow signal onward.

When a block has multiple flow out-pins, or when one flow out-pin connects to multiple blocks, the resulting branches execute in parallel. This means RUAL does not wait for one branch to finish before starting another.

Parallel Execution

When a flow splits into multiple paths, all branches execute simultaneously. If you need to ensure a specific execution order, connect your blocks sequentially through a single flow path. For operations that must not run concurrently, consider using lock & wait blocks.
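As a rough analogy (again in Python, not RUAL), a flow split behaves like starting two threads: both branches begin immediately, and their completion order is not guaranteed. Ordering requires chaining the work through a single path, just as sequential flow connections do in a blueprint.

```python
# Conceptual sketch (Python analogy, not RUAL code): two branches fed by
# the same flow out-pin run concurrently.
import threading

results = []
results_lock = threading.Lock()

def branch(name: str):
    with results_lock:
        results.append(name)

# Flow splits: both branches start without waiting on each other.
t1 = threading.Thread(target=branch, args=("branch_a",))
t2 = threading.Thread(target=branch, args=("branch_b",))
t1.start(); t2.start()
t1.join(); t2.join()

# Both branches ran, but their relative order is not guaranteed.
# When order matters, run the steps one after another on a single path.
print(sorted(results))  # ['branch_a', 'branch_b']
```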

Precompiled Data

To keep block execution fast, RUAL precompiles block output values ahead of execution. This optimized data is called precompiled data. Each time a blueprint is saved, all production nodes discard their existing precompiled data for it. If a blueprint has not been executed on a node for at least 6 hours, the node automatically clears its precompiled data to conserve memory.

For best performance, reuse the same block for identical values within a blueprint: many connections to a single block are more efficient than duplicating blocks that hold the same value. When calling functions, pass only the data you actually need (for example, just a GUID instead of an entire object).

Error Handling

When a block encounters an error, the flow does not stop. Instead, most blocks that can fail provide specific output pins to communicate the result:

  • success — A condition pin that returns true if the operation succeeded and false if it failed.
  • error — A value pin that contains the error message when the operation fails.

It is the developer's responsibility to check these pins and decide how to handle errors. Common patterns include:

  • Checking the success pin and showing an error message in the UI.
  • Logging errors to storage or sending them to an external webhook (for example, an email notification).
  • Using a condition block to branch the flow based on whether the operation succeeded.
Example error output:

  { "success": false, "error": "Document not found for the given GUID" }

No try/catch equivalent: Unlike traditional programming languages, RUAL does not have a try/catch mechanism. Each block handles its own errors and exposes the result through its output pins. Always check the success and error pins on blocks that perform storage operations, HTTP requests, or other operations that can fail.
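This result-style error handling can be sketched in Python for comparison (the `storage_get` function and its fields are illustrative, not a RUAL API): a fallible operation reports its outcome through success/error values instead of raising an exception, and the caller is responsible for checking them.

```python
# Conceptual sketch (Python analogy, not RUAL code): a fallible block
# exposes its outcome through success/error output pins rather than
# raising an exception.

def storage_get(guid: str, store: dict) -> dict:
    """Illustrative stand-in for a storage block with success/error pins."""
    if guid in store:
        return {"success": True, "error": None, "document": store[guid]}
    return {"success": False, "error": "Document not found for the given GUID",
            "document": None}

store = {"abc-123": {"title": "Report"}}

result = storage_get("missing-guid", store)
if not result["success"]:       # always check the success pin
    print(result["error"])      # branch the flow, log, or surface in the UI
```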

Debugging Blueprint Execution

RUAL provides several tools for debugging your blueprint flows:

Simulation Mode

Certain blocks, particularly those at the beginning of a flow, feature a play icon. Clicking this icon opens simulation options where you can set custom input values and execute the flow in simulation mode. The simulation traces through the connected blocks, showing the output at each step. This is especially useful for end-to-end debugging.

See the Simulation Guide to learn how to configure simulation values for different pin types.

Production Run

When you select Production Run within the play mode options, it executes the flow using the non-deployed (development) blueprint but against production data. This allows you to debug with real data without affecting the live production environment. Note that the Production Run permission can be disabled for specific users in User Access Management.

Console

The blueprint console can be activated through the Options menu in the top right of your blueprint. The console shows execution details and can help identify issues in your flows.

Audit Log

Every blueprint modification is automatically logged. You can view the audit log through the Options menu, or by opening a specific block's options and selecting Revisions to see only the changes that affect that block.

Asynchronous Patterns

While flows execute their blocks based on flow pin connections, there are several ways to offload work to run asynchronously:

  • Queue: Offload computationally intensive tasks (like PDF generation) to background processing. The queue executes a function at a specified time, potentially on another node. Use the function_custom_execute_from_queue block.
  • Storage Events: React to document changes (create, update, remove) in a specific storage. These events execute asynchronously after the document operation completes, on a FIFO basis. Use the storage_event block.
  • Repeating Events: Schedule recurring tasks that execute at a specified interval, similar to setInterval() or crontab. Use the schedule_repeating_event block.
Important: The function_custom_execute block executes a function and waits for it to complete before continuing the flow. If you want to run something in the background without waiting, use the queue instead.
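The difference between executing-and-waiting and queueing can be sketched by analogy (Python, not RUAL; `generate_pdf` is an illustrative placeholder): a blocking call holds up the flow until the work finishes, while queued work starts in the background and the flow continues immediately.

```python
# Conceptual sketch (Python analogy, not RUAL code): blocking execution
# versus queued background work.
import threading
import time

log = []

def generate_pdf():
    time.sleep(0.05)  # stand-in for heavy work
    log.append("pdf done")

# Like function_custom_execute: the flow waits for completion.
generate_pdf()
log.append("flow continues after blocking call")

# Like function_custom_execute_from_queue: the flow continues immediately
# while the work runs in the background.
worker = threading.Thread(target=generate_pdf)
worker.start()
log.append("flow continues without waiting")
worker.join()  # only so this script exits cleanly
```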

Locking and Concurrency

When multiple flows could modify the same data simultaneously, you can use locking blocks to prevent race conditions.

Note that while the storage system itself is transaction-based (updates are processed sequentially per document), locking is useful when you need to ensure that a broader set of operations runs atomically.
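Why a single-document transaction is not enough for a multi-step operation can be shown by analogy (Python, not RUAL; the counter and lock are illustrative): a read-modify-write sequence must be held under one lock so that no other flow interleaves between the read and the write.

```python
# Conceptual sketch (Python analogy, not RUAL code): a lock makes a
# read-modify-write sequence atomic when several flows touch the same data.
import threading

counter = {"value": 0}
doc_lock = threading.Lock()

def increment():
    # Lock & wait: a second flow blocks here until the first releases
    # the lock, so the read and write happen as one unit.
    with doc_lock:
        current = counter["value"]       # read
        counter["value"] = current + 1   # modify + write

threads = [threading.Thread(target=increment) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter["value"])  # 10
```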

Best Practices

  • Keep flows focused: Each flow should have a single responsibility. Use functions to break complex logic into smaller, reusable pieces.
  • Use namespaces: When managing many functions across blueprints, use the namespace feature to organize them clearly.
  • Check error pins: Always handle the success and error pins on blocks that interact with storage, external APIs, or other operations that can fail.
  • Offload heavy work: Use the queue for tasks like PDF generation, email sending, or batch processing that don't need to complete immediately.
  • Use caching: Leverage Redis cache to avoid repeated storage queries. See Common Pitfalls for guidance on proper cache usage.