Visualization Tools

Visualization tools for inspecting LNN state and learned logic.

jlnn.utils.visualize.plot_gate_weights(weights: Array, input_labels: List[str], gate_name: str = 'Gate', show: bool = True) → Figure

Generates a heatmap to visualize the learned importance of inputs for a specific gate.

In Logical Neural Networks, weights (constrained to w >= 1.0) act as attention mechanisms over logical antecedents. A higher weight indicates that the corresponding input has a stronger influence on the gate’s activation and the overall truth value of the formula.

Parameters:
  • weights – A JAX array of trained weights from the gate module.

  • input_labels – Symbolic names of the input predicates (e.g., from metadata).

  • gate_name – The label of the logical gate being inspected (e.g., ‘WeightedAND_1’).

  • show – If True, displays the plot immediately. Set to False for programmatic use or automated testing.

Returns:

The matplotlib Figure object containing the heatmap.
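As a rough illustration, the kind of heatmap this function renders can be sketched with matplotlib directly. The weights, predicate labels, and the gate name below are made up for the example and are not part of the library:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, as when show=False
import matplotlib.pyplot as plt

# Hypothetical trained weights (LNN constraint: w >= 1.0).
# Higher weight = stronger influence on the gate's activation.
weights = np.array([1.0, 2.4, 1.1])
input_labels = ["rain", "sprinkler", "humidity"]

fig, ax = plt.subplots()
im = ax.imshow(weights[np.newaxis, :], cmap="viridis", aspect="auto")
ax.set_xticks(range(len(input_labels)))
ax.set_xticklabels(input_labels)
ax.set_yticks([0])
ax.set_yticklabels(["WeightedAND_1"])
fig.colorbar(im, ax=ax, label="weight")
```

Here "sprinkler" would show as the dominant antecedent, since its weight (2.4) is well above the floor of 1.0.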

jlnn.utils.visualize.plot_training_log_loss(losses: List[float], title: str = 'Training Convergence', show: bool = True) → Figure

Plots the loss curve to visualize the optimization and logical grounding process.

In Logical Neural Networks, the loss trajectory reflects how well the model is satisfying logical constraints while fitting the data. Monitoring this convergence is crucial for identifying ‘over-constrained’ models or oscillations caused by conflicting logical rules.

Parameters:
  • losses – A list or array of loss values recorded during training epochs.

  • title – Descriptive title for the plot (e.g., ‘Convergence: XOR Problem’).

  • show – If True, invokes the backend’s display (GUI or Notebook inline). Set to False for automated reporting or background processing.

Returns:

The matplotlib Figure object representing the convergence visualization.
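A minimal sketch of the convergence plot, using a hypothetical loss history and plain matplotlib (the Agg backend stands in for show=False):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, as when show=False
import matplotlib.pyplot as plt

# Hypothetical loss values recorded once per training epoch.
losses = [0.69, 0.41, 0.23, 0.12, 0.07, 0.05]

fig, ax = plt.subplots()
ax.plot(range(len(losses)), losses, marker="o")
ax.set_xlabel("epoch")
ax.set_ylabel("loss")
ax.set_title("Convergence: XOR Problem")
```

A monotone decay like this suggests the logical constraints and the data are compatible; oscillations in the curve would hint at conflicting rules, as noted above.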

jlnn.utils.visualize.plot_truth_intervals(intervals_dict: Dict[str, Array], title: str = 'JLNN Truth Intervals', show: bool = True) → Figure

Renders a horizontal bar chart of truth intervals for model state inspection.

In Logical Neural Networks, truth is represented by an interval [L, U]. This visualization maps these intervals to horizontal bars:

  • The left edge represents the lower bound L (necessary truth).

  • The right edge represents the upper bound U (possible truth).

  • The width of the bar indicates uncertainty (ignorance).

  • A collapsed bar (L ≈ U) represents a precise classical truth value.

The function automatically performs a consistency check: if L > U, the bar is rendered in red to indicate a ‘Logical Contradiction’, signifying that the network has reached an unsatisfiable state where evidence for truth exceeds evidence for possibility.

Parameters:
  • intervals_dict – Dictionary mapping symbolic names (predicates/gates) to JAX arrays of shape (2,) or (batch, 2). If batched, the first sample is typically visualized.

  • title – Title of the plot, identifying the model or inference step.

  • show – If True, calls plt.show(). Disable this for automated testing or when further figure manipulation is required.

Returns:

The matplotlib Figure object for further customization or logging.
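The bar layout and contradiction coloring described above can be sketched with matplotlib alone. The interval values below are invented for illustration, including one deliberately inconsistent state:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, as when show=False
import matplotlib.pyplot as plt

# Hypothetical truth intervals [L, U]; the last entry is contradictory (L > U).
intervals = {
    "rain":      (0.8, 1.0),  # mostly true, little doubt
    "sprinkler": (0.2, 0.7),  # wide bar = high ignorance
    "wet_grass": (0.9, 0.4),  # L > U: logical contradiction
}

fig, ax = plt.subplots()
for i, (name, (low, up)) in enumerate(intervals.items()):
    contradiction = low > up
    # Draw the bar from min to max so contradictory intervals remain visible.
    ax.barh(i, abs(up - low), left=min(low, up),
            color="red" if contradiction else "tab:blue")
ax.set_yticks(range(len(intervals)))
ax.set_yticklabels(intervals.keys())
ax.set_xlim(0, 1)
ax.set_xlabel("truth value")
```

The "wet_grass" bar renders in red, flagging the unsatisfiable state where the lower bound exceeds the upper bound.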

Graphical tools for visually auditing the model. JLNN emphasizes that users should see not only the result but also the “space of doubt”, i.e. the width of each truth interval.

Visualization of Intervals

The plot_truth_intervals function draws horizontal bar charts in which consistent states (blue) are visually distinguished from logical contradictions (red).
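The consistency check behind that coloring reduces to a simple predicate over the interval dictionary. A possible sketch (the helper name and the state values are illustrative, and numpy arrays stand in for JAX arrays):

```python
import numpy as np

def find_contradictions(intervals_dict, tol=1e-6):
    """Return the names whose lower bound exceeds the upper bound (L > U)."""
    return [name for name, iv in intervals_dict.items()
            if float(iv[0]) > float(iv[1]) + tol]

states = {
    "alarm":  np.array([0.3, 0.9]),  # consistent: L <= U
    "sensor": np.array([0.7, 0.2]),  # contradictory: L > U
}
print(find_contradictions(states))
```

Running such a check before plotting makes it easy to report unsatisfiable states programmatically as well as visually.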

Analysis of Weights

The plot_gate_weights function visualizes the importance of individual inputs for a specific logical decision (e.g., which sensors most influence an alarm rule).
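Beyond the heatmap itself, the learned weights can be read off directly to rank inputs by influence. A small sketch with hypothetical sensor weights (the values and sensor names are invented for the example):

```python
import numpy as np

# Hypothetical trained weights for an "alarm" gate (LNN constraint: w >= 1.0).
weights = np.array([1.0, 3.2, 1.7, 2.5])
sensors = ["smoke", "heat", "motion", "co2"]

# Rank inputs by learned importance, highest weight first.
order = np.argsort(weights)[::-1]
ranking = [sensors[i] for i in order]
print(ranking)  # → ['heat', 'co2', 'motion', 'smoke']
```

Here the "heat" sensor dominates the rule, while "smoke" sits at the weight floor of 1.0 and contributes least.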