JLNN 0.1.1.post1 documentation

User Guide:

  • Installation
  • Quickstart
  • Examples & Tutorials
    • Introductory Example: JLNN Base
    • Base Example: Basic inference and manual grounding
    • Basic Boolean Gates
    • Weighted Rules & Multiple Antecedents
    • Temporal Logic (G, F, X) on Time-Series
    • Contradiction Detection & Model Repair
    • Model Export & Deployment (StableHLO, ONNX, PyTorch)
    • Real Example: Iris dataset Classification
    • Meta-Learning & Self-Reflection
    • The Grand Cycle: Autonomous Tuning
    • Differentiable Reasoning on Graphs (JLNN vs. PyReason)
    • JLNN Explainability – From scales to symbolic rules
    • Bayesian JLNN: Logic in an Uncertain World
    • Neuro-Symbolic Bayesian GraphSAGE + JLNN
    • JLNN – Accelerated Interval Logic
    • Quantum Logic and Bell Inequalities with JLNN
    • LLM Rule Refinement (The Grand Cycle)
    • JLNN: Temporal Symbolic GNN for Pneumatic Digital Twin
    • JLNN + Knowledge Graphs: RAG-like Reasoning over FB15k-237
  • Theoretical foundations of JLNN
  • Testing

API Reference

  • Core Logic Engine (jlnn.core)
    • Activation Functions
    • Interval Arithmetic
    • Logical Kernels
  • Model Export & Deployment (jlnn.export)
    • Metadata & State Export
    • ONNX Export
    • StableHLO Integration
    • PyTorch Mapping
  • Neural Network Components (jlnn.nn)
    • Base Logical Elements
    • Parameter Constraints
    • Functional Logic Kernels
    • Logical Gates (Stateful)
    • Learned Predicates
  • Reasoning & Inference Engine (jlnn.reasoning)
    • JAX Execution Engine
    • Reasoning Inspector & Audit
    • Temporal Logic Operators
  • Model Storage & Persistence (jlnn.storage)
    • Model Checkpoints
    • Model Metadata
  • Symbolic Front-end (jlnn.symbolic)
    • Formula Parser
    • JLNN Compiler
    • Graph & NetworkX Integration
  • Training & Optimization (jlnn.training)
    • Logical Loss Functions
    • Projected Optimizers
  • Utilities & Visualization (jlnn.utils)
    • Helper Functions
    • Logical Metrics
    • Visualization Tools
    • Xarray Integration

About the Project:

  • License
  • Contributing to JLNN
  • Changelog

Training & Optimization (jlnn.training)¶

This module provides tools for training logical networks. JLNN uses specialized loss functions that penalize logical contradictions, together with projected optimizers that guarantee adherence to the LNN axioms during gradient updates.

  • Logical Loss Functions
    • contradiction_loss()
    • jlnn_learning_loss()
    • logical_consistency_loss()
    • logical_mse_loss()
    • rule_violation_loss()
    • total_lnn_loss()
    • Key Functions:
  • Projected Optimizers
    • ProjectedOptimizer
    • Mechanism of operation:
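The two ideas above can be sketched together. The following is a minimal, hypothetical illustration (not the actual `jlnn.training` API): in LNN-style systems a node's truth value is an interval `[lower, upper]`, a contradiction occurs when `lower > upper`, and a contradiction loss penalizes that violation; a projected optimizer then applies an ordinary gradient step and projects the parameters back onto the feasible set (here, non-negative weights, a common LNN constraint). The function names `contradiction_penalty` and `projected_step` are assumptions chosen for this sketch.

```python
import jax
import jax.numpy as jnp

# Hypothetical contradiction penalty for interval-valued truth bounds:
# each node carries an interval [lower, upper]; a contradiction means
# lower > upper, and the penalty sums the size of each violation.
def contradiction_penalty(lower, upper):
    return jnp.sum(jnp.maximum(lower - upper, 0.0))

# Hypothetical projected-gradient step: take a plain gradient update,
# then project every parameter back onto the feasible set by clipping
# weights to be non-negative.
def projected_step(params, grads, lr=0.01):
    updated = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
    return jax.tree_util.tree_map(lambda p: jnp.clip(p, 0.0, None), updated)

# Example: the first interval [0.9, 0.4] is contradictory (lower > upper),
# contributing 0.5 to the penalty; the second interval is consistent.
lower = jnp.array([0.9, 0.2])
upper = jnp.array([0.4, 0.8])
print(float(contradiction_penalty(lower, upper)))  # prints 0.5
```

The projection step is what distinguishes this from a plain optimizer: after every update the parameters are guaranteed to stay in the region where the logical semantics remain valid, so training can never drift into weights that violate the axioms.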
Copyright © 2026, Ing. Radim Közl