pulsarfitpy Technical Information

Understanding the PulsarPINN Class

[NOTE] Some of the class inputs involve specific parameters to be accessed by psrqpy. Refer to the psrqpy documentation and the legend of ATNF parameters if needed.

To show how this 1D theoretical model is implemented in our library, a proper overview of the PulsarPINN class is detailed here.

The class requires the following parameters:

PulsarPINN Methods

The core methods of the pulsarfitpy framework are as follows:

1. .train(epochs=3000, training_reports=100, physics_weight=1.0, data_weight=1.0):

Trains the PINN model for the specified number of epochs, using the given physics and data loss weights.

Inputs:

Outputs:
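
Conceptually, the two weights balance a physics loss (the residual of the differential equation) against a data-fit loss in a single training objective. A minimal numpy sketch of such a weighted objective (the function name and residual arrays are illustrative, not the library's internals):

```python
import numpy as np

def composite_loss(physics_residuals, data_residuals,
                   physics_weight=1.0, data_weight=1.0):
    """Weighted sum of mean-squared physics and data residuals --
    the kind of objective a PINN's training loop minimizes."""
    physics_loss = np.mean(np.square(physics_residuals))
    data_loss = np.mean(np.square(data_residuals))
    return physics_weight * physics_loss + data_weight * data_loss

# Raising physics_weight pushes the optimizer toward solutions that
# satisfy the differential equation, even at some cost in data fit.
loss = composite_loss(np.array([0.1, -0.1]), np.array([0.2, -0.2]))
```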

2. .predict_extended(extend=0.5, n_points=300):

Generates model solutions over a range beyond the dataset to capture trends of the differential equation. Primarily used for exploring pulsar dynamics beyond given parameter ranges from the ATNF Catalogue.

Inputs:

Outputs:
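
For instance, extend=0.5 pads the observed parameter range by 50% on each side before sampling n_points evaluation locations. A hypothetical numpy sketch of building such a grid (not the library's own code):

```python
import numpy as np

def extended_grid(x, extend=0.5, n_points=300):
    """Evaluation grid stretching `extend` (as a fraction of the data
    range) beyond both ends of the observed x-values."""
    lo, hi = float(np.min(x)), float(np.max(x))
    pad = extend * (hi - lo)
    return np.linspace(lo - pad, hi + pad, n_points)

# Data on [0, 2] with extend=0.5 gives a grid on [-1, 3].
grid = extended_grid(np.array([0.0, 1.0, 2.0]))
```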

3. .evaluate_test_set(verbose=True):

Computes evaluation metrics on the test split held out during training of the PINN. Used to determine how well the model has trained and how accurate its solutions are.

Inputs:

Outputs:
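
Typical metrics for this kind of evaluation are the mean squared error and the coefficient of determination R². A small illustrative numpy sketch (the library may report a different or larger set of metrics):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Mean squared error and coefficient of determination (R^2)."""
    ss_res = np.sum(np.square(y_true - y_pred))
    ss_tot = np.sum(np.square(y_true - np.mean(y_true)))
    return float(ss_res / len(y_true)), float(1.0 - ss_res / ss_tot)

mse, r2 = regression_metrics(np.array([1.0, 2.0, 3.0]),
                             np.array([1.1, 1.9, 3.0]))
```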

4. .store_learned_constants()

Retrieves the learned physical constant values from the trained model. Used to extract final results after training for future experiments.

Outputs:

5. .set_learn_constants(new_constants)

Updates or adds learnable constants with new initial values mid-workflow, and reinitializes the model to include the new parameters.

Inputs:

6. .bootstrap_uncertainty(n_bootstrap=100, sample_fraction=0.8, epochs=1000, confidence_level=0.95, verbose=True):

Estimates uncertainty in the trained model through bootstrap iterations: randomly resamples the training data, retrains the model, and records the learned values across the repeated runs.

Inputs:

Outputs:
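
The resampling idea is standard percentile bootstrapping. Below is a generic numpy sketch built around a toy least-squares fit; the fit function, data, and names are placeholders, not pulsarfitpy internals:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(fit_fn, x, y, n_bootstrap=100, sample_fraction=0.8,
                 confidence_level=0.95):
    """Refit on random subsamples and return the mean estimate with a
    percentile confidence interval -- the core resampling idea."""
    n = len(x)
    k = max(1, int(sample_fraction * n))
    estimates = []
    for _ in range(n_bootstrap):
        idx = rng.choice(n, size=k, replace=True)
        estimates.append(fit_fn(x[idx], y[idx]))
    alpha = (1.0 - confidence_level) / 2.0
    lo, hi = np.quantile(estimates, [alpha, 1.0 - alpha])
    return float(np.mean(estimates)), float(lo), float(hi)

# Toy "model": recover the slope c in y = c * x by least squares.
slope_fit = lambda xs, ys: float(np.dot(xs, ys) / np.dot(xs, xs))
x = np.linspace(1.0, 10.0, 50)
y = 3.0 * x
mean_c, lo_c, hi_c = bootstrap_ci(slope_fit, x, y)
```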

7. .monte_carlo_uncertainty(n_simulations=1000, noise_level=0.01, confidence_level=0.95, verbose=True)

Another method for estimating uncertainty: adds Gaussian noise to the data inputs and re-evaluates the model. Ultimately assesses the sensitivity of the learned constants, and therefore the accuracy of the model.

Inputs:

Outputs:
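
The underlying idea is a standard Monte Carlo perturbation analysis. A generic numpy sketch with a toy least-squares slope standing in for the PINN (all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def monte_carlo_spread(fit_fn, x, y, n_simulations=1000, noise_level=0.01):
    """Refit under Gaussian perturbations of the targets; the spread of
    the estimates measures sensitivity of the learned constant."""
    scale = noise_level * float(np.std(y))
    estimates = [fit_fn(x, y + rng.normal(0.0, scale, size=y.shape))
                 for _ in range(n_simulations)]
    return float(np.mean(estimates)), float(np.std(estimates))

slope_fit = lambda xs, ys: float(np.dot(xs, ys) / np.dot(xs, xs))
x = np.linspace(1.0, 10.0, 50)
mean_c, spread_c = monte_carlo_spread(slope_fit, x, 3.0 * x,
                                      n_simulations=200)
# mean_c stays near 3; spread_c shrinks as noise_level shrinks.
```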

8. .validate_with_permutation_test(n_permutations=100, epochs=1000, significance_level=0.05, verbose=True):

Tests whether the model has learned a genuine relationship by comparing it against models trained on randomly shuffled target labels. If the real model outperforms the permuted models, the learned relationships are likely genuine rather than chance.

Inputs:

Outputs:
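
The same null-model logic can be shown with a simple statistic in place of the PINN: shuffle the targets many times and ask how often a shuffled fit matches the real one. A generic numpy sketch (illustrative, not the library's implementation):

```python
import numpy as np

rng = np.random.default_rng(2)

def permutation_p_value(x, y, n_permutations=100):
    """Fraction of label shufflings whose |correlation| matches or beats
    the real one -- small values mean the relationship is not chance."""
    real = abs(np.corrcoef(x, y)[0, 1])
    exceed = sum(
        abs(np.corrcoef(x, rng.permutation(y))[0, 1]) >= real
        for _ in range(n_permutations)
    )
    return (exceed + 1) / (n_permutations + 1)

x = np.linspace(0.0, 1.0, 40)
p = permutation_p_value(x, 2.0 * x + 0.1)  # strong relation -> small p
```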

9. .validate_with_feature_shuffling(n_shuffles=50, epochs=1000, verbose=True):

Validates input feature importance by shuffling the x-values to break the x-y relationship in the differential equation. The real model should outperform the shuffled versions if the features carry genuine signal.

Inputs:

Outputs:
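
With a toy least-squares fit in place of the PINN, the feature-shuffling check looks like this (a generic numpy sketch; names and data are placeholders):

```python
import numpy as np

rng = np.random.default_rng(3)

def shuffled_feature_errors(x, y, n_shuffles=50):
    """Fit on real (x, y), then on copies with x shuffled; a genuine
    x-y relationship gives far lower error than the shuffled fits."""
    def fit_error(xs):
        slope = np.dot(xs, y) / np.dot(xs, xs)
        return float(np.mean(np.square(y - slope * xs)))
    real_err = fit_error(x)
    shuffled_err = float(np.mean([fit_error(rng.permutation(x))
                                  for _ in range(n_shuffles)]))
    return real_err, shuffled_err

x = np.linspace(1.0, 10.0, 60)
real_err, shuffled_err = shuffled_feature_errors(x, 3.0 * x)
```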

10. .validate_with_impossible_physics(epochs=2000, verbose=True):

Tests model robustness by training on deliberately inverted relationships (e.g., swapped inputs and outputs) to probe the model's limits. A good model should perform poorly on this impossible-physics test.

Inputs:

Outputs:
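
The spirit of the check can be illustrated without the PINN: fit y = x² in the true direction, then apply the same fit with inputs and outputs swapped, where no single-valued relationship exists. A generic numpy sketch (illustrative only):

```python
import numpy as np

def r_squared(y_true, y_pred):
    ss_res = np.sum(np.square(y_true - y_pred))
    ss_tot = np.sum(np.square(y_true - np.mean(y_true)))
    return float(1.0 - ss_res / ss_tot)

x = np.linspace(-2.0, 2.0, 101)
y = x ** 2

# Forward direction: y really is a function of x, so a quadratic
# fit explains it almost perfectly.
forward = np.polyval(np.polyfit(x, y, 2), x)

# "Impossible" direction: x is not a function of y (two x-values per y),
# so the identical fitting procedure must do badly -- the analogue of
# expecting poor performance on an impossible-physics relationship.
swapped = np.polyval(np.polyfit(y, x, 2), y)

forward_r2 = r_squared(y, forward)   # near 1
swapped_r2 = r_squared(x, swapped)   # near 0
```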

11. .run_all_robustness_tests(n_permutations=100, n_shuffles=50, verbose=True):

Automatically executes all of the robustness validation functions above and provides a comprehensive assessment of the PINN model.

Inputs:

Outputs:

PulsarPINN Key Attributes

Here, we go over the main attributes of the class:

Model Components

Data Storage

Training History

Evaluation Results

Usage Notes
