I am trying to replicate the multimodal transformer tutorial shown here in a Colab notebook. However, it is a relatively old script and lightning.pytorch has changed significantly since it was written. I've adapted it to the new Lightning API, and it runs when I remove the callbacks argument from the Trainer, but as soon as I add the callbacks it throws the following error:

/usr/local/lib/python3.9/dist-packages/lightning/pytorch/utilities/model_helpers.py in is_overridden(method_name, instance, parent)
     32             parent = pl.Callback
     33         if parent is None:
---> 34             raise ValueError("Expected a parent")
     35 
     36     from lightning_utilities.core.overrides import is_overridden as _is_overridden

ValueError: Expected a parent
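
From the snippet in the traceback, is_overridden raises this whenever an entry in callbacks is neither a pl.Callback nor a LightningModule/LightningDataModule, because it then has no parent class to compare against. As far as I can tell, any non-callback object in the list reproduces the same error:

import lightning.pytorch as pl

# Any non-Callback entry in `callbacks` (here, a plain string) reaches
# is_overridden() with parent=None and raises ValueError: Expected a parent.
trainer = pl.Trainer(callbacks=["not-a-callback"])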

Here's the fit method that calls the trainer:

def fit(self):
    # print(self)
    self._set_seed(self.hparams.get("random_state", 42))
    # self.trainer = pl.Trainer()  # this version runs fine
    self.trainer = pl.Trainer(callbacks=self.trainer_params)
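
A quick type check before constructing the Trainer should reveal any offending entries (a debugging sketch of mine, not part of the tutorial):

import lightning.pytorch as pl

def check_callbacks(callbacks):
    # Debugging sketch: flag every entry that is not a pl.Callback instance.
    for cb in callbacks:
        if not isinstance(cb, pl.Callback):
            print("not a Callback:", type(cb).__name__, repr(cb))

# called as check_callbacks(self.trainer_params) right before pl.Trainer(...)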

And here is the _get_trainer_params method that builds the list I'm passing as callbacks:

def _get_trainer_params(self):
    checkpoint_callback = pl.callbacks.ModelCheckpoint(
        dirpath=self.output_path,
        # filepath=self.output_path,  # old argument name, replaced by dirpath
        monitor=self.hparams.get("checkpoint_monitor", "avg_val_loss"),
        mode=self.hparams.get("checkpoint_monitor_mode", "min"),
        verbose=self.hparams.get("verbose", True),
    )

    early_stop_callback = pl.callbacks.EarlyStopping(
        monitor=self.hparams.get("early_stop_monitor", "avg_val_loss"),
        min_delta=self.hparams.get("early_stop_min_delta", 0.001),
        patience=self.hparams.get("early_stop_patience", 3),
        verbose=self.hparams.get("verbose", True),
    )

    trainer_params = [
        checkpoint_callback,
        early_stop_callback,
        self.output_path,
        self.hparams.get("accumulate_grad_batches", 1),
        self.hparams.get("n_gpu", 1),
        self.hparams.get("max_epochs", 100),
        self.hparams.get("gradient_clip_value", 1),
    ]
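
Looking at that list again, only the first two entries are Callback objects; the output path and the hparams after them look like they were meant to be separate Trainer arguments rather than callbacks. If that is the cause, my best guess at the intended mapping onto the current Trainer signature (keyword names such as devices and gradient_clip_val are my reading of the current API, not the tutorial's) is:

self.trainer = pl.Trainer(
    callbacks=[checkpoint_callback, early_stop_callback],
    default_root_dir=self.output_path,
    accumulate_grad_batches=self.hparams.get("accumulate_grad_batches", 1),
    devices=self.hparams.get("n_gpu", 1),
    max_epochs=self.hparams.get("max_epochs", 100),
    gradient_clip_val=self.hparams.get("gradient_clip_value", 1),
)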

Again, when I run the trainer without the callbacks, i.e. self.trainer = pl.Trainer(), the model trains fine.
