I am trying to use an Accelerator with a Trainer using the code below:
from transformers import (
    AutoConfig,
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
)
from transformers.integrations import TensorBoardCallback
from accelerate import Accelerator

# CorefDataset and CorefTrainer are defined elsewhere in my project
tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path)
config = AutoConfig.from_pretrained(model_args.model_name_or_path)
model = AutoModelForSeq2SeqLM.from_pretrained(
    model_args.model_name_or_path, config=config)
collator = DataCollatorForSeq2Seq(tokenizer, model=model)
train_set = CorefDataset(tokenizer, data_args, training_args, 'train')
tb_callback = TensorBoardCallback()
accelerator = Accelerator()
trainer = accelerator.prepare(CorefTrainer(
    tokenizer=tokenizer,
    model=model,
    args=training_args,
    train_dataset=train_set,
    # eval_dataset=dev_set,
    data_collator=collator,
    callbacks=[tb_callback]
))
trainer.train()
Then, following the instructions in this post, I ran the code in Google Colab with this command:
!accelerate launch --config_file /root/.cache/huggingface/accelerate/default_config.yaml Seq2seqCoref/main.py
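For reference, the default_config.yaml at that path was generated by accelerate config. I don't have the exact file at hand, but a single-GPU Colab config typically looks roughly like this (the field values below are assumptions, not a copy of my actual file):

```yaml
compute_environment: LOCAL_MACHINE
distributed_type: 'NO'
mixed_precision: 'no'
num_machines: 1
num_processes: 1
use_cpu: false
```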
Then I got the following error:
Traceback (most recent call last):
  File "/content/Seq2seqCoref/main.py", line 41, in <module>
    trainer = accelerator.prepare(CorefTrainer(
  File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 1248, in prepare
    if self.distributed_type == DistributedType.DEEPSPEED:
  File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 529, in distributed_type
    return self.state.distributed_type
  File "/usr/local/lib/python3.10/dist-packages/accelerate/state.py", line 1076, in __getattr__
    raise AttributeError(
AttributeError: `AcceleratorState` object has no attribute `distributed_type`. This happens if `AcceleratorState._reset_state()` was called and an `Accelerator` or `PartialState` was not reinitialized.
The versions of transformers and accelerate libraries are 4.40.2 and 0.30.0, respectively.
I had previously tried running the code directly in a Google Colab cell instead of from main.py, but the same error appeared.