I'm trying to prepare some audio data for a DataLoader but I'm struggling with a few points. At the moment my data is organised into two lists, inputs and target, which are both of length 32 but whose elements have different dimensions: inputs[0].shape = (8, 3690288) (8 mono audio tracks) and target[0].shape = (2, 3690288) (a single stereo mix).
I've converted each array to a tensor by:
tensor_inputs = torch.Tensor(inputs)
tensor_target = torch.Tensor(target)
which seems to work: tensor_inputs.shape = torch.Size([32, 8, 3690288]). I've then tried to convert each of these to a mel spectrogram:
mel_spectrogram = torchaudio.transforms.MelSpectrogram(
    sample_rate=44100,
    n_fft=1024,
    hop_length=512,
    n_mels=64)
tensor_input_specs = []
for i in range(len(tensor_inputs)):
spec = mel_spectrogram(tensor_inputs[i])
tensor_input_specs.append(spec)
tensor_target_specs = []
for i in range(len(tensor_target)):
spec = mel_spectrogram(tensor_target[i])
tensor_target_specs.append(spec)
and then move these into a DataLoader by doing:
dataset = TensorDataset(tensor_input_specs, tensor_target_specs)
iter = DataLoader(dataset)
However, I get the following error: AttributeError: 'list' object has no attribute 'size', which I imagine is because I'm appending the spectrograms to a list, but I'm not sure how else to achieve this.
EDIT:
AttributeError Traceback (most recent call last)
C:\Users\BRUDAL~1\AppData\Local\Temp/ipykernel_24968/2240294361.py in <module>
14 tensor_target_specs.append(spec)
15
---> 16 dataset = TensorDataset(tensor_input_specs,tensor_target_specs)
17 iter = DataLoader(dataset) # create your dataloader
~\anaconda3\lib\site-packages\torch\utils\data\dataset.py in __init__(self, *tensors)
165
166 def __init__(self, *tensors: Tensor) -> None:
--> 167 assert all(tensors[0].size(0) == tensor.size(0) for tensor in tensors), "Size mismatch between tensors"
168 self.tensors = tensors
169
~\anaconda3\lib\site-packages\torch\utils\data\dataset.py in <genexpr>(.0)
165
166 def __init__(self, *tensors: Tensor) -> None:
--> 167 assert all(tensors[0].size(0) == tensor.size(0) for tensor in tensors), "Size mismatch between tensors"
168 self.tensors = tensors
169
AttributeError: 'list' object has no attribute 'size'
The traceback points to the assertion assert all(tensors[0].size(0) == tensor.size(0) for tensor in tensors): TensorDataset needs to be passed tensors, not lists. A list obviously has no size attribute; it uses len. I would suggest printing the type of tensor_input_specs and tensor_target_specs to make sure they are tensors rather than lists. Instead of building tensor_target_specs = [] and appending to it, you could start from torch.empty and append with torch.cat, or simply stack the finished lists with torch.stack(tensor_input_specs) and torch.stack(tensor_target_specs). I'll update with an answer soon.
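In the meantime, here is a minimal sketch of the torch.stack route, assuming the tensor_input_specs and tensor_target_specs lists built in the loops above (the batch_size of 4 is just an illustrative value):

import torch
from torch.utils.data import TensorDataset, DataLoader

# tensor_input_specs / tensor_target_specs are the lists of spectrograms from the loops above;
# every element in a list has the same shape, so they can be stacked along a new batch dimension
input_specs = torch.stack(tensor_input_specs)    # shape [32, 8, 64, n_frames]
target_specs = torch.stack(tensor_target_specs)  # shape [32, 2, 64, n_frames]

dataset = TensorDataset(input_specs, target_specs)        # both tensors have 32 as dim 0
loader = DataLoader(dataset, batch_size=4, shuffle=True)  # batch_size=4 is just an example

for batch_inputs, batch_targets in loader:
    # batch_inputs: [4, 8, 64, n_frames], batch_targets: [4, 2, 64, n_frames]
    ...

Since every spectrogram in each list has the same shape, torch.stack is sufficient here; the torch.empty/torch.cat route is only needed if you want to grow a tensor incrementally instead of keeping a Python list around.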