Background
I have a Django app that allows record insertion via the Django REST Framework.
Records will be periodically batch-inserted, row by row, by client applications that interrogate spreadsheets and other databases. The REST API keeps these applications, which handle data transformation and so on, abstracted from Django.
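For context, a client in this setup just POSTs each transformed row to the API, one record at a time; a minimal sketch of such a client, assuming a hypothetical /api/books/ endpoint and the requests library, might look like:

import requests

# Hypothetical endpoint; the real URL depends on how the DRF router is configured.
API_URL = 'http://example.com/api/books/'

def push_rows(rows):
    # POST each transformed row to the API, one record per request.
    for row in rows:
        response = requests.post(API_URL, json={'name': row['name']})
        response.raise_for_status()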
Problem
I'd like to decouple the actual record insertion from the API to improve fault tolerance and the potential for scalability.
Suggested Approach
I am considering doing this with Celery, though I've not used it before. The idea is to override perform_create() in my existing DRF ModelViewSets (perform_create() was added in DRF 3.0) so that it creates Celery tasks that workers would pick up and process in the background.
The DRF documentation says that perform_create() "should save the object instance by calling serializer.save()". I'm wondering whether, in my case, I could ignore this recommendation and instead have my Celery tasks call on the appropriate serializer to perform the object saves.
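For reference, the default perform_create() in DRF's CreateModelMixin is just that one-liner:

def perform_create(self, serializer):
    serializer.save()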
Example
If for example I've got a couple of models:
from django.db import models

class Book(models.Model):
    name = models.CharField(max_length=32)

class Author(models.Model):
    surname = models.CharField(max_length=32)
And I've got DRF views and serializers for those models:
from rest_framework import serializers, viewsets

class BookSerializer(serializers.ModelSerializer):
    class Meta:
        model = Book
        fields = '__all__'

class AuthorSerializer(serializers.ModelSerializer):
    class Meta:
        model = Author
        fields = '__all__'

class BookViewSet(viewsets.ModelViewSet):
    queryset = Book.objects.all()
    serializer_class = BookSerializer

class AuthorViewSet(viewsets.ModelViewSet):
    queryset = Author.objects.all()
    serializer_class = AuthorSerializer
Would it be a good idea to override perform_create() in e.g. BookViewSet:
def perform_create(self, serializer):
    # Queue the task rather than calling it synchronously
    create_book_task.delay(serializer.data)
Where create_book_task is separately something like:
from celery import shared_task

@shared_task
def create_book_task(data):
    serializer = BookSerializer(data=data)
    serializer.is_valid(raise_exception=True)
    serializer.save()
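For completeness, this assumes Celery is wired into the Django project in the usual way; a minimal sketch, with a hypothetical project name myproject and Redis assumed as the broker, would be something like:

# myproject/celery.py
import os
from celery import Celery

# Hypothetical project name; replace with the real settings module.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

app = Celery('myproject')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

with CELERY_BROKER_URL = 'redis://localhost:6379/0' (or another broker) in settings, and a worker started with celery -A myproject worker.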
I've not really been able to find any examples of other developers doing something similar or trying to solve the same problem. Am I overcomplicating it? My database is still going to be the limiting factor when it comes to physical insertion, but at least it won't block the API clients from queueing up their data. I am not committed to Celery if it isn't suitable. Is this the best solution? Are there obvious problems with it, or are there better alternatives?