
I am using Laravel 8 for my application and everything is working fine, except that sometimes my queue jobs run twice, which causes the database ledger balance to be updated twice.

I had noticed this issue before but ignored it, thinking my code might be wrong, but yesterday I ran into it again. It does not happen with all jobs, only a few, roughly 1–2 out of every 100. I am using the database driver for managing jobs.

Here is my implementation. I am using the following in my job class:

    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    // Note: in Laravel 8 the per-job $retryAfter property was renamed
    // to $backoff; a $retryAfter property is silently ignored.
    public $backoff = 85;
    public $timeout = 80;
    public $tries = 3;
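With the database driver, duplicate processing is usually caused by the connection-level `retry_after` in `config/queue.php` (not the job property) being smaller than the time a job can actually run: once a job has been reserved for longer than `retry_after` seconds, a second worker picks it up while the first is still running. A minimal sketch of the relevant fragment, with illustrative values:

```php
// config/queue.php — database connection (illustrative values)
'connections' => [
    'database' => [
        'driver' => 'database',
        'table' => 'jobs',
        'queue' => 'default',
        // Must be greater than the longest job $timeout (here 80s),
        // otherwise a slow job is handed to a second worker mid-run.
        'retry_after' => 90,
    ],
],
```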

My code is not dispatching jobs twice; I have verified this. I am using Supervisor for managing the queue workers, and I have attached my Supervisor config.
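The Supervisor config itself is not shown here; a typical worker program for this setup would look like the sketch below (paths and names are illustrative, not the asker's actual config). Note that `numprocs` greater than 1 runs several workers in parallel, which matters for duplicate pickup, and `stopwaitsecs` should exceed the job timeout so a running job is not killed mid-update on SIGTERM.

```ini
; /etc/supervisor/conf.d/laravel-worker.conf — illustrative sketch
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/app/artisan queue:work database --sleep=3 --tries=3 --timeout=80
autostart=true
autorestart=true
stopwaitsecs=90   ; give a running job time to finish on SIGTERM (> timeout)
numprocs=2        ; >1 means jobs are picked up by parallel workers
```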

I am dispatching the job with the following code:

    UpdateDailyLedgerReport::dispatch($this->ledger_id, $this->name, $type, $date, $amount);

I have checked my code multiple times and there is no trace of the job being dispatched twice. The strange thing is that when updating two ledger balances (i.e. two models) with the same amount, one model gets updated with the correct amount while the second model's balance is updated twice.

I have seen the same issue with another job.

Thanks.

Previously my job timeout was only 30 seconds; I have increased it, but the issue is still there.

Just now I checked and found that the following is causing the issue:

    SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction (SQL: delete from jobs where id = 10892) {"exception":"[object] (Illuminate\Database\QueryException(code: 40001): SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction (SQL: delete from jobs where id = 10892) at /var/www/vhosts/poshbooks.in/httpdocs/vendor/laravel/framework/src/Illuminate/Database/Connection.php:712)

Can anyone help me fix it?

  • To help debug this I would generate a unique code (uniqid should be enough), pass it to the job, and then write it to a log as the job is processed. That way, if the same unique code is logged twice, the queue worker is at fault; otherwise it's the dispatcher, and you can take it from there. Commented Sep 28, 2023 at 8:17
  • Can you share an implementation for logging it? Commented Sep 28, 2023 at 8:30
  • Change your UpdateDailyLedgerReport constructor to e.g. public function __construct($ledgerId, $name, $type, $date, $amount, private string $jobId), then in your handle method do \Log::debug("Processing ledger job ".$this->jobId), and dispatch it as UpdateDailyLedgerReport::dispatch($this->ledger_id, $this->name, $type, $date, $amount, uniqid()); Commented Sep 28, 2023 at 8:49
  • Thanks, I have added this implementation, but I don't know when the issue will reoccur. Do you have any idea about that, or could the database queue driver itself be causing the issue? Commented Sep 28, 2023 at 9:14
  • I have checked old logs and found that the job was processed only once, but the database was updated twice: [2023-09-25 22:08:10][10892] Processing: App\Jobs\UpdateDailyLedgerReport [2023-09-25 22:08:10][10892] Processed: App\Jobs\UpdateDailyLedgerReport Commented Sep 28, 2023 at 9:34

1 Answer


The jobs must be idempotent, meaning that if they execute more than once they should not break your logic. Consider incrementing the ledgers without jobs. On deploys this can also happen if the SIGTERM signal is not handled properly.
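The idempotency advice above can be sketched as follows. This is a hypothetical, framework-free illustration: applyOnce() and the in-memory array stand in for a guard you would implement in Laravel with a processed-jobs table that has a unique index on the job key (or Cache::add()), so that a redelivered job becomes a no-op.

```php
<?php
// Hypothetical sketch of an idempotent ledger update: a replayed job
// key is detected and skipped, so a duplicate delivery is a no-op.
function applyOnce(array &$processedKeys, string $jobKey, callable $update): bool
{
    if (isset($processedKeys[$jobKey])) {
        return false; // already applied: duplicate delivery, skip
    }
    $processedKeys[$jobKey] = true;
    $update();
    return true;
}

$processed = [];
$balance = 0;
$credit = function () use (&$balance) {
    $balance += 100;
};

applyOnce($processed, 'ledger-42:2023-09-25:credit', $credit); // applies
applyOnce($processed, 'ledger-42:2023-09-25:credit', $credit); // skipped

echo $balance; // 100, not 200
```

The key point is that the duplicate check and the update must be atomic in a real system (a unique-index insert in the same transaction as the balance update), otherwise two parallel workers can still both pass the check.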

How to handle SIGTERM signal on laravel 8,9,10 deploy with docker containers in regards to scheduler and worker?



Thanks, but can you explain what I should do in my scenario? Also, why is this deadlock issue occurring? Shouldn't the framework handle it?
SQLSTATE[40001]: Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction (SQL: delete from jobs where id = 10892) — this means that two workers picked up the job in parallel, and the first one to finish cannot remove the job from the database. The database queue driver is for development only. Use SQS from AWS or another solution.
Thanks. Previously I was using a timeout of 5 seconds and a retry of 7 seconds, but I encountered the following error, so I increased them: App\Jobs\UpdateDailyLedgerReport has been attempted too many times or run too long. The job may have previously timed out.
Can you suggest SQS settings both for queues whose jobs execute very quickly and for queues whose jobs take a long time, e.g. reports that take 10–15 minutes to generate?
The maximum visibility timeout in SQS is 12 hours; the 15-minute limit applies to the message delivery delay, not the visibility timeout.
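For reference, a minimal SQS connection in config/queue.php might look like the sketch below (illustrative values; the visibility timeout itself is set on the SQS queue in AWS, not in this file). For long-running report jobs you would typically point a separate connection or queue name at an SQS queue configured with a higher visibility timeout.

```php
// config/queue.php — SQS connection (illustrative; credentials via env)
'sqs' => [
    'driver' => 'sqs',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'prefix' => env('SQS_PREFIX'),
    'queue' => env('SQS_QUEUE', 'default'),
    'region' => env('AWS_DEFAULT_REGION', 'us-east-1'),
],
```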
