I need asynchronous, quick processing of everything in the queue. The jobs consist of cURL requests, so running them one by one takes forever (each one is basically equivalent to sleep(3)). I'd like all messages in the queue to run at the same time, or at least up to a limit like 50 concurrent jobs. The reason I'm using a queue for this instead of just firing the requests immediately is that I need anything that fails to be retried.
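To illustrate, here's roughly the kind of concurrency I'm after, sketched with PHP's curl_multi API (the URL list and the batch size of 50 are just examples):

    // Run up to $limit cURL requests concurrently per batch.
    function fetchConcurrently(array $urls, $limit = 50)
    {
        $results = array();
        foreach (array_chunk($urls, $limit) as $batch) {
            $mh = curl_multi_init();
            $handles = array();
            foreach ($batch as $url) {
                $ch = curl_init($url);
                curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
                curl_setopt($ch, CURLOPT_TIMEOUT, 10);
                curl_multi_add_handle($mh, $ch);
                $handles[$url] = $ch;
            }
            // Drive all transfers in this batch to completion.
            do {
                $status = curl_multi_exec($mh, $running);
                if ($running) {
                    curl_multi_select($mh); // block briefly instead of busy-waiting
                }
            } while ($running && $status === CURLM_OK);
            foreach ($handles as $url => $ch) {
                $results[$url] = curl_multi_getcontent($ch);
                curl_multi_remove_handle($mh, $ch);
                curl_close($ch);
            }
            curl_multi_close($mh);
        }
        return $results;
    }

The catch is that this alone gives me no retries, which is the whole point of using the queue.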
1 Answer
Use the queue with Iron.io's IronMQ push queues. The queue shouldn't fail, but in the unlikely event it does, there is a log.
See this post for reference: http://blog.iron.io/2013/05/laravel-4-ironmq-push-queues-insane.html
From memory, you get 10 million requests free per month with IronMQ.
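In Laravel 4 the setup is roughly this (a sketch; the project ID, token, and queue name are placeholders for your own credentials):

    // app/config/queue.php -- point the default queue at the iron driver
    'default' => 'iron',

    'connections' => array(
        'iron' => array(
            'driver'  => 'iron',
            'project' => 'your-project-id',
            'token'   => 'your-token',
            'queue'   => 'your-queue-name',
        ),
    ),

Then push jobs as usual and expose a route for IronMQ to push each message back to:

    // Somewhere in your app:
    Queue::push('SendCurlRequest', array('url' => 'http://example.com'));

    // app/routes.php -- endpoint IronMQ POSTs messages to
    Route::post('queue/receive', function () {
        return Queue::marshal();
    });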
3 Comments
Farzher
I tried push queues and they're amazing, but if my server goes down, IronMQ eventually trashes the messages after a few retries. It's hard to keep track of what's going on; they don't even track the attempt count per message. I tried for a while to make it robust, couldn't figure it out. /:
Dave Ganley
If the server is down, the push messages don't get deleted, even after the max number of retries. I had this happen recently when a third party caused the timeouts and backed up thousands of messages. I just manually pulled the queue and processed them once the third party was working properly again. I also logged the messages outbound and marked them complete after processing on the inbound side.
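The bookkeeping was nothing fancy, roughly this (table and column names here are just my sketch, not anything official):

    // When dispatching, record the outbound message first:
    $id = DB::table('outbound_messages')->insertGetId(array(
        'payload'    => json_encode($payload),
        'created_at' => date('Y-m-d H:i:s'),
    ));
    Queue::push('SendCurlRequest', array('message_id' => $id, 'url' => $url));

    // In the job handler, mark it complete once it succeeds:
    DB::table('outbound_messages')
        ->where('id', $data['message_id'])
        ->update(array('completed_at' => date('Y-m-d H:i:s')));

    // Anything with completed_at still NULL is a candidate for manual replay.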
Farzher
So you're saying that after max retries they just stop pushing, but the messages will still be there if you pull the queue? If that's true, that might work, although I'll have to process the failed ones synchronously. Maybe I'll just delete them and send new push requests haha. I'll mark this as correct if that's true (:
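Something like this with the iron_mq PHP client, maybe (names and credentials are placeholders, and I'm going from the client's docs, I haven't run this yet):

    // Pull leftover messages and re-push them for another attempt.
    $ironmq = new IronMQ(array(
        'project_id' => 'your-project-id',
        'token'      => 'your-token',
    ));
    while ($msg = $ironmq->getMessage('your-queue-name')) {
        $payload = json_decode($msg->body, true);
        Queue::push('SendCurlRequest', $payload); // fresh push = fresh retries
        $ironmq->deleteMessage('your-queue-name', $msg->id);
    }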