Celery 1.0 Documentation
…your dad’s laptop while the queue is temporarily overloaded). Concurrency Tasks are executed in parallel using the multiprocessing module. Scheduling Supports recurring tasks like cron, or specifying … That’s it. There are more options available, like how many processes you want to process work in parallel (the CELERY_CONCURRENCY setting), and we could use a persistent result store backend, but for now … task: >>> result = add.delay(4, 4) >>> result.ready() # returns True if the task has finished processing. False >>> result.result # task is not ready, so no return value yet. None >>> result.get() # Waits…
0 points | 123 pages | 400.69 KB | 1 year ago
Celery 1.0 Documentation
…your dad’s laptop while the queue is temporarily overloaded). Concurrency Tasks are executed in parallel using the multiprocessing module. Scheduling Supports recurring tasks like cron, or specifying … That’s it. There are more options available, like how many processes you want to process work in parallel (the CELERY_CONCURRENCY setting), and we could use a persistent result store backend, but for now … task: >>> result = add.delay(4, 4) >>> result.ready() # returns True if the task has finished processing. False >>> result.result # task is not ready, so no return value yet. None >>> result.get()…
0 points | 221 pages | 283.64 KB | 1 year ago
Celery 2.3 Documentation
…it. There are more options available, like how many processes you want to use to process work in parallel (the CELERY_CONCURRENCY setting), and we could use a persistent result store backend, but for now … what you can do when you have results: >>> result.ready() # returns True if the task has finished processing. False >>> result.result # task is not ready, so no return value yet. None >>> result.get() # … running tasks. With smaller tasks you can process more tasks in parallel and the tasks won’t run long enough to block the worker from processing other waiting tasks. However, executing a task does have overhead…
0 points | 334 pages | 1.25 MB | 1 year ago
Celery 2.0 Documentation
…your dad’s laptop while the queue is temporarily overloaded). Concurrency Tasks are executed in parallel using the multiprocessing module. Scheduling Supports recurring tasks like cron, or specifying … That’s it. There are more options available, like how many processes you want to process work in parallel (the CELERY_CONCURRENCY setting), and we could use a persistent result store backend, but for now … AsyncResult: >>> result = add.delay(4, 4) >>> result.ready() # returns True if the task has finished processing. False >>> result.result # task is not ready, so no return value yet. None >>> result.get() #…
0 points | 165 pages | 492.43 KB | 1 year ago
Celery 2.3 Documentation
…it. There are more options available, like how many processes you want to use to process work in parallel (the CELERY_CONCURRENCY setting), and we could use a persistent result store backend, but for now … you can do when you have results: >>> result.ready() # returns True if the task has finished processing. False >>> result.result # task is not ready, so no return value yet. None >>> result.get() … running tasks. With smaller tasks you can process more tasks in parallel and the tasks won’t run long enough to block the worker from processing other waiting tasks. However, executing a task does have overhead…
0 points | 530 pages | 900.64 KB | 1 year ago
Celery 3.1 Documentation
…operations with the tools required to maintain such a system. It’s a task queue with focus on real-time processing, while also supporting task scheduling. Celery has a large and diverse community of users and … a task: >>> result = add.delay(4, 4) The ready() method returns whether the task has finished processing or not: >>> result.ready() False You can wait for the result to complete, but this is rarely … backend argument to Celery). Let’s look at some examples: Groups A group calls a list of tasks in parallel, and it returns a special result instance that lets you inspect the results as a group, and retrieve…
0 points | 887 pages | 1.22 MB | 1 year ago
Celery 2.0 Documentation
…your dad’s laptop while the queue is temporarily overloaded). Concurrency Tasks are executed in parallel using the multiprocessing module. Scheduling Supports recurring tasks like cron, or specifying … That’s it. There are more options available, like how many processes you want to process work in parallel (the CELERY_CONCURRENCY setting), and we could use a persistent result store backend, but for now … AsyncResult: >>> result = add.delay(4, 4) >>> result.ready() # returns True if the task has finished processing. False >>> result.result # task is not ready, so no return value yet. None >>> result.get()…
0 points | 284 pages | 332.71 KB | 1 year ago
Celery 2.4 Documentation
…it. There are more options available, like how many processes you want to use to process work in parallel (the CELERY_CONCURRENCY setting), and we could use a persistent result store backend, but for now … what you can do when you have results: >>> result.ready() # returns True if the task has finished processing. False >>> result.result # task is not ready, so no return value yet. None >>> result.get() … running tasks. With smaller tasks you can process more tasks in parallel and the tasks won’t run long enough to block the worker from processing other waiting tasks. However, executing a task does have overhead…
0 points | 543 pages | 957.42 KB | 1 year ago
Celery 3.1 Documentation
…operations with the tools required to maintain such a system. It’s a task queue with focus on real-time processing, while also supporting task scheduling. Celery has a large and diverse community of users and … a task: >>> result = add.delay(4, 4) The ready() method returns whether the task has finished processing or not: >>> result.ready() False You can wait for the result to complete, but this is rarely … backend argument to Celery). Let’s look at some examples: Groups A group calls a list of tasks in parallel, and it returns a special result instance that lets you inspect the results as a group, and retrieve…
0 points | 607 pages | 2.27 MB | 1 year ago
Celery 2.4 Documentation
…it. There are more options available, like how many processes you want to use to process work in parallel (the CELERY_CONCURRENCY setting), and we could use a persistent result store backend, but for now … what you can do when you have results: >>> result.ready() # returns True if the task has finished processing. False >>> result.result # task is not ready, so no return value yet. None … 1.3. First steps with … running tasks. With smaller tasks you can process more tasks in parallel and the tasks won’t run long enough to block the worker from processing other waiting tasks. However, executing a task does have overhead…
0 points | 395 pages | 1.54 MB | 1 year ago
51 results in total
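Every version of the documentation listed above demonstrates the same task-result pattern: calling `add.delay(4, 4)` returns a result object whose `ready()`, `result`, and `get()` expose the task state. The following is a minimal, self-contained sketch of those semantics only; `FakeAsyncResult` is a hypothetical stand-in for illustration, not the real Celery `AsyncResult` API, and no broker or Celery installation is assumed.

```python
# Illustrative stub of the result-state semantics shown in the snippets:
# ready() is False and .result is None until the task finishes, and
# get() blocks for (here: simply computes) the return value.

class FakeAsyncResult:
    """Hypothetical stand-in for celery.result.AsyncResult (illustration only)."""

    def __init__(self):
        self._value = None
        self._done = False

    def ready(self):
        # True once the task has finished processing.
        return self._done

    @property
    def result(self):
        # None while the task is still pending; the return value afterwards.
        return self._value if self._done else None

    def get(self):
        # The real get() waits on the result backend; here we just "run" add(4, 4).
        if not self._done:
            self._value = 4 + 4
            self._done = True
        return self._value


result = FakeAsyncResult()
print(result.ready())   # False -- task has not finished processing
print(result.result)    # None  -- no return value yet
print(result.get())     # 8     -- waits for, then returns, the result
print(result.ready())   # True
```

With a real broker and result backend configured, the genuine `AsyncResult` returned by `add.delay(4, 4)` behaves the same way from the caller's perspective.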