Celery 2.2 Documentation (505 pages, 878.66 KB, 1 year ago)
The broker delivers tasks to the worker nodes. A worker node is a networked machine running celeryd; this can be one or more machines, depending on the workload. The result of the task can be stored … Task modules to import are given using the -I option to celeryd: $ celeryd -l info -I tasks,handlers. This can be a single module, or a comma-separated list of task modules to import when celeryd starts. Running the celery worker runs the worker server in the foreground, so we can see what’s going on in the terminal: $ celeryd --loglevel=INFO. In production you will probably want to run the worker in the background as a daemon.

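The -I option in the Celery 2.2 excerpt expects importable Python modules that define tasks. A minimal sketch of such a module, assuming a module named tasks with a trivial add task (both names are illustrative, not taken from the listed documents):

    # tasks.py -- loaded by the worker via: celeryd -l info -I tasks,handlers
    from celery.task import task

    @task
    def add(x, y):
        """A trivial task, executed by whichever worker consumes the message."""
        return x + y

A caller queues work with add.delay(4, 4); if a result backend is configured, the stored result can later be fetched from the returned AsyncResult.
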
Celery 2.1 Documentation (463 pages, 861.69 KB, 1 year ago)
(table-of-contents excerpt) … defaults · Example configuration file · Configuration Directives · Cookbook · Creating Tasks · Running celeryd as a daemon · Community Resources · Resources · News · Contributing · Community Code of Conduct · Reporting … worker · App: … Periodic Task Scheduler - celery.apps.beat · Base Command - celery.bin.base · celeryd - celery.bin.celeryd · Celery Periodic Task Server - celery.bin.celerybeat · celeryev: Curses Event Viewer - celery.bin.celeryev · … - celery.bin.celeryctl · camqadm: AMQP API Command-line Shell - celery.bin.camqadm · Celeryd Multi Tool - celery.bin.celeryd_multi · Internals · Celery Deprecation Timeline · Internals: The worker · Task Message …

Celery 2.0 Documentation (284 pages, 332.71 KB, 1 year ago)
(table-of-contents excerpt) Messaging settings · Task execution settings · Worker: celeryd · Periodic Task Server: celerybeat · Monitor Server: celerymon · Cookbook · Creating Tasks · Running celeryd as a daemon · Tutorials · Tutorials and resources … celery.bin.celeryd · Celery Periodic Task Server - celery.bin.celerybeat · celeryev: Curses Event Viewer - celery.bin.celeryev · camqadm: AMQP API Command-line Shell - celery.bin.camqadm · Celeryd Multi Tool - celery.bin.celeryd_multi · Internals · Celery Deprecation Timeline · Internals: The worker · Task Message Protocol · List of Worker Events · Module Index · Internal Module Reference · Change history · 2.0.3 · 2.0.2

Celery 2.3 Documentation (530 pages, 900.64 KB, 1 year ago)
The broker delivers tasks to the worker nodes. A worker node is a networked machine running celeryd; this can be one or more machines, depending on the workload. The result of the task can be stored … Task modules to import are given using the -I option to celeryd: $ celeryd -l info -I tasks,handlers. This can be a single module, or a comma-separated list of task modules to import when celeryd starts. Running the celery worker runs the worker server in the foreground, so we can see what’s going on in the terminal: $ celeryd --loglevel=INFO. In production you will probably want to run the worker in the background as a daemon.

Celery 2.5 Documentation (647 pages, 1011.88 KB, 1 year ago)
The broker delivers tasks to the worker nodes. A worker node is a networked machine running celeryd; this can be one or more machines, depending on the workload. The result of the task can be stored … Task modules to import are given using the -I option to celeryd: $ celeryd -l info -I tasks,handlers. This can be a single module, or a comma-separated list of task modules to import when celeryd starts. Running the celery worker runs the worker server in the foreground, so we can see what’s going on in the terminal: $ celeryd --loglevel=INFO. In production you will probably want to run the worker in the background as a daemon.

Celery 2.4 Documentation (543 pages, 957.42 KB, 1 year ago)
The broker delivers tasks to the worker nodes. A worker node is a networked machine running celeryd; this can be one or more machines, depending on the workload. The result of the task can be stored … Task modules to import are given using the -I option to celeryd: $ celeryd -l info -I tasks,handlers. This can be a single module, or a comma-separated list of task modules to import when celeryd starts. Running the celery worker runs the worker server in the foreground, so we can see what’s going on in the terminal: $ celeryd --loglevel=INFO. In production you will probably want to run the worker in the background as a daemon.

Celery 1.0 Documentation (221 pages, 283.64 KB, 1 year ago)
(table-of-contents excerpt) Messaging settings · Task execution settings · Worker: celeryd · Periodic Task Server: celerybeat · Monitor Server: celerymon · Cookbook · Creating Tasks · Running celeryd as a daemon · Unit Testing · Tutorials · External … contrib.abortable · Django Views - celery.views · Events - celery.events · Celery Worker Daemon - celery.bin.celeryd · Celery Periodic Task Server - celery.bin.celerybeat · Celery Initialize - celery.bin.celeryinit · camqadm: AMQP API Command-line Shell - celery.bin.camqadm
The broker pushes tasks to the worker servers. A worker server is a networked machine running celeryd; this can be one or more machines, depending on the workload. The result of the task can be stored …

Celery 3.1 Documentation (887 pages, 1.22 MB, 1 year ago)
… SoftTimeLimitExceeded: clean_up_in_a_hurry() … Time limits can also be set using the CELERYD_TASK_TIME_LIMIT / CELERYD_TASK_SOFT_TIME_LIMIT settings. Note: time limits do not currently work on Windows and … source C extensions. The option can be set using the worker’s --maxtasksperchild argument or the CELERYD_MAX_TASKS_PER_CHILD setting. Autoscaling (new in version 2.2; pool support: prefork, gevent) … include load average or the amount of memory available. You can specify a custom autoscaler with the CELERYD_AUTOSCALER setting. Queues: a worker instance can consume from any number of queues. By default …

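The "SoftTimeLimitExceeded: clean_up_in_a_hurry()" fragment in the Celery 3.1 excerpt is the tail of a try/except block from the workers guide. A minimal, self-contained sketch of that pattern, with an assumed app instance, a placeholder broker URL, and placeholder helper functions (none of which come from the listed documents):

    from celery import Celery
    from celery.exceptions import SoftTimeLimitExceeded

    app = Celery('proj', broker='amqp://')  # app name and broker URL are placeholders

    # Per-task limits; the CELERYD_TASK_TIME_LIMIT / CELERYD_TASK_SOFT_TIME_LIMIT
    # settings mentioned above set the same limits globally for all tasks.
    @app.task(soft_time_limit=10, time_limit=30)
    def process(records):
        try:
            for record in records:
                expensive_step(record)      # stand-in for the real work
        except SoftTimeLimitExceeded:
            clean_up_in_a_hurry()           # last chance to clean up before the hard limit fires

    def expensive_step(record):
        """Placeholder for the real per-record work."""

    def clean_up_in_a_hurry():
        """Placeholder: flush partial results, release locks, etc."""

When the soft limit expires, the worker raises SoftTimeLimitExceeded inside the task so cleanup code gets a chance to run; when the hard limit expires, the task is simply terminated.
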
Celery v4.1.0 Documentation (1057 pages, 1.35 MB, 1 year ago)
Daemonization: Generic init-scripts · Init-script: celeryd · Example configuration · Using a login shell · Example Django configuration · Available options · Init-script: … OpenBSD, and other Unix-like platforms. Init-script: celeryd. Usage: /etc/init.d/celeryd {start|stop|restart|status}. Configuration file: /etc/default/celeryd. To configure this script to run the worker properly … or your configuration module). The daemonization script is configured by the file /etc/default/celeryd. This is a shell (sh) script where you can add environment variables like the configuration options …

Celery v4.0.1 Documentation (1040 pages, 1.37 MB, 1 year ago)
Daemonization: Generic init-scripts · Init-script: celeryd · Example configuration · Using a login shell · Example Django configuration · Available options · Init-script: … OpenBSD, and other Unix-like platforms. Init-script: celeryd. Usage: /etc/init.d/celeryd {start|stop|restart|status}. Configuration file: /etc/default/celeryd. To configure this script to run the worker properly … or your configuration module). The daemonization script is configured by the file /etc/default/celeryd. This is a shell (sh) script where you can add environment variables like the configuration options …

149 results in total.