Celery 2.1 Documentation (463 pages, 861.69 KB, 1 year ago)
… the task until a worker server has consumed and executed it. Right now we have to check the worker log files to know what happened with the task. This is because we didn't keep the AsyncResult object returned … client, and not by a worker. … logfile: The log file; can be passed on to get_logger() to gain access to the worker's log file. See Logging. … loglevel: The current log level used. … delivery_info: Additional … keys in this mapping. … Logging: You can use the worker's logger to add diagnostic output to the worker log: class AddTask(Task): def run(self, x, y, **kwargs): logger = self.get_logger(**kwargs) …
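The code in that snippet is cut off mid-definition. A minimal sketch of the class-based pattern the 2.1 entry describes is shown below; it assumes Celery 2.1's celery.task.Task base class, and the log message and the x + y body are illustrative rather than quoted from the listing:

    from celery.task import Task

    class AddTask(Task):
        def run(self, x, y, **kwargs):
            # get_logger() accepts the loglevel/logfile keyword arguments the
            # worker passes to run() and returns a logger writing to the worker log.
            logger = self.get_logger(**kwargs)
            logger.info("Adding %s + %s" % (x, y))
            return x + y

Task subclasses were auto-registered by default in the 2.x series, so defining the class is normally enough for a worker that imports the module to pick it up.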
Celery 2.2 Documentation (505 pages, 878.66 KB, 1 year ago)
… the task until a worker server has consumed and executed it. Right now we have to check the worker log files to know what happened with the task. This is because we didn't keep the AsyncResult object returned … client, and not by a worker. … logfile: The file the worker logs to. See Logging. … loglevel: The current log level used. … delivery_info: Additional message delivery information. This is a mapping containing … add.request.kwargs)) … Logging: You can use the worker's logger to add diagnostic output to the worker log: @task def add(x, y): logger = add.get_logger() logger.info("Adding %s + %s" % (x, y)) return …
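The decorator-style example in the 2.2 snippet is likewise truncated. A sketch of the complete pattern, assuming the task decorator import path used in 2.2-era releases (the exact module varies across 2.x versions):

    from celery.task import task

    @task
    def add(x, y):
        # The task's own get_logger() returns a logger bound to the worker log.
        logger = add.get_logger()
        logger.info("Adding %s + %s" % (x, y))
        return x + y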
Celery 2.3 Documentation (530 pages, 900.64 KB, 1 year ago)
… the task until a worker server has consumed and executed it. Right now we have to check the worker log files to know what happened with the task. Applying a task returns an AsyncResult, if you have configured … client, and not by a worker. … logfile: The file the worker logs to. See Logging. … loglevel: The current log level used. … hostname: Hostname of the worker instance executing the task. … delivery_info: Additional … add.request.kwargs)) … Logging: You can use the worker's logger to add diagnostic output to the worker log: @task def add(x, y): logger = add.get_logger() logger.info("Adding %s + %s" % (x, y)) return …
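Several of these entries note that applying a task returns an AsyncResult, which is what lets the client check the outcome instead of digging through worker log files. A rough sketch of that usage, assuming a result backend is configured and an add task like the one above exists:

    result = add.delay(4, 4)     # send the task and keep the AsyncResult

    result.ready()               # True once a worker has finished the task
    result.get(timeout=10)       # wait for and return the result (8 here)
    result.successful()          # True if the task completed without raising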
Celery 2.4 Documentation (543 pages, 957.42 KB, 1 year ago)
… the task until a worker server has consumed and executed it. Right now we have to check the worker log files to know what happened with the task. Applying a task returns an AsyncResult, if you have configured … client, and not by a worker. … logfile: The file the worker logs to. See Logging. … loglevel: The current log level used. … hostname: Hostname of the worker instance executing the task. … delivery_info: Additional … add.request.kwargs)) … Logging: You can use the worker's logger to add diagnostic output to the worker log: @task def add(x, y): logger = add.get_logger() logger.info("Adding %s + %s" % (x, y)) return …
Celery 2.4 Documentation (395 pages, 1.54 MB, 1 year ago)
… the task until a worker server has consumed and executed it. Right now we have to check the worker log files to know what happened with the task. Applying a task returns an AsyncResult, if you have configured … client, and not by a worker. … logfile: The file the worker logs to. See Logging. … loglevel: The current log level used. … hostname: Hostname of the worker instance executing the task. … delivery_info: Additional … request.kwargs)) … Logging: You can use the worker's logger to add diagnostic output to the worker log: @task def add(x, y): logger = add.get_logger() …
Celery 3.1 Documentation (887 pages, 1.22 MB, 1 year ago)
… same pidfile and logfile arguments must be used when stopping. By default it will create pid and log files in the current directory, to protect against multiple workers launching on top of each other … $ mkdir -p /var/run/celery $ mkdir -p /var/log/celery $ celery multi start w1 -A proj -l info --pidfile=/var/run/celery/%n.pid --logfile=/var/log/celery/%n%I.log … With the multi command … CELERY_ENABLE_UTC setting). … logfile: The file the worker logs to. See Logging. … loglevel: The current log level used. … hostname: Hostname of the worker instance executing the task. … delivery_info: Additional …
Celery 3.1 Documentation (607 pages, 2.27 MB, 1 year ago)
… same pidfile and logfile arguments must be used when stopping. By default it will create pid and log files in the current directory, to protect against multiple workers launching on top of each other … $ mkdir -p /var/run/celery $ mkdir -p /var/log/celery $ celery multi start w1 -A proj -l info --pidfile=/var/run/celery/%n.pid --logfile=/var/log/celery/%n%I.log With the multi command you can start multiple … CELERY_ENABLE_UTC setting). … logfile: The file the worker logs to. See Logging. … loglevel: The current log level used. … hostname: Hostname of the worker instance executing the task. … delivery_info: Additional …
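The 3.1 entries list attributes of the task request context (hostname, delivery_info, and the logging-related fields). A minimal sketch of reading them from inside a task, assuming a Celery app instance named app and an illustrative broker URL; bind=True makes the task instance available as self:

    from celery import Celery

    app = Celery('proj', broker='amqp://')  # broker URL is illustrative

    @app.task(bind=True)
    def dump_context(self, x, y):
        # self.request carries the fields described above.
        print('Running on %s, delivery_info=%r' % (
            self.request.hostname, self.request.delivery_info))
        return x + y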
Celery 2.5 Documentation (647 pages, 1011.88 KB, 1 year ago)
… the task until a worker server has consumed and executed it. Right now we have to check the worker log files to know what happened with the task. Applying a task returns an AsyncResult, if you have configured … client, and not by a worker. … logfile: The file the worker logs to. See Logging. … loglevel: The current log level used. … hostname: Hostname of the worker instance executing the task. … delivery_info: Additional … add.request.kwargs)) … Logging: You can use the worker's logger to add diagnostic output to the worker log: @task def add(x, y): logger = add.get_logger() logger.info("Adding %s + %s" % (x, y)) return …
Celery v4.0.1 Documentation (1040 pages, 1.37 MB, 1 year ago)
… the same pidfile and logfile arguments must be used when stopping. By default it'll create pid and log files in the current directory, to protect against multiple workers launching on top of each other … $ mkdir -p /var/run/celery $ mkdir -p /var/log/celery $ celery multi start w1 -A proj -l info --pidfile=/var/run/celery/%n.pid --logfile=/var/log/celery/%n%I.log … With the multi command … practice is to create a common logger for all of your tasks at the top of your module: from celery.utils.log import get_task_logger logger = get_task_logger(__name__) @app.task def add(x, y): logger.info('Adding …
Celery v4.0.2 Documentation (1042 pages, 1.37 MB, 1 year ago)
… the same pidfile and logfile arguments must be used when stopping. By default it'll create pid and log files in the current directory, to protect against multiple workers launching on top of each other … $ mkdir -p /var/run/celery $ mkdir -p /var/log/celery $ celery multi start w1 -A proj -l info --pidfile=/var/run/celery/%n.pid --logfile=/var/log/celery/%n%I.log … With the multi command … practice is to create a common logger for all of your tasks at the top of your module: from celery.utils.log import get_task_logger logger = get_task_logger(__name__) @app.task def add(x, y): logger.info('Adding …
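The 4.0 entries recommend creating one module-level logger shared by all tasks via get_task_logger. A sketch of that pattern, again assuming an app instance and an illustrative broker URL:

    from celery import Celery
    from celery.utils.log import get_task_logger

    app = Celery('proj', broker='amqp://')  # broker URL is illustrative

    # One logger for every task defined in this module.
    logger = get_task_logger(__name__)

    @app.task
    def add(x, y):
        logger.info('Adding %s + %s', x, y)
        return x + y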
51 results in total.