Celery v5.0.5 Documentation
Bundles: for using Elasticsearch as a result backend; for using Riak as a result backend; for using AWS DynamoDB as a result backend; for using Zookeeper as a message transport; for using SQLAlchemy as a result backend.
Amazon SQS: the broker URL is configured as 'sqs://ABCDEFGHIJKLMNOPQRST:ZYXK7NiynGlTogH8Nj+P9nlE73sq3@', where the URL format is sqs://aws_access_key_id:aws_secret_access_key@. Note that you must include the @ sign at the end and escape the credentials with safequote:
    aws_access_key = safequote("ABCDEFGHIJKLMNOPQRST")
    aws_secret_key = safequote("ZYXK7NiynG/TogH8Nj+P9nlE73sq3")
    broker_url = "sqs://{aws_access_key}:{aws_secret_key}@".format(
        aws_access_key=aws_access_key, ...
Celery v5.0.1 Documentation (same Amazon SQS configuration snippet)
Celery v5.0.2 Documentation (same Amazon SQS configuration snippet)
Celery v5.0.0 Documentation (same Amazon SQS configuration snippet)
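The Celery snippets above are cut off mid-call. As a minimal sketch, and assuming the placeholder credentials shown in the snippet and kombu's safequote helper, the complete configuration looks roughly like this (wiring the URL into an app instance is added purely for illustration):

    from celery import Celery
    from kombu.utils.url import safequote

    # URL-encode the credentials so characters such as "/" survive inside the URL.
    # The URL format is sqs://aws_access_key_id:aws_secret_access_key@ and the
    # trailing @ sign is required.
    aws_access_key = safequote("ABCDEFGHIJKLMNOPQRST")
    aws_secret_key = safequote("ZYXK7NiynG/TogH8Nj+P9nlE73sq3")

    broker_url = "sqs://{aws_access_key}:{aws_secret_key}@".format(
        aws_access_key=aws_access_key,
        aws_secret_key=aws_secret_key,
    )

    # Hypothetical app name; any Celery application can consume the broker URL.
    app = Celery("tasks", broker=broker_url)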
Scrapy 1.0 Documentation
It can also be used to extract data using APIs (such as Amazon Associates Web Services, http://aws.amazon.com/associates/) or as a general purpose web crawler. Walk-through of an example spider. You can change the export format (XML or CSV, for example) or the storage backend (FTP or Amazon S3, http://aws.amazon.com/s3/, for example). You can also write an item pipeline to store the items in a database.
Selector example (truncated in the snippet):
    ....:         <li>4</li>
    ....:         <li>5</li>
    ....:         <li>6</li>
    ....: """)
    >>> xp = lambda x: sel.xpath(x).extract()
This gets all first <li> elements under whatever their parent is: >>> ...
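The selector fragment above is truncated; the following is a self-contained reconstruction of the kind of example it shows, using Scrapy 1.0's .extract() API (the surrounding two-list HTML and the two example queries are assumptions based on the visible fragment):

    from scrapy.selector import Selector

    sel = Selector(text="""
        <ul class="list">
            <li>1</li>
            <li>2</li>
            <li>3</li>
        </ul>
        <ul class="list">
            <li>4</li>
            <li>5</li>
            <li>6</li>
        </ul>
    """)

    xp = lambda x: sel.xpath(x).extract()

    # //li[1] selects every <li> that is the first child of its parent,
    # so it returns one element per list:
    print(xp("//li[1]"))    # ['<li>1</li>', '<li>4</li>']

    # (//li)[1] first collects all <li> elements in the document and then
    # takes the first one overall:
    print(xp("(//li)[1]"))  # ['<li>1</li>']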
Scrapy 1.7 Documentationchange the export format (XML or CSV, for example) or the storage backend (FTP or Amazon S3 [https://aws.amazon.com/s3/], for example). You can also write an item pipeline to store the items in a database- 4
....:- 5
....:- 6
....: """) >>> xp = lambda x: sel.xpath(x).getall() This gets all first- elements under whatever it is its parent: >>> stop_on_none=False. Example: >>> from scrapy.loader.processors import Compose >>> proc = Compose(lambda v: v[0], str.upper) >>> proc(['hello', 'world']) 'HELLO' Each function can optionally receive a
0 码力 | 391 页 | 598.79 KB | 1 年前3
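A self-contained version of the Compose example shown in the snippet; the stop_on_none variant at the end is an assumption based on the truncated "stop_on_none=False" fragment:

    from scrapy.loader.processors import Compose

    # Compose chains the given callables: the output of each function is passed
    # to the next one. By default the chain stops if a function returns None.
    proc = Compose(lambda v: v[0], str.upper)
    print(proc(['hello', 'world']))  # HELLO

    # Passing stop_on_none=False keeps calling the remaining functions even if
    # one of them returns None.
    proc_keep_going = Compose(lambda v: v[0], str.upper, stop_on_none=False)
    print(proc_keep_going(['hello', 'world']))  # HELLO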
Scrapy 0.12 Documentation
It can also be used to extract data using APIs (such as Amazon Associates Web Services, http://aws.amazon.com/associates/) or as a general purpose web crawler. The purpose of this document is to introduce ... You can change the export format (XML or CSV, for example) or the storage backend (FTP or Amazon S3, http://aws.amazon.com/s3/, for example). You can also write an item pipeline to store the items in a database.
Link extractors: the process_value callback can return a new value, or return None to ignore the link altogether. If not given, process_value defaults to lambda x: x. For example, to extract links from this code: ...
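The 0.12 snippet stops right before its code sample. Here is a sketch of the kind of process_value callback that section describes, assuming a javascript:-style href as in the link-extractor docs, and shown with the modern scrapy.linkextractors.LinkExtractor import (Scrapy 0.12 itself shipped SgmlLinkExtractor):

    import re

    from scrapy.linkextractors import LinkExtractor

    def process_value(value):
        # Pull the target URL out of hrefs such as
        #   javascript:goToPage('../other/page.html'); return false
        # Returning None would make the extractor ignore the link entirely.
        m = re.search(r"javascript:goToPage\('(.*?)'", value)
        if m:
            return m.group(1)
        return value

    link_extractor = LinkExtractor(process_value=process_value)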
Scrapy 1.6 Documentationchange the export format (XML or CSV, for example) or the storage backend (FTP or Amazon S3 [https://aws.amazon.com/s3/], for example). You can also write an item pipeline to store the items in a database- 4
....:- 5
....:- 6
....: """) >>> xp = lambda x: sel.xpath(x).getall() This gets all first- elements under whatever it is its parent: >>> stop_on_none=False. Example: >>> from scrapy.loader.processors import Compose >>> proc = Compose(lambda v: v[0], str.upper) >>> proc(['hello', 'world']) 'HELLO' Each function can optionally receive a
0 码力 | 374 页 | 581.88 KB | 1 年前3
Scrapy 1.4 Documentationchange the export format (XML or CSV, for example) or the storage backend (FTP or Amazon S3 [https://aws.amazon.com/s3/], for example). You can also write an item pipeline to store the items in a database- 4
....:- 5
....:- 6
....: """) >>> xp = lambda x: sel.xpath(x).extract() This gets all first- elements under whatever it is its parent: >>> stop_on_none=False. Example: >>> from scrapy.loader.processors import Compose >>> proc = Compose(lambda v: v[0], str.upper) >>> proc(['hello', 'world']) 'HELLO' Each function can optionally receive a
0 码力 | 394 页 | 589.10 KB | 1 年前3
Scrapy 1.1 Documentationchange the export format (XML or CSV, for example) or the storage backend (FTP or Amazon S3 [https://aws.amazon.com/s3/], for example). You can also write an item pipeline to store the items in a database- 4
....:- 5
....:- 6
....: """) >>> xp = lambda x: sel.xpath(x).extract() This gets all first- elements under whatever it is its parent: >>> stop_on_none=False. Example: >>> from scrapy.loader.processors import Compose >>> proc = Compose(lambda v: v[0], str.upper) >>> proc(['hello', 'world']) 'HELLO' Each function can optionally receive a
0 码力 | 322 页 | 582.29 KB | 1 年前3
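Every Scrapy entry above mentions writing an item pipeline to store the items in a database. A minimal sketch of such a pipeline follows; the SQLite file name, table, and field names are invented for illustration, and the class would still need to be enabled through the ITEM_PIPELINES setting:

    import sqlite3

    class SQLitePipeline:
        """Store scraped items in a local SQLite database."""

        def open_spider(self, spider):
            self.connection = sqlite3.connect("items.db")
            self.connection.execute(
                "CREATE TABLE IF NOT EXISTS items (name TEXT, price TEXT)"
            )

        def close_spider(self, spider):
            self.connection.commit()
            self.connection.close()

        def process_item(self, item, spider):
            # Assumes dict-like items with hypothetical "name" and "price" fields.
            self.connection.execute(
                "INSERT INTO items (name, price) VALUES (?, ?)",
                (item.get("name"), item.get("price")),
            )
            return item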