Scrapy 2.11 Documentation: Spiders

Coroutines: use the coroutine syntax [https://docs.python.org/3/reference/compound_stmts.html#async]. asyncio: use asyncio [https://docs.python.org/3/library/asyncio.html#module-asyncio] and asyncio-powered libraries.

In the scraping cycle, requests are downloaded by Scrapy and their responses are handled by the specified callback. In callback functions, you parse the page contents, typically using Selectors (but you can also use BeautifulSoup, lxml, or whatever mechanism you prefer) and generate items with the parsed data. A minimal coroutine callback is sketched below.
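A minimal sketch of such a callback. The spider name, the quotes.toscrape.com demo site, and the CSS selectors are illustrative only, and awaiting asyncio code assumes the asyncio reactor is enabled via the TWISTED_REACTOR setting:

    import asyncio

    import scrapy


    class QuotesSpider(scrapy.Spider):
        name = "quotes"  # hypothetical spider name
        start_urls = ["https://quotes.toscrape.com"]
        # Assumes settings.py sets:
        # TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"

        async def parse(self, response):
            await asyncio.sleep(0.1)  # stand-in for any awaitable (API call, DB query, ...)
            for quote in response.css("div.quote"):
                yield {
                    "text": quote.css("span.text::text").get(),
                    "author": quote.css("small.author::text").get(),
                }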
When you pass text content as the argument to an XPath string function [https://www.w3.org/TR/xpath/all/#section-String-Functions], avoid using .//text() and use just . instead. This is because the expression .//text() yields a collection of text elements (a node-set), and when a node-set is converted to a string, as happens when it is passed to a string function such as contains() or starts-with(), only the text of its first element is used. The sketch below shows the difference.
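A short sketch of the difference, using markup where only part of the link text sits inside a child element:

    from scrapy import Selector

    sel = Selector(
        text='<a href="#">Click here to go to the <strong>Next Page</strong></a>'
    )

    # .//text() yields a node-set; converted to a string, it becomes the
    # first text element only ("Click here to go to the "), so no match:
    print(sel.xpath("//a[contains(.//text(), 'Next Page')]").getall())  # []

    # "." converts the <a> node itself to its full string-value, so it matches:
    print(sel.xpath("//a[contains(., 'Next Page')]").getall())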
XPath itself has no regular expression support, and the C library libxslt does not natively support EXSLT regular expressions, so lxml's implementation uses hooks to Python's re module. Thus, using regexp functions in your XPath expressions may add a small performance penalty. A sketch follows.
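A minimal sketch with hypothetical markup; Scrapy selectors come with the EXSLT re namespace pre-registered, so re:test() can be used directly:

    from scrapy import Selector

    sel = Selector(
        text="""
        <ul>
            <li class="item-0"><a href="link1.html">first item</a></li>
            <li class="item-inactive"><a href="link2.html">second item</a></li>
        </ul>
        """
    )

    # re:test() is routed to Python's re module by lxml, hence the small
    # per-call overhead mentioned above.
    print(sel.xpath(r'//li[re:test(@class, "item-\d$")]/a/@href').getall())
    # ['link1.html']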
Set operations can be handy for excluding parts of a document tree before extracting text elements, for example. The EXSLT set namespace is likewise pre-registered; see the sketch after this paragraph.
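A minimal sketch with hypothetical markup: set:difference(A, B) returns the nodes of A that are not also in B, here the paragraph text outside the footer:

    from scrapy import Selector

    sel = Selector(
        text="""
        <div>
            <p>keep this</p>
            <footer><p>skip this</p></footer>
        </div>
        """
    )

    # Keep every <p> text node except those nested inside <footer>.
    print(sel.xpath("set:difference(//p/text(), //footer//p/text())").getall())
    # Expected: ['keep this']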
For Item Loaders, input and output processors must receive an iterable as their first argument, and the output of those functions can be anything. The result of an input processor is appended to an internal list (in the Loader) containing the collected values for that field; the result of the output processor is the value finally assigned to the item field. Both kinds of processor appear in the sketch below.
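A minimal sketch; the Product item, its name field, and the markup are hypothetical:

    import scrapy
    from itemloaders.processors import MapCompose, TakeFirst
    from scrapy import Selector
    from scrapy.loader import ItemLoader


    class Product(scrapy.Item):
        # The input processor runs on each extracted value and its results
        # are collected in the loader's internal list for this field; the
        # output processor turns that list into the final field value.
        name = scrapy.Field(
            input_processor=MapCompose(str.strip),
            output_processor=TakeFirst(),
        )


    sel = Selector(text="<html><body><h1>  Example product  </h1></body></html>")
    loader = ItemLoader(item=Product(), selector=sel)
    loader.add_xpath("name", "//h1/text()")
    item = loader.load_item()
    print(item["name"])  # "Example product"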