Scrapy 0.9 Documentation | 204 pages | 447.68 KB | 1 year ago
…are declared by creating a scrapy.item.Item class and defining its attributes as scrapy.item.Field objects, like you would in an ORM (don't worry if you're not familiar with ORMs, you will see that this is… …is in charge of processing the response and returning scraped data (as Item objects) and more URLs to follow (as Request objects). This is the code for our first Spider; save it in a file named dmoz_spider… …the content of both URLs. What just happened under the hood? Scrapy creates scrapy.http.Request objects for each URL in the start_urls attribute of the Spider, and assigns them the parse method of the… (a sketch of this Item/Spider pattern follows the results list below)
Scrapy 0.9 Documentation | 156 pages | 764.56 KB | 1 year ago
(snippet identical to the entry above)
Scrapy 0.14 Documentation | 235 pages | 490.23 KB | 1 year ago
(snippet identical to the Scrapy 0.9 entry above)
Scrapy 0.14 Documentation | 179 pages | 861.70 KB | 1 year ago
(snippet identical to the Scrapy 0.9 entry above)
Scrapy 0.12 Documentation | 228 pages | 462.54 KB | 1 year ago
(snippet identical to the Scrapy 0.9 entry above)
Scrapy 0.12 Documentation | 177 pages | 806.90 KB | 1 year ago
(snippet identical to the Scrapy 0.9 entry above)
Flask Documentation (1.1.x) | 428 pages | 895.98 KB | 1 year ago
…this part of the documentation is for you. API: Application Object, Blueprint Objects, Incoming Request Data, Response Objects, Sessions, Session Interface, Test Client, Test CLI Runner, Application Globals, Useful… …might find surprising or unorthodox. For example, Flask uses thread-local objects internally so that you don't have to pass objects around from function to function within a request in order to stay threadsafe… …World! {% endif %} Inside templates you also have access to the request, session and g [1] objects as well as the get_flashed_messages() function. Templates are especially useful if inheritance is… (see the Flask sketch after this list)
Flask Documentation (1.1.x) | 291 pages | 1.25 MB | 1 year ago
(the thread-local and template fragments match the entry above) …Flask this information is provided by the global request object. If you have some experience with Python you might be wondering how that object can be global and how Flask manages to still be threadsafe…
Scrapy 0.18 Documentation | 201 pages | 929.55 KB | 1 year ago
(snippet identical to the Scrapy 0.9 entry above)
Scrapy 0.16 Documentation | 203 pages | 931.99 KB | 1 year ago
(snippet identical to the Scrapy 0.9 entry above)
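The Scrapy snippets above all quote the same tutorial flow: declare an Item whose attributes are Field objects, then write a Spider whose parse method returns scraped Items and follow-up Requests, while Scrapy itself builds a scrapy.http.Request for every URL in start_urls. A minimal sketch of that pattern, assuming the 0.x-era API the snippets quote (the BaseSpider base class, the DmozItem field names, and the placeholder extraction here are illustrative, not the tutorial's exact code; Scrapy 0.9 itself used a domain_name attribute where later versions used allowed_domains):

from scrapy.item import Item, Field
from scrapy.spider import BaseSpider
from scrapy.http import Request


class DmozItem(Item):
    # Attributes are declared as Field objects, much like columns in an ORM.
    # (These two field names are assumptions for illustration.)
    title = Field()
    link = Field()


class DmozSpider(BaseSpider):
    name = "dmoz"
    # Scrapy creates a scrapy.http.Request for each of these URLs and
    # assigns the spider's parse() method as their callback.
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
    ]

    def parse(self, response):
        # parse() is in charge of processing the response and returning
        # scraped data (Item objects) and more URLs to follow (Request objects).
        item = DmozItem()
        item["title"] = response.url  # placeholder extraction
        item["link"] = response.url
        yield item
        # A real spider would extract a next_url from the page and follow it:
        # yield Request(next_url, callback=self.parse)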
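The Flask snippets describe the same mechanism from two angles: request is a thread-local proxy, global to import yet always bound to the request the current thread is handling, and templates can read request, session and g directly. A minimal sketch under those assumptions (the route, the inline template, and the secret key are illustrative, not from the quoted docs):

from flask import Flask, request, session, render_template_string

app = Flask(__name__)
app.secret_key = "dev"  # illustrative; session support requires a secret key


@app.route("/hello")
def hello():
    # `request` is a thread-local proxy: although imported globally, it
    # refers to the current thread's request, so views stay threadsafe
    # without passing the object from function to function.
    session["last_path"] = request.path
    # Templates have access to request and session without passing them in.
    return render_template_string(
        "Hello from {{ request.path }}; last seen {{ session['last_path'] }}"
    )


if __name__ == "__main__":
    app.run()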
518 results in total