Apache HTTP Server Documentation Version 2.0
…really annoying robot from retrieving pages of a specific web area? A /robots.txt file containing entries of the "Robot Exclusion Protocol" is typically not enough to get rid of such a robot. (2.20. URL REWRITING) …an indexed area where the robot traversal would create big server load. We have to make sure that we forbid access only to the particular robot, i.e. just forbidding the host where the robot runs is not enough. …header. The following two lines have the same effect: BrowserMatchNoCase Robot is_a_robot / SetEnvIfNoCase User-Agent Robot is_a_robot. Some additional examples: BrowserMatch ^Mozilla forms jpeg=yes browser=netscape
682 pages | 2.05 MB | 1 year ago
httpd 2.2.29 Chinese Documentation
…header. The following two lines have the same effect: BrowserMatchNoCase Robot is_a_robot / SetEnvIfNoCase User-Agent Robot is_a_robot. Some additional examples: BrowserMatch ^Mozilla forms jpeg=yes browser=netscape …SetEnvIf Directive. Description: Sets environment variables based on …In this recipe, we discuss how to block persistent requests from a particular robot, or user agent. The standard for robot exclusion defines a file, /robots.txt, that specifies those portions of your website…
1854 pages | 1.48 MB | 1 year ago
httpd 2.2.27 Chinese Documentation
1849 pages | 1.47 MB | 1 year ago
httpd 2.2.31 Chinese Documentation
1860 pages | 1.48 MB | 1 year ago
httpd 2.2.27.dev Chinese Documentation
1849 pages | 1.47 MB | 1 year ago
httpd 2.2.32 Chinese Documentation
1866 pages | 1.48 MB | 1 year ago
Apache HTTP Server Documentation Version 2.2
In this recipe, we discuss how to block persistent requests from a particular robot, or user agent. The standard for robot exclusion defines a file, /robots.txt, that specifies those portions of your website …protected, and the client USER_AGENT that identifies the malicious or persistent robot. In this example, we are blocking a robot called NameOfBadRobot from a location /secret/files. You may also specify an …header. The following two lines have the same effect: BrowserMatchNoCase Robot is_a_robot / SetEnvIfNoCase User-Agent Robot is_a_robot. Some additional examples: BrowserMatch ^Mozilla forms jpeg=yes browser=netscape
805 pages | 2.51 MB | 1 year ago
httpd 2.4.8.dev Chinese Documentation
2404 pages | 1.84 MB | 1 year ago
httpd 2.4.23 Chinese Documentation
2559 pages | 2.11 MB | 1 year ago
httpd 2.4.9 Chinese Documentation
2398 pages | 1.84 MB | 1 year ago
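The excerpts above all describe the same technique: mark a robot's requests with an environment variable using SetEnvIfNoCase (BrowserMatchNoCase is shorthand for the same test against the User-Agent header), then deny access based on that variable. A minimal sketch in httpd 2.4 syntax, reusing the NameOfBadRobot and /secret/files names that appear in the excerpts (everything else here is illustrative, not a quote from the documentation):

```apache
# Set the env var "is_a_robot" when the User-Agent header contains
# "NameOfBadRobot" (case-insensitive). This is equivalent to:
#   BrowserMatchNoCase NameOfBadRobot is_a_robot
SetEnvIfNoCase User-Agent "NameOfBadRobot" is_a_robot

# Deny requests carrying that variable access to the protected area.
<Location "/secret/files">
    <RequireAll>
        Require all granted
        Require not env is_a_robot
    </RequireAll>
</Location>
```

Under httpd 2.2 (the older documents listed above), the denial would instead use the mod_authz_host form: `Order Allow,Deny`, `Allow from all`, `Deny from env=is_a_robot`.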
23 results in total