Apache Karaf Decanter 2.x - Documentation
2.2. Custom Collector
2.2.1. Event Driven Collector
2.2.2. Polled Collector
…the kind of notification that you want. Apache Karaf Decanter provides Karaf features for each collector, appender, and alerter. The first thing to do is to add the Decanter features repository in Karaf: …
Apache Karaf Decanter 1.x - Documentation
1.4.2. Alerters
2. Developer Guide
2.1. Architecture
2.2. Custom Collector
2.2.1. Event Driven Collector
2.2.2. Polled Collector
2.3. Custom Appender
2.4. Custom SLA Alerter
1. User Guide
1.1. …
…the kind of notification that you want. Apache Karaf Decanter provides Karaf features for each collector, appender, and SLA alerter. The first thing to do is to add the Decanter features repository in Karaf: … The collectors harvest the monitoring data and send this data to the Decanter appenders. Two kinds of collector are available:
• Event Driven Collectors react to events and "broadcast" the data to the appenders
• Polled Collectors are periodically executed by the Decanter scheduler and send the harvested data to the appenders
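The two styles differ only in who initiates the harvest: an event source pushes to the event-driven collector, while a scheduler periodically runs the polled one. As a rough illustration of that split (a language-neutral Python sketch, not Decanter's actual Java/OSGi API; every name in it is an assumption):

    import sched
    import time

    class EventDrivenCollector:
        """Reacts to events as they arrive and broadcasts the data onward."""
        def __init__(self, appenders):
            self.appenders = appenders

        def on_event(self, event):              # called by some event source
            for appender in self.appenders:     # "broadcast" to every appender
                appender.append(event)

    class PolledCollector:
        """Periodically harvests data and sends it to the appenders."""
        def __init__(self, appenders, harvest, period=60):
            self.appenders = appenders
            self.harvest = harvest              # callable returning a data dict
            self.period = period
            self.scheduler = sched.scheduler(time.time, time.sleep)

        def _run(self):
            data = self.harvest()
            for appender in self.appenders:
                appender.append(data)
            self.scheduler.enter(self.period, 1, self._run)  # reschedule

        def start(self):                        # blocks, harvesting every period
            self.scheduler.enter(self.period, 1, self._run)
            self.scheduler.run()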
Scrapy 0.9 Documentation
…per spider. It's called the Stats Collector, and it's a singleton which can be imported and used quickly, as illustrated by the examples in the Common Stats Collector uses section below. The stats collection is enabled by default but can be disabled through the STATS_ENABLED setting. However, the Stats Collector is always available, so you can always import it in your module and use its API (to increment or… …simplifying the stats collector usage: you should spend no more than one line of code for collecting stats in your spider, Scrapy extension, or whatever code you're using the Stats Collector from. Another feature…
Dependency Injection in C++
Data structure/Parameter consolidation for DI

    class Builder {
    public:
        virtual void build(const Tick& tick) const {
            Data info = collector_.getData(tick, bid_, ask_, localAsk_, localBid_);
            // ...
        }
    protected:
        // Sides info. The optional element types are cut off in the extracted
        // slides; double is an assumption, as is the exact member list.
        std::optional<double> bid_;
        std::optional<double> ask_;
        std::optional<double> localBid_;
        std::optional<double> localAsk_;
        Collector collector_;
    };

    // A derived builder whose getData() call has grown extra parameters.
    // Its name is cut off in the extracted slides; YieldBuilder is a guess.
    class YieldBuilder : public Builder {
    public:
        virtual void build(const Tick& tick) const override {
            Data info = collector_.getData(tick, bid_, ask_, localAsk_, localBid_,
                                           bidBroker_ /* remaining arguments truncated */);
            // ...
        }
    protected:
        std::optional<double> bidBroker_;
        std::optional<double> askYield_;
    };

    class Collector {
    public:
        // Basic Data
        Data getData(const Tick&, const std::optional<double>& bid,
                     const std::optional<double>& ask /* rest of signature truncated */);
    };
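The slide heading names the remedy: consolidate the ever-growing parameter list into one injected data structure, so getData() keeps a stable signature. A minimal sketch of that idea, written in Python for brevity since the original C++ types are truncated (SidesInfo and all field names here are assumptions):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SidesInfo:
        # One bundle instead of six positional optionals; adding a field no
        # longer changes any getData() signature or call site.
        bid: Optional[float] = None
        ask: Optional[float] = None
        local_bid: Optional[float] = None
        local_ask: Optional[float] = None
        bid_broker: Optional[str] = None
        ask_yield: Optional[float] = None

    class Collector:
        def get_data(self, tick, sides: SidesInfo) -> dict:
            # Signature stays stable however many optional fields exist.
            return {"tick": tick, "sides": sides}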
Scrapy 0.12 Documentation
…per spider. It's called the Stats Collector, and it's a singleton which can be imported and used quickly, as illustrated by the examples in the Common Stats Collector uses section below. The stats collection is enabled by default but can be disabled through the STATS_ENABLED setting. However, the Stats Collector is always available, so you can always import it in your module and use its API (to increment or… …simplifying the stats collector usage: you should spend no more than one line of code for collecting stats in your spider, Scrapy extension, or whatever code you're using the Stats Collector from. Another feature…
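Both the 0.9 and 0.12 entries above describe the same module-level singleton. A sketch of that one-line-per-stat usage style, assuming the pre-0.16 scrapy.stats module (the stat keys are illustrative):

    # Pre-0.16 style: import the singleton and spend one line per stat.
    from scrapy.stats import stats

    stats.inc_value('pages_crawled')            # bump a counter
    stats.set_value('hostname', 'crawler-01')   # record a value
    stats.max_value('max_items_scraped', 42)    # keep the maximum seen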
Scrapy 0.16, 0.18 and 0.22 Documentation
…Stats Collector, and can be accessed through the stats attribute of the Crawler API, as illustrated by the examples in the Common Stats Collector uses section below. However, the Stats Collector is always… …simplifying the stats collector usage: you should spend no more than one line of code for collecting stats in your spider, Scrapy extension, or whatever code you're using the Stats Collector from. Another feature of the Stats Collector is that it's very efficient (when enabled) and extremely efficient (almost unnoticeable) when disabled. The Stats Collector keeps a stats table per open spider which is…
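From 0.16 on, the stats collector hangs off the crawler rather than being a global singleton. A sketch of the access pattern the snippet alludes to, receiving crawler.stats in from_crawler (the extension class name and stat key are illustrative):

    # Scrapy 0.16+ style: receive the Stats Collector via crawler.stats.
    class StatsAwareExtension(object):
        def __init__(self, stats):
            self.stats = stats

        @classmethod
        def from_crawler(cls, crawler):
            return cls(crawler.stats)   # the crawler's Stats Collector

        def spider_opened(self, spider):
            self.stats.inc_value('custom/spider_opened_count')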













