Istio-redirector: the way to go to manage thousands of HTTP redirections
Etienne Fontaine (@etifontaine), IstioCon

Istio-redirector: 301 redirection from /bus/routes/bruxelles/lille [...]

    spec:
      gateways:
      - istio-system/istio-ingressgateway
      hosts:
      - www.blablacar.fr
      http:
      - match:
        - uri:
            exact: /co2
        redirect:
          uri: /blablalife/lp/zeroemptyseats

Running in production without any impact on performance! Check it out on GitHub (https://github.com/blablacar/istio-redirector) and leave a star ⭐. How can we use istio-redirector…
Istio audit report - ADA Logics - 2023-01-30 - v1.0

"…NewHandler in an http.MaxBytesHandler." John found that when the recommended MaxBytesHandler was used, the request body was not fully consumed, meaning that when a server attempts to read HTTP/2 frames from the connection it will instead be reading the body. As such, the MaxBytesHandler introduces an HTTP request smuggling attack vector. The issue was disclosed to the Golang security team, who fixed the […]

In scope of the audit:
- https://github.com/istio/istio (Golang)
- Istio API definitions: https://github.com/istio/api (Golang)
- Istio documentation: https://github.com/istio/istio…
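For context, here is a minimal sketch of the pattern the audit discusses: serving h2c (HTTP/2 over cleartext) with the handler wrapped in http.MaxBytesHandler, as the Go documentation recommended. It assumes the golang.org/x/net module is available; the port and the 1 MiB limit are illustrative, not taken from the report.

    package main

    import (
        "io"
        "net/http"

        "golang.org/x/net/http2"
        "golang.org/x/net/http2/h2c"
    )

    func main() {
        h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            io.WriteString(w, "ok\n")
        })

        // h2c.NewHandler serves HTTP/2 over cleartext connections.
        handler := h2c.NewHandler(h, &http2.Server{})

        // The recommended mitigation: cap request bodies, here at 1 MiB.
        // Per the audit, before the upstream Go fix this exact wrapping left
        // the body unconsumed, so later reads expecting HTTP/2 frames read
        // body bytes instead - an HTTP request smuggling vector.
        capped := http.MaxBytesHandler(handler, 1<<20)

        _ = http.ListenAndServe(":8080", capped) // illustrative port
    }
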
Apache Kyuubi 1.6.1 Documentation

…multi-tenant JDBC interface for large-scale data processing and analytics, built on top of Apache Spark™ [http://spark.apache.org/]. In general, the complete ecosystem of Kyuubi falls into the hierarchies shown in the figure, with each layer loosely coupled to the others. For example, you can use Kyuubi, Spark and Apache Iceberg [https://iceberg.apache.org/] to build and manage a Data Lake with pure SQL, for both data processing (e.g. ETL) and […] multi-tenancy, and this is why we want to create this project despite the fact that the Spark Thrift JDBC/ODBC server [http://spark.apache.org/docs/latest/sql-distributed-sql-engine.html#running-the-thrift-jdbcodbc-server] …
Apache Kyuubi 1.6.0 Documentation

…multi-tenant JDBC interface for large-scale data processing and analytics, built on top of Apache Spark™ [http://spark.apache.org/]. In general, the complete ecosystem of Kyuubi falls into the hierarchies shown in the figure, with each layer loosely coupled to the others. For example, you can use Kyuubi, Spark and Apache Iceberg [https://iceberg.apache.org/] to build and manage a Data Lake with pure SQL, for both data processing (e.g. ETL) and […] multi-tenancy, and this is why we want to create this project despite the fact that the Spark Thrift JDBC/ODBC server [http://spark.apache.org/docs/latest/sql-distributed-sql-engine.html#running-the-thrift-jdbcodbc-server] …
Dapr September 2023 security audit report

…were in scope of the audit:

- https://github.com/dapr/dapr (Go)
- https://github.com/dapr/components-contrib (Go)
- https://github.com/dapr/kit (Go)

Dapr runs as a separate sidecar process. In both cases, the application and Dapr interact through HTTP or gRPC calls. If the user has multiple applications running with Dapr, each has a sidecar next to […] building microservice-based applications. The user application interacts with these components via HTTP or gRPC API endpoints. Dapr groups components into building blocks; a building block is a high-level…
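To make that sidecar interaction concrete, here is a minimal sketch in Go that saves a key/value pair through the sidecar's HTTP state API. The port 3500 (Dapr's default HTTP port) and the component name "statestore" are assumptions for illustration, not details from the report.

    package main

    import (
        "bytes"
        "fmt"
        "net/http"
    )

    func main() {
        // Save a key/value pair via the Dapr sidecar's state API.
        // "3500" is Dapr's default HTTP port and "statestore" is a
        // hypothetical component name - adjust both for a real deployment.
        body := bytes.NewBufferString(`[{"key": "greeting", "value": "hello"}]`)
        resp, err := http.Post(
            "http://localhost:3500/v1.0/state/statestore",
            "application/json",
            body,
        )
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status) // expect 204 No Content on success
    }
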
Apache Karaf Container 4.x - Documentation

…Commands
4.16.3. obr:start
4.16.4. JMX ObrMBean
4.16.5. Apache Karaf Cave
4.17. Enterprise
4.17.1. Http Service
4.17.2. WebContainer (JSP/Servlet)
4.17.3. Naming (JNDI)
4.17.4. Transaction (JTA)
[…]
4.18. Management using JMX
4.18.1. Connecting
4.18.2. Configuration
4.18.3. MBeans
4.18.4. RBAC
4.18.5. JMX-HTTP bridge with Jolokia
4.18.6. Apache Karaf Decanter
4.19. WebConsole
4.19.1. Installation
4.19.2. […]
…Custom JMX MBean
5.23. Working with profiles
5.24. Security & JAAS
5.25. Servlet
5.26. WAR
5.27. HTTP Resources
5.28. REST service
5.29. SOAP service
5.30. Websocket
5.31. Scheduler
5.32. Quick example
Docker 从入门到实践 (Docker: From Beginner to Practice) 0.9.0 (2017-12-31)

…submit a Pull Request on GitHub, add labels, and invite the maintainers to review. Periodically update your own repository from the project repository:

    $ git remote add upstream https://github.com/yeasy/docker_practice
    $ git fetch upstream
    $ git rebase upstream/master
    $ git push

Installing with APT (Ubuntu): the apt source is served over HTTPS to ensure packages are not tampered with during download, so we first need to install the packages that let apt transfer over HTTPS, plus the CA certificates:

    $ sudo apt-get update
    $ sudo apt-get install \
        apt-transport-https \
        ca-certificates \
        curl

To verify the legitimacy of the downloaded packages, add the repository's GPG key:

    $ curl -fsSL https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
    # official source:
    # $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Apache Kyuubi 1.9.0-SNAPSHOT Documentation

…Apache Spark [https://spark.apache.org/], Flink [https://flink.apache.org/], Doris [https://doris.apache.org/], Hive [https://hive.apache.org/], Trino [https://trino.io/], and StarRocks [https://www.starrocks…], […] components above to build a modern data stack. For example, you can use Kyuubi, Spark and Iceberg [https://iceberg.apache.org/] to build and manage a Data Lakehouse with pure SQL for both data processing, […] HiveServer2-compatible interface that allows end users to use a thrift client (cross-language support, both TCP and HTTP), a Java Database Connectivity (JDBC) interface over thrift, or an Open Database Connectivity (ODBC)…
Apache Kyuubi 1.7.0-rc0 Documentation

…frameworks, e.g., Apache Spark [https://spark.apache.org/], Flink [https://flink.apache.org/], Doris [https://doris.apache.org/], Hive [https://hive.apache.org/], and Trino [https://trino.io/], etc., to query […] components above to build a modern data stack. For example, you can use Kyuubi, Spark and Iceberg [https://iceberg.apache.org/] to build and manage a Data Lakehouse with pure SQL for both data processing, […] HiveServer2-compatible interface that allows end users to use a thrift client (cross-language support, both TCP and HTTP), a Java Database Connectivity (JDBC) interface over thrift, or an Open Database Connectivity (ODBC)…
Apache Kyuubi 1.8.1 Documentation

…frameworks, e.g., Apache Spark [https://spark.apache.org/], Flink [https://flink.apache.org/], Doris [https://doris.apache.org/], Hive [https://hive.apache.org/], and Trino [https://trino.io/], etc., to query […] components above to build a modern data stack. For example, you can use Kyuubi, Spark and Iceberg [https://iceberg.apache.org/] to build and manage a Data Lakehouse with pure SQL for both data processing, […] HiveServer2-compatible interface that allows end users to use a thrift client (cross-language support, both TCP and HTTP), a Java Database Connectivity (JDBC) interface over thrift, or an Open Database Connectivity (ODBC)…