Search for packages
| purl | pkg:maven/org.apache.zookeeper/zookeeper@3.5.10 |
|---|---|
| Vulnerability | Summary | Fixed by |
|---|---|---|
| VCID-d5ku-8mny-tfed<br>Aliases: CVE-2023-44981, GHSA-7286-pgfv-vxvh | Authorization Bypass Through User-Controlled Key vulnerability in Apache ZooKeeper. If SASL Quorum Peer authentication is enabled in ZooKeeper (`quorum.auth.enableSasl=true`), authorization is done by verifying that the instance part of the SASL authentication ID is listed in the zoo.cfg server list. The instance part of the SASL auth ID is optional, and if it is missing, as in `eve@EXAMPLE.COM`, the authorization check is skipped. As a result, an arbitrary endpoint could join the cluster and begin propagating counterfeit changes to the leader, essentially giving it complete read-write access to the data tree. Quorum Peer authentication is not enabled by default. Users are recommended to upgrade to version 3.9.1, 3.8.3, or 3.7.2, which fixes the issue. Alternatively, ensure the ensemble election/quorum communication is protected by a firewall, as this will mitigate the issue. See the documentation for more details on correct cluster administration. | Affected by 1 other vulnerability. Affected by 2 other vulnerabilities. Affected by 4 other vulnerabilities. |
| VCID-hzxz-sqmu-s7e1<br>Aliases: CVE-2021-21409, GHSA-f256-j965-7f32 | Possible request smuggling in HTTP/2 due to missing validation of Content-Length. **Impact:** The Content-Length header is not correctly validated if the request uses only a single `Http2HeaderFrame` with `endStream` set to true. This could lead to request smuggling if the request is proxied to a remote peer and translated to HTTP/1.1. This is a follow-up to https://github.com/netty/netty/security/advisories/GHSA-wm47-8v5p-wjpj, which missed this one case. **Patches:** This was fixed as part of 4.1.61.Final. **Workarounds:** Users can validate the header themselves before proxying the request. | Affected by 2 other vulnerabilities. |
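The ZooKeeper bypass in the first row comes down to how the SASL auth ID is parsed. A minimal, framework-free sketch of the corrected authorization decision, assuming a Kerberos-style `primary[/instance]@REALM` ID — the class and method names are hypothetical, not ZooKeeper's actual API:

```java
import java.util.Set;

public class QuorumAuthCheck {
    // Sketch of the authorization decision described above: only the
    // instance part of the SASL auth ID is matched against the zoo.cfg
    // server list. The vulnerable code skipped the check entirely when
    // the instance part was absent; the fixed behavior is to reject.
    static boolean isAuthorized(String saslAuthId, Set<String> serverList) {
        int slash = saslAuthId.indexOf('/');
        int at = saslAuthId.indexOf('@');
        if (slash < 0 || at < 0 || at < slash) {
            // No instance part, e.g. "eve@EXAMPLE.COM": reject instead
            // of skipping the check.
            return false;
        }
        String instance = saslAuthId.substring(slash + 1, at);
        return serverList.contains(instance);
    }
}
```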
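The workaround for the Netty issue in the second row — validating the header before proxying — can be sketched without any Netty types: a HEADERS frame with `endStream` set ends the stream, so no body can follow, and any non-zero Content-Length it declares can never be satisfied. The names below are hypothetical, not Netty's API:

```java
public class Http2HeaderValidation {
    // Sketch of the pre-proxy check: if the single HEADERS frame ends
    // the stream, a request body can never arrive, so a declared
    // Content-Length greater than zero is inconsistent and the request
    // should be rejected before it is downgraded to HTTP/1.1.
    static boolean isValid(boolean endStream, long declaredContentLength) {
        if (endStream && declaredContentLength > 0) {
            return false; // claims a body that can never arrive
        }
        return true;
    }
}
```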
| Vulnerability | Summary | Aliases |
|---|---|---|
| VCID-g3ff-brt6-vkeh | Jetty Utility Servlets ConcatServlet Double Decoding Information Disclosure Vulnerability. Requests to the `ConcatServlet` and `WelcomeFilter` are able to access protected resources within the `WEB-INF` directory. For example, a request to the `ConcatServlet` with a URI of `/concat?/%2557EB-INF/web.xml` can retrieve the web.xml file. This can reveal sensitive information regarding the implementation of a web application. This occurs because both `ConcatServlet` and `WelcomeFilter` decode the supplied path to verify it is not within the `WEB-INF` or `META-INF` directories, then use this decoded path to call `RequestDispatcher`, which also decodes the path. This double decoding allows paths with a doubly encoded `WEB-INF` to bypass the security check. **Impact:** This affects `ConcatServlet` and `WelcomeFilter` in all versions before 9.4.41, 10.0.3, and 11.0.3. **Workarounds:** If you cannot update to the latest version of Jetty, you can instead deploy your own version of the [`ConcatServlet`](https://github.com/eclipse/jetty.project/blob/4204526d2fdad355e233f6bf18a44bfe028ee00b/jetty-servlets/src/main/java/org/eclipse/jetty/servlets/ConcatServlet.java) and/or the [`WelcomeFilter`](https://github.com/eclipse/jetty.project/blob/4204526d2fdad355e233f6bf18a44bfe028ee00b/jetty-servlets/src/main/java/org/eclipse/jetty/servlets/WelcomeFilter.java) by using the code from the latest version of Jetty. | CVE-2021-28169<br>GHSA-gwcr-j4wh-j3cq |
| VCID-ug8h-p8kf-t7e1 | Possible request smuggling in HTTP/2 due to missing validation. **Impact:** If a Content-Length header is present in the original HTTP/2 request, the field is not validated by `Http2MultiplexHandler` as it is propagated up. This is fine as long as the request is not proxied through as HTTP/1.1. If the request comes in as an HTTP/2 stream, gets converted into the HTTP/1.1 domain objects (`HttpRequest`, `HttpContent`, etc.) via `Http2StreamFrameToHttpObjectCodec` and then sent up to the child channel's pipeline and proxied through a remote peer as HTTP/1.1, this may result in request smuggling. In a proxy case, users may assume the Content-Length is validated somehow, which is not the case. If the request is forwarded to a backend channel that is an HTTP/1.1 connection, the Content-Length now has meaning and needs to be checked. An attacker can smuggle requests inside the body as it gets downgraded from HTTP/2 to HTTP/1.1. A sample attack request looks like: `POST / HTTP/2`<br>`:authority: externaldomain.com`<br>`Content-Length: 4`<br>`asdfGET /evilRedirect HTTP/1.1`<br>`Host: internaldomain.com` Users are only affected if all of the following are true: `HTTP2MultiplexCodec` or `Http2FrameCodec` is used; `Http2StreamFrameToHttpObjectCodec` is used to convert to HTTP/1.1 objects; and these HTTP/1.1 objects are forwarded to another remote peer. **Patches:** This has been patched in 4.1.60.Final. **Workarounds:** Users can perform the validation themselves by implementing a custom `ChannelInboundHandler` that is put in the `ChannelPipeline` behind `Http2StreamFrameToHttpObjectCodec`. **References:** Related change to work around the problem: https://github.com/Netflix/zuul/pull/980 | CVE-2021-21295<br>GHSA-wm47-8v5p-wjpj |
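The Jetty double decoding in the first row can be reproduced with the JDK's `URLDecoder`: after one decode (the view the security check has), the path still does not literally contain `WEB-INF`, but a second decode (what `RequestDispatcher` performs) reveals it.

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

public class DoubleDecodeDemo {
    public static void main(String[] args) {
        String path = "/%2557EB-INF/web.xml";
        // First decode: %25 -> '%', so the security check still sees
        // "/%57EB-INF/web.xml", which does not contain "WEB-INF".
        String once = URLDecoder.decode(path, StandardCharsets.UTF_8);
        // Second decode: %57 -> 'W', revealing the protected path.
        String twice = URLDecoder.decode(once, StandardCharsets.UTF_8);
        System.out.println(once);  // /%57EB-INF/web.xml
        System.out.println(twice); // /WEB-INF/web.xml
    }
}
```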
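The sample attack in the second row works because the HTTP/1.1 backend honors the smuggled Content-Length. A minimal sketch, with hypothetical names and no Netty dependency, of what the backend effectively does with the downgraded stream:

```java
public class SmuggleDemo {
    // Illustrates the downgrade problem: an HTTP/1.1 backend trusts the
    // declared Content-Length, consumes exactly that many body bytes,
    // and parses whatever follows as the start of a second,
    // attacker-controlled request.
    static String leftoverAfterBody(int declaredContentLength, String streamPayload) {
        return streamPayload.substring(declaredContentLength);
    }

    public static void main(String[] args) {
        String payload = "asdfGET /evilRedirect HTTP/1.1\r\nHost: internaldomain.com\r\n";
        // With Content-Length: 4, the body is "asdf" and the remainder
        // begins with a new request line for the internal host.
        System.out.println(leftoverAfterBody(4, payload));
    }
}
```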