# Search for packages
| purl | pkg:maven/io.netty/netty-codec-http2@4.1.5.Final |
|---|---|
| Next non-vulnerable version | 4.1.100.Final |
| Latest non-vulnerable version | 4.2.11.Final |
| Risk | 10.0 |
## Vulnerabilities affecting this package
### VCID-5781-s1ny-q7ey

Aliases: CVE-2023-44487, GHSA-2m7v-gc89-fjqf, GHSA-qppj-fm5r-hxr3, GHSA-vx74-f528-fxqg, GHSA-xpw8-rcwv-8f8p, GMS-2023-3377, VSV00013

Affected by 0 other vulnerabilities.
### VCID-9a4r-nbdk-37fu

Aliases: CVE-2020-11612, GHSA-mm9x-g8pc-w292

**Denial of Service in Netty.** The ZlibDecoders in Netty 4.1.x before 4.1.46 allow unbounded memory allocation while decoding a ZlibEncoded byte stream. An attacker could send a large ZlibEncoded byte stream to the Netty server, forcing the server to allocate all of its free memory to a single decoder.

Affected by 3 other vulnerabilities.
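The decompression-bomb mechanism behind this advisory can be sketched without Netty: a few kilobytes of highly redundant zlib data inflate to many megabytes, so a decoder that allocates the full decompressed size per stream can be driven out of memory. The sketch below uses Python's `zlib` for illustration; the `decode_with_cap` helper is a hypothetical stand-in for the output-size limit that, per the advisory's fix release, Netty 4.1.46 added to its decoders (it is not Netty code).

```python
import zlib

# A small compressed payload that inflates ~1000x: highly redundant
# input compresses to a few KB but decodes to 10 MB.
payload = zlib.compress(b"\x00" * 10_000_000)

# Unbounded decode (what the vulnerable versions effectively did):
# the full decompressed size is allocated for a single stream.
inflated = zlib.decompress(payload)
assert len(inflated) == 10_000_000

def decode_with_cap(data: bytes, cap: int):
    """Bounded decode: reject streams whose output would exceed `cap`,
    instead of allocating whatever the attacker's stream demands."""
    d = zlib.decompressobj()
    out = d.decompress(data, cap)
    if d.unconsumed_tail or not d.eof:
        return None  # would exceed cap: reject the stream
    return out

assert decode_with_cap(payload, 1_000_000) is None       # bomb rejected
assert decode_with_cap(payload, 20_000_000) == inflated  # normal stream ok
```

The point of the cap is that rejection happens before the allocation grows, so one malicious stream cannot consume the server's free memory.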
### VCID-hzxz-sqmu-s7e1

Aliases: CVE-2021-21409, GHSA-f256-j965-7f32

**Possible request smuggling in HTTP/2 due to missing validation of Content-Length.**

**Impact:** The Content-Length header is not correctly validated if the request uses only a single `Http2HeaderFrame` with endStream set to true. This could lead to request smuggling if the request is proxied to a remote peer and translated to HTTP/1.1. This is a follow-up to https://github.com/netty/netty/security/advisories/GHSA-wm47-8v5p-wjpj, which missed this one case.

**Patches:** This was fixed as part of 4.1.61.Final.

**Workarounds:** Users can validate the Content-Length header themselves before proxying the request.

Affected by 1 other vulnerability.
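The workaround the advisory describes amounts to checking, before proxying, that the declared Content-Length agrees with the body actually received. A minimal sketch of that check, assuming lower-cased header names in a plain dict (the function name and shapes are illustrative, not Netty API):

```python
def content_length_is_valid(headers: dict, body: bytes) -> bool:
    """Pre-proxy check: the declared Content-Length must match the
    body actually received, or the request must not carry the header."""
    declared = headers.get("content-length")
    if declared is None:
        return True  # nothing claimed, nothing to validate
    if not declared.isdigit():
        return False  # malformed length: reject
    return int(declared) == len(body)

# A request whose declared length disagrees with its body should be
# rejected rather than translated to HTTP/1.1 and proxied downstream.
assert content_length_is_valid({"content-length": "4"}, b"asdf")
assert not content_length_is_valid({"content-length": "4"}, b"asdfEXTRA")
```

In Netty terms this logic would live in a handler the user adds to the pipeline; the sketch only shows the invariant being enforced.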
### VCID-ug8h-p8kf-t7e1

Aliases: CVE-2021-21295, GHSA-wm47-8v5p-wjpj

**Possible request smuggling in HTTP/2 due to missing validation.**

**Impact:** If a Content-Length header is present in the original HTTP/2 request, the field is not validated by `Http2MultiplexHandler` as it is propagated up. This is fine as long as the request is not proxied through as HTTP/1.1. If the request comes in as an HTTP/2 stream, gets converted into the HTTP/1.1 domain objects (`HttpRequest`, `HttpContent`, etc.) via `Http2StreamFrameToHttpObjectCodec`, and is then sent up the child channel's pipeline and proxied to a remote peer as HTTP/1.1, this may result in request smuggling. In a proxy case, users may assume the Content-Length is validated somehow, which is not the case. If the request is forwarded to a backend channel that is an HTTP/1.1 connection, the Content-Length now has meaning and needs to be checked. An attacker can smuggle requests inside the body as it gets downgraded from HTTP/2 to HTTP/1.1. A sample attack request looks like:

```
POST / HTTP/2
:authority: externaldomain.com
Content-Length: 4

asdfGET /evilRedirect HTTP/1.1
Host: internaldomain.com
```

Users are only affected if all of the following are true:

* `Http2MultiplexCodec` or `Http2FrameCodec` is used
* `Http2StreamFrameToHttpObjectCodec` is used to convert to HTTP/1.1 objects
* These HTTP/1.1 objects are forwarded to another remote peer

**Patches:** This has been patched in 4.1.60.Final.

**Workarounds:** Users can perform the validation themselves by implementing a custom `ChannelInboundHandler` placed in the `ChannelPipeline` behind `Http2StreamFrameToHttpObjectCodec`.

**References:** Related change to work around the problem: https://github.com/Netflix/zuul/pull/980

Affected by 2 other vulnerabilities.
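Why the sample attack works after the HTTP/2-to-HTTP/1.1 downgrade can be shown with simple byte slicing: an HTTP/1.1 backend trusts the Content-Length header, consumes only that many body bytes, and parses whatever follows as a new request. The framing below is deliberately simplified (no real HTTP parser, not Netty code) and reuses the body from the advisory's sample request:

```python
# Body bytes as they arrive at the HTTP/1.1 backend after the downgrade,
# taken from the advisory's sample attack request.
raw_body = (
    b"asdfGET /evilRedirect HTTP/1.1\r\n"
    b"Host: internaldomain.com\r\n\r\n"
)
declared_length = 4  # the attacker-controlled Content-Length header

# An HTTP/1.1 peer honoring the header consumes exactly 4 body bytes...
consumed = raw_body[:declared_length]
leftover = raw_body[declared_length:]

assert consumed == b"asdf"
# ...and the leftover begins with a request line, which the backend
# treats as a second, smuggled request aimed at an internal host.
assert leftover.startswith(b"GET /evilRedirect HTTP/1.1")
```

This is why the fix (and the workaround) must validate the Content-Length before the request is re-framed as HTTP/1.1: once downgraded, the header controls message boundaries on the backend connection.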
## Vulnerabilities fixed by this package

| Vulnerability | Summary | Aliases |
|---|---|---|
| This package is not known to fix vulnerabilities. | | |