urllib.parse is a basic, widely used URL parsing module relied on by a broad range of applications.
An issue in the urllib.parse component of Python before v3.11 allows attackers to bypass blocklisting methods by supplying a URL that starts with blank characters.
urlparse() mis-parses URLs that begin with blank characters. The problem affects parsing of both the hostname and the scheme, and ultimately causes any blocklist-based filtering method to fail.
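A minimal sketch of the failure mode (the URL and host here are illustrative placeholders, not taken from the report):

```python
from urllib.parse import urlparse

url = "\n https://example.com/"  # URL prefixed with blank characters

# On affected Python versions, the leading blanks kept urlparse() from
# recognising the scheme and hostname at all (both came back empty),
# so any blocklist keyed on them was silently bypassed. Stripping the
# input before parsing yields the expected result on every version:
parsed = urlparse(url.strip())
print(parsed.scheme, parsed.hostname)  # https example.com
```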
URL Parsing Security *
urlparse() APIs do not perform validation of inputs. They may not raise errors on inputs that other applications consider invalid. They may also succeed on some inputs that might not be considered URLs elsewhere. Their purpose is for practical functionality rather than purity.
Instead of raising an exception on unusual input, they may instead return some component parts as empty strings. Or components may contain more than perhaps they should.
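A small illustration of that leniency (the input string is arbitrary):

```python
from urllib.parse import urlparse

# No exception is raised for input that is not a usable URL;
# the "missing" components simply come back as empty strings,
# and the entire input lands in the .path component.
p = urlparse("example.com/path")
print(repr(p.scheme), repr(p.netloc), repr(p.path))
```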
We recommend that users of these APIs code defensively wherever the parsed values carry security implications. Do some verification within your code before trusting a returned component part. Does that scheme make sense? Is that a sensible path? Is there anything strange about that hostname?
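One way to apply that advice is sketched below. The allowlist and the helper name are assumptions for illustration, not part of the advisory or the standard library:

```python
from urllib.parse import urlparse, ParseResult

ALLOWED_SCHEMES = {"http", "https"}  # illustrative policy


def parse_checked(url: str) -> ParseResult:
    """Parse a URL and reject results that fail basic sanity checks."""
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES:
        raise ValueError(f"unexpected scheme: {parsed.scheme!r}")
    if not parsed.hostname:
        raise ValueError("no hostname present")
    return parsed
```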
What constitutes a URL is not universally well defined. Different applications have different needs and desired constraints. For instance, the living WHATWG spec describes what user-facing web clients such as a web browser require, while RFC 3986 is more general. These functions incorporate some aspects of both, but cannot be claimed compliant with either. The APIs, and existing user code with expectations on specific behaviors, predate both standards, leading us to be very cautious about making API behavior changes.
*Note: This was added as part of the documentation update in https://github.com/python/cpython/pull/102508
Due to this issue, attackers can bypass any domain or protocol filtering method implemented with a blocklist. Protocol-filtering failures can lead to arbitrary file reads, arbitrary command execution, SSRF, and other problems. Failure of domain-name filtering may allow renewed access to blocked malicious or dangerous websites, or defeat Referer-based CSRF protections.
Because this vulnerability sits in one of the most basic parsing libraries, more advanced attacks built on top of it are possible.
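The vulnerable pattern and a mitigation can be sketched as follows. The blocklists and host names are hypothetical; `str.strip()` approximates, for common whitespace, the WHATWG-style stripping of leading C0 control and space characters that the upstream fix performs:

```python
from urllib.parse import urlparse

BLOCKED_SCHEMES = {"file", "ftp"}  # hypothetical blocklist
BLOCKED_HOSTS = {"evil.example"}   # hypothetical blocklist


def is_allowed(url: str) -> bool:
    # Normalise first: strip the blank characters that triggered the
    # bug, so scheme and hostname are extracted consistently even on
    # unpatched interpreters, then apply the blocklist checks.
    parsed = urlparse(url.strip())
    if parsed.scheme in BLOCKED_SCHEMES:
        return False
    if parsed.hostname in BLOCKED_HOSTS:
        return False
    return True
```

Without the `strip()`, an unpatched interpreter would parse `"\nfile:///etc/passwd"` with an empty scheme, letting it slip past the blocklist even though a downstream fetcher would still treat it as a file:// URL.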
The fixes are in the following releases:
fixed in >= 3.12
fixed in 3.11.x >= 3.11.4
fixed in 3.10.x >= 3.10.12
fixed in 3.9.x >= 3.9.17
fixed in 3.8.x >= 3.8.17
fixed in 3.7.x >= 3.7.17
Thanks to the reporter, Yebo Cao, for researching and reporting this vulnerability.
This document was written by Ben Koo.
Date First Published:
Date Last Updated: 2023-08-11 22:22 UTC