Age | Commit message | Author |
|
This probably makes more sense if Verify-Peer is set to off.
|
|
This should make it easier to figure out what was
going on.
|
|
APT considered any response with a Content-Length header to have a
body, even if the value of the header was 0. A zero-length body,
however, is equal to no body.
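A minimal sketch of the corrected check (HandleContentLength and the
Response members are illustrative names, not the method's actual ones):

    #include <cstdlib>
    #include <string>

    struct Response {
       bool HaveContent = false;
       unsigned long long DownloadSize = 0;
    };

    // Treat "Content-Length: 0" the same as no Content-Length at all.
    void HandleContentLength(Response &Res, std::string const &Value) {
       Res.DownloadSize = std::strtoull(Value.c_str(), nullptr, 10);
       if (Res.DownloadSize > 0)
          Res.HaveContent = true;
    }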
|
|
We can actually just pass null as a hostname, so let's just
do that when Verify-Host is set to false.
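With GnuTLS that could look roughly like this – a sketch assuming the
SetupVerify helper name; gnutls_session_set_verify_cert() skips the
hostname check when given NULL:

    #include <gnutls/gnutls.h>

    void SetupVerify(gnutls_session_t session, char const *host, bool verifyHost) {
       // NULL hostname: the certificate is still verified, just not the name
       gnutls_session_set_verify_cert(session, verifyHost ? host : NULL, 0);
    }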
|
|
Silently ignoring the options might be a security issue,
so produce an error instead.
|
|
If gnutls_session_bye() exited with an error, we never closed
the underlying file descriptor, causing the method to think the
connection was still open. This caused problems especially in
test-partial-file-support where we checked that a "complete"
file and an incomplete file work. The first GET returns a 416
with Connection: close, and the next GET request then accidentally
reads the body of the 416 as the header for its own request.
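A sketch of the intended shape (CloseTls is an illustrative name, not
the method's actual function):

    #include <gnutls/gnutls.h>
    #include <unistd.h>

    int CloseTls(gnutls_session_t session, int fd) {
       int const err = gnutls_bye(session, GNUTLS_SHUT_RDWR);
       int const res = close(fd);   // must happen even if gnutls_bye() failed
       return err < 0 ? err : res;
    }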
|
|
The old curl based method is still available as 'curl',
'curl+http', and 'curl+https'.
|
|
HTTPS proxies just require unwrapping the TLS layer at the proxy
connection, which is easy, and of course sending the proxy-specific
headers that are also sent to "http" proxies.
|
|
Proxying HTTPS traffic requires the proxy to provide the
CONNECT method. This implements the client side of it,
although it is a bit hacky.
HTTP CONNECT is a normal HTTP request, followed
by a normal HTTP response, just that the body of the
response is the TCP stream of the target host.
We use a special wrapper in case there are data bytes
in the header packets - in that case, the bytes are
stored in a buffer and the buffer will be drained first;
afterwards the connection continues directly with the
TCP stream (at the cost of one more virtual call).
Also: Do not send the full URI to https destinations when proxying,
as we are directly interfacing with the destination's data stream.
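An illustrative sketch of such a wrapper (BufferedFd is a made-up
name; the real class plugs into the MethodFd abstraction):

    #include <algorithm>
    #include <cstring>
    #include <unistd.h>
    #include <vector>

    struct BufferedFd {
       int fd;
       std::vector<char> pending;   // bytes read past the CONNECT response header

       ssize_t Read(void *buf, size_t count) {
          if (pending.empty() == false) {
             size_t const n = std::min(count, pending.size());
             std::memcpy(buf, pending.data(), n);
             pending.erase(pending.begin(), pending.begin() + n);
             return static_cast<ssize_t>(n);
          }
          return read(fd, buf, count);   // buffer drained: straight to the TCP stream
       }
    };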
|
|
This is especially needed if we use an HTTPS proxy to CONNECT
to an HTTPS URI, as we then run TLS-inside-TLS.
|
|
When APT was trying multiple addresses, any later error
somewhere else would be reported with ConnectionRefused
or ConnectionTimedOut as the FailReason because that
was set by early connect attempts. This caused APT to
handle the failures differently, leading to some weirdly
breaking test cases (like the changed one).
Add debugging to the previously failing test case so
we can find out when something goes wrong there again.
|
|
This fixes a regression from ~alpha2.
Closes: #866559
Gbp-Dch: Full
|
|
It turns out that curl only uses the system trust store if
the CaInfo option is not set, so let's do the same here.
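A sketch of the equivalent behaviour with GnuTLS (SetupTrust and the
cainfo parameter are illustrative):

    #include <gnutls/gnutls.h>
    #include <string>

    void SetupTrust(gnutls_certificate_credentials_t cred, std::string const &cainfo) {
       if (cainfo.empty())
          gnutls_certificate_set_x509_system_trust(cred);   // default trust store
       else
          gnutls_certificate_set_x509_trust_file(cred, cainfo.c_str(), GNUTLS_X509_FMT_PEM);
    }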
|
|
Tell the user to install ca-certificates.
Closes: #866377
|
|
An SRV record includes a port number to use with the host given, but apt
was ignoring the port number and instead used either the port given by
the user for the initial host or the default port for the service.
In practice the service usually runs on another host on the default
port, so it tends to work as intended, and even if not and apt can't get
a connection there, it will gracefully fall back to contacting the initial
host with the right port, so it's a user-invisible bug most of the time.
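A sketch of the fix (the SrvRec members and PickTarget are
illustrative, not apt's actual code):

    #include <cstdint>
    #include <string>
    #include <utility>

    struct SrvRec {
       std::string target;
       uint16_t port;   // SRV records carry their own port
    };

    std::pair<std::string, uint16_t>
    PickTarget(SrvRec const &rec, std::string const &initialHost, uint16_t initialPort) {
       if (rec.target.empty() == false)
          return {rec.target, rec.port};   // use the SRV port, not initialPort
       return {initialHost, initialPort}; // graceful fallback
    }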
|
|
The apt-transport-tor package operates via simple symlinks, which can
result in 'http' being called as 'tor+https', so it must pick up the
right configuration pieces and trigger https support for such
plus-prefixed names as well.
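A sketch of deriving the effective scheme from the binary name
(BaseScheme is an illustrative helper):

    #include <string>

    std::string BaseScheme(std::string const &binary) {
       std::string::size_type const plus = binary.rfind('+');
       return plus == std::string::npos ? binary : binary.substr(plus + 1);
    }
    // BaseScheme("tor+https") == "https" -> enable the TLS layer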
|
|
As requested by Henrique de Moraes Holschuh, here comes
an option to disable TLS support. If the option is set
to false, the internal TLS layer is disabled.
|
|
Gbp-Dch: ignore
|
|
GnuTLS can already have data pending in its buffers; we need
to drain that first, otherwise select() might block
indefinitely.
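A sketch of the check (DataReadable is an illustrative name;
gnutls_record_check_pending() reports bytes already decrypted):

    #include <gnutls/gnutls.h>
    #include <sys/select.h>

    bool DataReadable(gnutls_session_t session, int fd, struct timeval *tv) {
       if (gnutls_record_check_pending(session) > 0)
          return true;   // buffered data that select() knows nothing about
       fd_set rfds;
       FD_ZERO(&rfds);
       FD_SET(fd, &rfds);
       return select(fd + 1, &rfds, nullptr, nullptr, tv) > 0;
    }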
Gbp-Dch: ignore
|
|
This makes testing easier and prepares us for the
transition.
|
|
The http method will eventually replace the curl-based
https method, but for now, this is an opt-in experiment
that can be enabled by setting Dir::Bin::Methods::https
to "http".
Known issues:
- We do not support HTTPS proxies yet
- We do not support proxying HTTPS connections yet (CONNECT)
- IssuerCert and SslForceVersion are unsupported
Gbp-Dch: Full
|
|
Use std::unique_ptr<MethodFd> everywhere we used an
integer-based file descriptor before. This allows us
to implement stuff like TLS support easily.
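The rough shape of the abstraction (simplified; the real interface has
more members):

    #include <unistd.h>

    struct MethodFd {
       virtual ~MethodFd() = default;
       virtual ssize_t Read(void *buf, size_t count) = 0;
       virtual ssize_t Write(void const *buf, size_t count) = 0;
       virtual int Close() = 0;
    };

    // Plain TCP behaves like the old integer fd; a TLS implementation
    // can wrap another MethodFd underneath.
    struct PlainFd : MethodFd {
       int fd;
       explicit PlainFd(int fd) : fd(fd) {}
       ssize_t Read(void *buf, size_t count) override { return read(fd, buf, count); }
       ssize_t Write(void const *buf, size_t count) override { return write(fd, buf, count); }
       int Close() override { return close(fd); }
    };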
Gbp-Dch: ignore
|
|
This will allow us to access ConfigFind() and related helpers, which
makes it possible for us to implement TLS support.
Gbp-Dch: ignore
|
|
An unknown code should be handled the same as the x00 code of this
group, but for redirections we used to treat 300 (and a few others)
as an error while unknown codes were considered redirections.
Instead we check now explicitly for the redirection codes we support for
redirecting (and add the 308 defined in RFC 7538) to avoid future
problems if new 3xx codes are added expecting certain behaviours.
Potentially strange would have been e.g. "305 Use Proxy" sending a
Location for the proxy to use – which wouldn't have worked and resulted
in an error anyhow, but probably confused users in the process.
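A sketch of the explicit check (IsSupportedRedirect is an illustrative
name):

    bool IsSupportedRedirect(int code) {
       switch (code) {
       case 301: case 302: case 303: case 307:
       case 308:   // RFC 7538
          return true;
       default:    // 300, 305 and unknown 3xx codes are errors now
          return false;
       }
    }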
|
|
|
|
Reported-By: gcc-7
Gbp-Dch: Ignore
|
|
This fixes issues with sourceforge where the redirector includes
such a Content-Range in a 302 redirect. Since we do not really know
what file is meant in a redirect, let's just ignore it for all
responses other than 416 and 206.
Maybe we should also get rid of the other errors, and just ignore
the field in those cases as well?
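A sketch of the resulting logic (illustrative names):

    #include <string>

    void HandleContentRange(int status, std::string const &value) {
       if (status != 206 && status != 416)
          return;   // e.g. sourceforge's 302 carries one – ignore it
       // ... parse "bytes START-END/TOTAL" as before ...
    }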
LP: #1657567
|
|
Most of them are in (old) code comments. For the two instances of
user-visible string changes, the po files of the manpages are fixed up
as well.
Gbp-Dch: Ignore
Reported-By: spellintian
|
|
rred can fail for a plethora of reasons, but its failure is usually
recoverable (Ign lines), so it shouldn't leak unrequested debug messages
to an observing user.
Closes: #850759
|
|
Curl requires URLs to be urlencoded. We are however giving it
undecoded URLs. This causes it to go completely nuts if there is
a space in the URI, producing requests like:
GET /a file HTTP/1.1
which the servers then interpret as a GET request for "/a" with
HTTP version "file" or some other nonsense.
This works around the issue by encoding the path component of
the URL. I'm not sure if we should encode other parts of the URL
as well; this one seems to do the trick for the actual issue at
hand.
A more correct fix is to avoid the dequoting and (re-)quoting
of URLs when a redirect occurs / a new request is sent. That's
been on the radar for probably a year or two now, but nobody
has bothered to implement it yet.
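A minimal percent-encoding sketch for the path component (the real
code reuses apt's existing quoting helpers):

    #include <cctype>
    #include <cstdio>
    #include <cstring>
    #include <string>

    std::string QuotePath(std::string const &path) {
       std::string out;
       for (unsigned char const c : path) {
          if (std::isalnum(c) || std::strchr("/-._~", c) != nullptr)
             out += static_cast<char>(c);
          else {
             char buf[4];
             std::snprintf(buf, sizeof(buf), "%%%02X", c);
             out += buf;
          }
       }
       return out;
    }
    // QuotePath("/a file") == "/a%20file"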
LP: #1651923
|
|
This 'method' is the abstract base for http and https and should as such
be called out like this rather than using an easily confused name.
Gbp-Dch: Ignore
|
|
Having a Reset(bool) method to partially reset certain variables like
the download size was always strange, so this commit splits the
ServerState into an additional RequestState living on the stack for as
long as we deal with this request, causing an automatic "reset".
There is much to do still to make this code look better, but this is a
good first step which compiles cleanly and passes all tests, so keeping
it as history might be beneficial, and due to avoiding explicit memory
allocations it ends up fixing a small memory leak in https, too.
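The rough shape of the split (member names are illustrative):

    struct RequestState {
       unsigned long long DownloadSize = 0;   // dies with the request = automatic reset
       // ... everything Reset(true) used to clear ...
    };

    struct ServerState {
       bool PipelineAllowed = true;           // persists across requests
       // ... connection-wide state ...
    };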
Closes: #440057
|
|
|
|
Change the trust level check to allow downgrading an Untrusted
option to weak (APT::Hashes::SHA1::Weak "yes";), so it prints
a warning instead of an error; and change the default values
for SHA1 and RIPE-MD/160 from Weak to Untrusted.
|
|
We report warnings from apt-key this way already since
29c590951f812d9e9c4f17706e34f2c3315fb1f6, so reporting errors seems like
a good addition. Most of those errors aren't really from apt-key
though, but from the code setting up and actually calling it, which used
to just print to stderr, which might or might not intermix them with
(other) progress lines in update calls. Having them as proper error
messages in the system means that the errors are actually collected
later on for the list instead of ending up with our relatively generic
but in those cases bogus hint regarding "is gpgv installed?".
The effective difference is minimal as the errors apply mostly to
systems which have far worse problems than a not-so-nice-looking error
message, which makes this pretty hard to test – but at least now the
hint that your system is broken can be read in proper order (= there
aren't many valid cases in which the permissions of /tmp are messed up…).
LP: #1522988
|
|
Suggested in #529794
|
|
[Comment from committer:] I have the feeling that the issue itself has
been fixed for a while already, as nowadays we have testcases involving a
webserver closing the connection on error (look for "closeOnError") and
no even remotely recent reports about it, but moving the content
clearance above the failure report is a valid change and shouldn't hurt.
Closes: #465572
|
|
The 0.0.0.0:0 that tor reports is pretty useless by itself, but even if
an IP were reported it is better to show the user the hostname we wanted
the proxy to connect to in the same error message. We improve upon it
further by looking for this bind address in particular and remapping
error messages slightly to give users a better chance of figuring out
what went wrong. Upstream Tor can't do that as it is technically wrong.
|
|
Commit 3af3ac2f5ec007badeded46a94be2bd06b9917a2 (released in 1.3~pre1)
implements proper fallback for SRV, but that actually works too well,
as the RFC defines that such an SRV record should indicate that the
server doesn't provide this service and apt should respect this.
The solution is hence to fail again as requested, even if that isn't what
the user (and perhaps even the server admins) wanted. At least we will
print a message now explicitly mentioning SRV to point people in the
right direction.
Reported-In: https://bugs.kali.org/view.php?id=3525
Reported-By: Raphaël Hertzog
|
|
In gpgv1, GOODSIG (and the other messages of status-fd) is documented as
sending the long keyid. In gpgv2 it is documented to be either the long
keyid or the fingerprint. At the moment it is still the long keyid, but
the documentation hints at the possibility of changing this.
We care about this for Signed-By support, as this is how we detect
whether the right fingerprint has signed this file (or not). The check
itself is done via VALIDSIG, which is always a fingerprint, but a
GOODSIG (as expired sigs are valid, too) must also be found for the file
to be accepted – a match that wouldn't be found in the fingerprint case,
so the signature would hence be refused.
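A sketch of a check coping with both forms (a v4 long keyid is the last
16 hex digits of the fingerprint; MatchesFingerprint is illustrative):

    #include <string>

    bool MatchesFingerprint(std::string const &goodsig, std::string const &fpr) {
       if (goodsig == fpr)
          return true;   // gpgv started sending fingerprints
       return goodsig.size() == 16 && fpr.size() >= 16 &&
              fpr.compare(fpr.size() - 16, 16, goodsig) == 0;
    }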
|
|
memcpy is marked as nonnull for its input, but ignores the input anyhow
if the declared length is zero. Our SHA2 implementations do this as
well, it was "just" MD5 and SHA1 missing, so we add the length check
here as well as along the callstack as it is really pointless to do all
these method calls for "nothing".
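A sketch of the guard (HashAdd is an illustrative stand-in for the
Add() methods of the hash implementations):

    #include <cstring>

    void HashAdd(unsigned char *buffer, unsigned char const *data, unsigned long long size) {
       if (size == 0)
          return;   // avoid memcpy(nonnull, NULL, 0)
       std::memcpy(buffer, data, size);
       // ... actual block processing ...
    }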
Reported-By: gcc -fsanitize=undefined
|
|
|
|
Gbp-Dch: ignore
|
|
Gbp-Dch: ignore
|
|
In 105503b4b470c124bc0c271bd8a50e25ecbe9133 we got a warning implemented
for unreadable files, which greatly improves the behavior of apt update
already as everything will work as long as we don't need the keys
included in these files. The behavior if they are needed is still
strange though, as update will fail claiming missing keys and a manual
test (which the user will likely perform as root) will be successful.
Passing the new warning generated by apt-key through to apt is a bit
strange from an interface point of view, but basically duplicating the
warning code in multiple places doesn't feel right either. That means we
have no translation for the message though, as apt-key has no i18n yet.
It also means that if the user has a bunch of sources, each of them will
generate a warning for each unreadable file, which could result in quite
a few duplicated warnings, but "too many" is better than none.
Closes: 834973
|
|
The implementation of the generic config fallback did the fallback in
the wrong order, so that the least specific option wasn't the last value
picked but in fact the first one… doh!
So in the bugreport's case the order was http -> https ->
http::<hostname> -> https::<hostname>, while it should have been the
reverse, as before.
Regression-In: 30060442025824c491f58887ca7369f3c572fa57
Closes: 834642
|
|
If a server closes a connection after sending us a file, that tends to
mean that it's a type of server which always closes the connection – it
is therefore relatively pointless to try pipelining with it even if it
isn't a problem by itself: apt is just restarting the pipeline each
time after it got served one file and the connection is closed.
The problem starts if one or more proxies are between the server and apt
and they disagree about how the connection should be, as in the
bugreporter's case where the responses apt gets contain both Keep-Alive
and Proxy-Connection headers (which apt both ignores), indicating a
proxy is trying to keep a connection open, while the response also
contains "Connection: close", indicating the opposite, which apt
understands and respects as it is required to do.
We avoid stepping into this abyss by not performing pipelining anymore
if we got a response with the indication to close the connection and
the response was otherwise a success – error messages are sent by some
servers via this method as their pages tend to be created dynamically
and hence their size isn't known a priori to them.
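A sketch of the decision (illustrative member names):

    struct ServerState {
       unsigned long PipelineDepth = 10;

       void OnResponseCompleted(bool success, bool connectionClose) {
          if (success && connectionClose)
             PipelineDepth = 0;   // server (or proxy) closes after each file
       }
    };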
Closes: #832113
|
|
If the server told us in a previous response that it doesn't support
Ranges with bytes via an Accept-Ranges header missing bytes, we don't
try to formulate requests using Ranges.
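A sketch of remembering the advertisement (illustrative names):

    #include <string>

    struct ServerState {
       bool RangesAllowed = true;

       void HandleAcceptRanges(std::string const &value) {
          // "Accept-Ranges: none" – or any list without "bytes" – disables Ranges
          RangesAllowed = value.find("bytes") != std::string::npos;
       }
    };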
|
|
We keep various information bits about the server around, some only
affecting the currently handled file (like sizes) while others
should be persistent (like pipeline detections). http used to reset all
file-related bits manually, which is a bit silly if we already have a
Reset() method – which does reset all, though –, so we extend it with a
parameter for reuse and call it from https too (as this was
previously reset by just creating a new state struct – it uses no
value of the persistent state-keeping yet as it supports no pipelining).
Gbp-Dch: Ignore
|
|
It seems completely pointless from a server POV to send empty header
fields, so most of them don't do it (simply proven by this limitation
existing since day one) – but it is technically allowed by the RFC as
the surrounding whitespace is optional, and Github seems to like sending
"X-Geo-Block-List:\r\n" since recently (bug reports in other http
clients indicate July), at least sometimes, as the reporter claims to
have seen it on https only even though it can happen with both.
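A sketch of a parser accepting the empty value (illustrative names):

    #include <string>

    bool ParseHeaderLine(std::string const &line, std::string &tag, std::string &value) {
       std::string::size_type const colon = line.find(':');
       if (colon == std::string::npos)
          return false;
       tag = line.substr(0, colon);
       std::string::size_type const start = line.find_first_not_of(" \t", colon + 1);
       value = (start == std::string::npos) ? "" : line.substr(start);
       return true;   // an empty value is fine per the RFC
    }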
Closes: 834048
|