path: root/methods
Age  Commit message  Author
2017-07-01  http: A response with Content-Length: 0 has no content  (Julian Andres Klode)
APT considered any response with a Content-Length header to have a body, even if the value of the header was 0. A 0-length body, however, is equal to no body.
2017-07-01  Make Verify-Host and Verify-Peer independent again  (Julian Andres Klode)
We can actually just pass null as a hostname, so let's just do that when Verify-Host is set to false.
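For illustration, a hedged apt.conf sketch of toggling the two checks independently (real option names from APT's https configuration; the values here are only an example):

    Acquire::https::Verify-Peer "true";    // keep certificate verification
    Acquire::https::Verify-Host "false";   // but skip the hostname check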
2017-06-30  TLS support: Error out on unsupported curl options  (Julian Andres Klode)
Silently ignoring the options might be a security issue, so produce an error instead.
2017-06-30  Improve closing the TLS connection  (Julian Andres Klode)
If gnutls_session_bye() exited with an error, we never closed the underlying file descriptor, causing the method to think the connection was still open. This caused problems especially in test-partial-file-support where we checked that a "complete" file and an incomplete file work. The first GET returns a 416 with Connection: close, and the next GET request then accidentally reads the body of the 416 as the header for its own request.
2017-06-30  Switch to 'http' as the default https method  (Julian Andres Klode)
The old curl based method is still available as 'curl', 'curl+http', and 'curl+https'.
2017-06-30  http: Add support for https:// proxies  (Julian Andres Klode)
HTTPS proxies just require unwrapping the TLS layer at the proxy connection, which is easy, and of course sending the proxy-specific headers that are also sent to "http" proxies.
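A hedged apt.conf sketch of what such a setup could look like, assuming a proxy reachable at proxy.example.org (placeholder host and port):

    Acquire::http::Proxy  "https://proxy.example.org:3128/";
    Acquire::https::Proxy "https://proxy.example.org:3128/";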
2017-06-30  http: Add support for CONNECT proxying to HTTPS locations  (Julian Andres Klode)
Proxying HTTPS traffic requires the proxy to provide the CONNECT method. This implements the client side of it, although it is a bit hacky. CONNECT proxying is a normal HTTP CONNECT request, followed by a normal HTTP response, except that the body of the response is the TCP stream of the target host.
We use a special wrapper in case there are data bytes in the header packets - in that case, the bytes are stored in a buffer and the buffer is drained first; afterwards the connection continues directly with the TCP stream (with one more virtual call).
Also: do not send the full URI to https destinations when proxying, as we are directly interfacing with the destination data stream.
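For reference, a minimal sketch of the wire exchange described above (hostname is a placeholder, headers trimmed):

    CONNECT archive.example.org:443 HTTP/1.1
    Host: archive.example.org:443

    HTTP/1.1 200 Connection established

    (from here on the same socket carries the raw TCP stream to
    archive.example.org, on which the TLS handshake is then started)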
2017-06-30  Allow running the TLS stack on any lower connection  (Julian Andres Klode)
This is especially needed if we use an HTTPS proxy to CONNECT to an HTTPS URI, as we run TLS-inside-TLS then.
2017-06-30  Reset failure reason when connection was successful  (Julian Andres Klode)
When APT was trying multiple addresses, any later error somewhere else would be reported with ConnectionRefused or ConnectionTimedOut as the FailReason because that was set by early connect attempts. This causes APT to handle the failures differently, leading to some weirdly breaking test cases (like the changed one). Add debugging to the previously failing test case so we can find out when something goes wrong there again.
2017-06-30  Don't read CaInfo if not specified (missing else)  (Julian Andres Klode)
This fixes a regression from ~alpha2. Closes: #866559 Gbp-Dch: Full
2017-06-29  http: Only use system CA store if CaInfo is not set  (Julian Andres Klode)
It turns out that curl only sets the system trust store if the CaInfo option is not set, so let's do the same here.
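A hedged apt.conf sketch of pinning a dedicated CA bundle, which per this change replaces the system trust store (the path is a placeholder):

    Acquire::https::CaInfo "/etc/apt/my-mirror-ca.crt";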
2017-06-29  Improve error message if system CA store is empty  (Julian Andres Klode)
Tell the user to install ca-certificates. Closes: #866377
2017-06-29  use port from SRV record instead of initial port  (David Kalnischkies)
An SRV record includes a port number to use with the host given, but apt was ignoring the port number and instead used either the port given by the user for the initial host or the default port for the service. In practice the service usually runs on another host on the default port, so it tends to work as intended, and even if not and apt can't get a connection there, it will gracefully fall back to contacting the initial host with the right port, so it's a user-invisible bug most of the time.
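As an illustration, a hedged zone-file style SRV record (all names and numbers are placeholders); the third value in the record data is the port apt now honours:

    _http._tcp.deb.example.org. 3600 IN SRV 10 50 8080 mirror1.example.org.
    ; priority=10 weight=50 port=8080 target=mirror1.example.org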
2017-06-28  support tor+https being handled by http  (David Kalnischkies)
The apt-transport-tor package operates via simple symlinks, which can result in 'http' being called as 'tor+https', so it must pick up the right configuration pieces and trigger https support also for such plus-prefixed names.
2017-06-28  Introduce Acquire::AllowTLS to turn off TLS support  (Julian Andres Klode)
As requested by Henrique de Moraes Holschuh, here comes an option to disable TLS support. If the option is set to false, the internal TLS layer is disabled.
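The new switch, as a hedged apt.conf snippet (the default keeps TLS enabled):

    Acquire::AllowTLS "false";   // disable the internal TLS layer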
2017-06-28  Fix https->http redirect issues  (David Kalnischkies)
Gbp-Dch: ignore
2017-06-28  methods: http: Drain pending data before selecting  (Julian Andres Klode)
GnuTLS can already have data pending in its buffers; we need to drain that first, otherwise select() might block indefinitely.
Gbp-Dch: ignore
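A rough C++ sketch of the idea, assuming a gnutls_session_t is at hand; ReadPendingRecords() and WaitOnSocket() are hypothetical helpers, not APT functions:

    // gnutls_record_check_pending() reports how many bytes GnuTLS has
    // already decrypted and buffered; select() on the raw socket would
    // never wake up for those, so drain them before waiting.
    if (gnutls_record_check_pending(session) > 0)
       ReadPendingRecords(session);   // hypothetical: consume buffered data
    else
       WaitOnSocket(fd);              // hypothetical: select()/poll() on the fd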
2017-06-28  Allow building without curl  (Julian Andres Klode)
This makes testing easier and prepares us for the transition.
2017-06-28  methods: Add HTTPS support to http method, using GnuTLS  (Julian Andres Klode)
The http method will eventually replace the curl-based https method, but for now, this is an opt-in experiment that can be enabled by setting Dir::Bin::Methods::https to "http".
Known issues:
- We do not support HTTPS proxies yet
- We do not support proxying HTTPS connections yet (CONNECT)
- IssuerCert and SslForceVersion are unsupported
Gbp-Dch: Full
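The opt-in described above, as a hedged example (configuration file or a one-off command line):

    Dir::Bin::Methods::https "http";

    // or run once: apt-get -o Dir::Bin::Methods::https=http update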
2017-06-28  methods: connect: Switch from int fds to new MethodFd  (Julian Andres Klode)
Use std::unique_ptr<MethodFd> everywhere we used an integer-based file descriptor before. This allows us to implement stuff like TLS support easily. Gbp-Dch: ignore
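A minimal C++ sketch of the shape such an abstraction can take; this is illustrative only, APT's real MethodFd interface and signatures may differ:

    #include <unistd.h>

    // Hypothetical sketch: a virtual FD lets a TLS wrapper be layered on
    // top of a plain TCP implementation without the callers noticing.
    struct MethodFd
    {
       virtual ssize_t Read(void *Buf, size_t Count) = 0;
       virtual ssize_t Write(const void *Buf, size_t Count) = 0;
       virtual int Close() = 0;
       virtual ~MethodFd() = default;
    };

    struct TcpFd final : MethodFd
    {
       int fd = -1;
       ssize_t Read(void *Buf, size_t Count) override { return ::read(fd, Buf, Count); }
       ssize_t Write(const void *Buf, size_t Count) override { return ::write(fd, Buf, Count); }
       int Close() override { return ::close(fd); }
    };

    // callers would then hold a std::unique_ptr<MethodFd> instead of an int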
2017-06-28  methods: connect: Change PkgAcqMethod to aptMethod  (Julian Andres Klode)
This will allow us to access ConfigFind() and stuff which makes it possible for us to implement TLS support. Gbp-Dch: ignore
2017-06-26  deal with 3xx httpcodes as required by HTTP/1.1 spec  (David Kalnischkies)
An unknown code should be handled the same as the x00 code of this group, but for redirections we used to treat 300 (and a few others) as an error while unknown codes were considered redirections. Instead we check now explicitly for the redirection codes we support for redirecting (and add the 308 defined in RFC 7538) to avoid future problems if new 3xx codes are added expecting certain behaviours. Potentially strange would have been e.g. "305 Use Proxy" sending a Location for the proxy to use – which wouldn't have worked and resulted in an error anyhow, but probably confused users in the process.
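A small C++ sketch of the approach; the exact set of codes apt follows lives in the http method, so the list here is illustrative (with 308 per RFC 7538):

    // Only explicitly known redirect codes are followed; every other 3xx
    // is handled like the generic x00 member of its group.
    static bool IsSupportedRedirect(int Code)
    {
       switch (Code)
       {
          case 301: case 302: case 303: case 307: case 308:
             return true;
          default:
             return false;
       }
    }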
2017-06-26  avoid changing directory in mirror method  (David Kalnischkies)
2017-06-26  Annotate intended switch fall through in https  (David Kalnischkies)
Reported-By: gcc-7 Gbp-Dch: Ignore
2017-01-24  basehttp: Only read Content-Range on 416 and 206 responses  (Julian Andres Klode)
This fixes issues with sourceforge where the redirector includes such a Content-Range in a 302 redirect. Since we do not really know what file is meant in a redirect, let's just ignore it for all responses other than 416 and 206. Maybe we should also get rid of the other errors, and just ignore the field in those cases as well? LP: #1657567
2017-01-19  fix various typos reported by spellintian  (David Kalnischkies)
Most of them are in (old) code comments. For the two instances of user-visible string changes, the po files of the manpages are fixed up as well.
Gbp-Dch: Ignore
Reported-By: spellintian
2017-01-19  stop rred from leaking debug messages on recovered errors  (David Kalnischkies)
rred can fail for a plethora of reasons, but its failure is usually recoverable (Ign lines), so it shouldn't leak unrequested debug messages to an observing user.
Closes: #850759
2017-01-17  https: Quote path in URL before passing it to curl  (Julian Andres Klode)
Curl requires URLs to be urlencoded. We are however giving it undecoded URLs. This causes it to go completely nuts if there is a space in the URI, producing requests like: GET /a file HTTP/1.1 which the servers then interpret as a GET request for "/a" with HTTP version "file" or some other nonsense.
This works around the issue by encoding the path component of the URL. I'm not sure if we should encode other parts of the URL as well; this one seems to do the trick for the actual issue at hand.
A more correct fix is to avoid the dequoting and (re-)quoting of URLs when a redirect occurs / a new request is sent. That's been on the radar for probably a year or two now, but nobody bothered implementing that yet.
LP: #1651923
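An illustrative C++ sketch of percent-encoding just the path component; it is not APT's actual quoting code, and the set of characters left untouched is an assumption:

    #include <cctype>
    #include <string>

    static std::string EncodePath(const std::string &Path)
    {
       static const char Hex[] = "0123456789ABCDEF";
       std::string Out;
       for (unsigned char C : Path)
       {
          // keep unreserved characters and the path separator as-is
          if (std::isalnum(C) || C == '/' || C == '-' || C == '.' || C == '_' || C == '~')
             Out += static_cast<char>(C);
          else
          {
             Out += '%';
             Out += Hex[C >> 4];
             Out += Hex[C & 0x0F];
          }
       }
       return Out;   // "a file" becomes "a%20file"
    }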
2016-12-31  rename ServerMethod to BaseHttpMethod  (David Kalnischkies)
This 'method' is the abstract base for http and https and should as such be called out like this rather than using an easily confused name.
Gbp-Dch: Ignore
2016-12-31  separating state variables regarding server/request  (David Kalnischkies)
Having a Reset(bool) method to partially reset certain variables like the download size was always strange, so this commit splits the ServerState into an additional RequestState living on the stack for as long as we deal with this request, causing an automatic "reset". There is much to do still to make this code look better, but this is a good first step which compiles cleanly and passes all tests, so keeping it as history might be beneficial, and due to avoiding explicit memory allocations it ends up fixing a small memory leak in https, too.
Closes: #440057
2016-12-08  Honour Acquire::ForceIPv4/6 in the https transport  (Lukasz Kawczynski)
2016-11-25  gpgv: Untrust SHA1, RIPE-MD/160, but allow downgrading to weak  (Julian Andres Klode)
Change the trust level check to allow downgrading an Untrusted option to weak (APT::Hashes::SHA1::Weak "yes";), so it prints a warning instead of an error; and change the default values for SHA1 and RIPE-MD/160 from Weak to Untrusted.
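The downgrade mentioned above, as an apt.conf snippet taken from the option named in the message:

    APT::Hashes::SHA1::Weak "yes";   // demote SHA1 from Untrusted to Weak: warning instead of error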
2016-11-24  report apt-key errors via status-fd messages  (David Kalnischkies)
We report warnings from apt-key this way already since 29c590951f812d9e9c4f17706e34f2c3315fb1f6, so reporting errors seems like a good addition. Most of those errors aren't really from apt-key though, but from the code setting up and actually calling it, which used to just print to stderr, which might or might not intermix them with (other) progress lines in update calls.
Having them as proper error messages in the system means that the errors are actually collected later on for the list instead of ending up with our relatively generic but in those cases bogus hint regarding "is gpgv installed?". The effective difference is minimal as the errors apply mostly to systems which have far worse problems than a not as nice looking error message, which makes this pretty hard to test – but at least now the hint that your system is broken can be read in proper order (= there aren't many valid cases in which the permissions of /tmp are messed up…).
LP: #1522988
2016-11-11  http: skip connection cleanup if we close it anyhow  (David Kalnischkies)
Suggested in #529794
2016-11-11  http: clear content before reporting the failure  (Edgar Fuß)
[Comment from committer:] I have the feeling that the issue itself has been fixed for a while already, as nowadays we have testcases involving a webserver closing the connection on error (look for "closeOnError") and no even remotely recent reports about it, but moving the content clearance above the failure report is a valid change and shouldn't hurt.
Closes: #465572
2016-11-10  improve SOCKS error messages for http slightly  (David Kalnischkies)
The 0.0.0.0:0 tor reports is pretty useless by itself, but even if an IP would be reported it is better to show the user the hostname we wanted the proxy to connect to in the same error message. We improve upon it further by looking for this bind address in particular and remap error messages slightly to give users a better chance of figuring out what went wrong. Upstream Tor can't do that as it is technically wrong.
2016-09-04  abort connection on '.' target replies in SRV  (David Kalnischkies)
Commit 3af3ac2f5ec007badeded46a94be2bd06b9917a2 (released in 1.3~pre1) implements proper fallback for SRV, but that actually works too well, as the RFC defines that such an SRV record should indicate that the server doesn't provide this service and apt should respect this. The solution is hence to fail again as requested, even if that isn't what the user (and perhaps even the server admins) wanted. At least we will print a message now explicitly mentioning SRV to point people in the right direction.
Reported-In: https://bugs.kali.org/view.php?id=3525
Reported-By: Raphaël Hertzog
2016-09-01  support long keyid and fingerprint in gpgv's GOODSIG  (David Kalnischkies)
In gpgv1, GOODSIG (and the other messages of status-fd) is documented as sending the long keyid. In gpgv2 it is documented to be either the long keyid or the fingerprint. At the moment it is still the long keyid, but the documentation hints at the possibility of changing this. We care about this for Signed-By support as we detect this way if the right fingerprint has signed this file (or not). The check itself is done via VALIDSIG, which always is a fingerprint, but there must also be a GOODSIG found for the file to be accepted (as expired sigs are valid, too), which wouldn't be found in the fingerprint case, and the signature would hence be refused.
2016-09-01  try not to call memcpy with length 0 in hash calculations  (David Kalnischkies)
memcpy is marked as nonnull for its input, but ignores the input anyhow if the declared length is zero. Our SHA2 implementations do this as well, it was "just" MD5 and SHA1 missing, so we add the length check here as well as along the callstack as it is really pointless to do all these method calls for "nothing". Reported-By: gcc -fsanitize=undefined
2016-08-27  Merge branch 'portability/freebsd'  (Julian Andres Klode)
2016-08-26  methods/connect.cc: Only use AI_IDN if defined  (Julian Andres Klode)
Gbp-Dch: ignore
2016-08-26  CMake: Do not use -lresolv if res_init exists in libc  (Julian Andres Klode)
Gbp-Dch: ignore
2016-08-25  show apt-key warnings in apt update  (David Kalnischkies)
In 105503b4b470c124bc0c271bd8a50e25ecbe9133 we got a warning implemented for unreadable files, which greatly improves the behavior of apt update already, as everything will work as long as we don't need the keys included in these files. The behavior if they are needed is still strange though, as update will fail claiming missing keys and a manual test (which the user will likely perform as root) will be successful.
Passing the new warning generated by apt-key through to apt is a bit strange from an interface point of view, but basically duplicating the warning code in multiple places doesn't feel right either. That means we have no translation for the message though, as apt-key has no i18n yet. It also means that if the user has a bunch of sources, each of them will generate a warning for each unreadable file, which could result in quite a few duplicated warnings, but "too many" is better than none.
Closes: 834973
2016-08-17  methods: read config in most to least specific order  (David Kalnischkies)
The implementation of the generic config fallback did the fallback in the wrong order, so that the least specific option wasn't the last value picked but in fact the first one… doh! So in the bug report's case http -> https -> http::<hostname> -> https::<hostname> while it should have been the reverse, as before.
Regression-In: 30060442025824c491f58887ca7369f3c572fa57
Closes: 834642
2016-08-16  don't try pipelining if server closes connections  (David Kalnischkies)
If a server closes a connection after sending us a file, that tends to mean that it's a type of server which always closes the connection – it is therefore relatively pointless to try pipelining with it even if it isn't a problem by itself: apt is just restarting the pipeline each time after it got served one file and the connection is closed.
The problem starts if one or more proxies are between the server and apt and they disagree about how the connection should be, as in the bug reporter's case where the responses apt gets contain both Keep-Alive and Proxy-Connection headers (which apt both ignores) indicating a proxy is trying to keep a connection open, while the response also contains "Connection: close" indicating the opposite, which apt understands and respects as it is required to do.
We avoid stepping into this abyss by not performing pipelining anymore if we got a response with the indication to close the connection, if the response was otherwise a success – error messages are sent by some servers via this method as their pages tend to be created dynamically and hence their size isn't known a priori to them.
Closes: #832113
2016-08-16  don't send Range requests if we know it's not accepted  (David Kalnischkies)
If the server told us in a previous request that it isn't supporting Ranges with bytes (via an Accept-Ranges header that doesn't list "bytes"), we don't try to formulate requests using Ranges.
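For illustration, the headers involved (values are examples only):

    Accept-Ranges: none       <- server response header: byte ranges unsupported
    Range: bytes=102400-      <- the kind of resume request apt now omits for such servers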
2016-08-16  reorganize server-states resetting in http/https  (David Kalnischkies)
We keep various information bits about the server around, some only affecting the currently handled file (like sizes) while others should be persistent (like pipeline detections). http used to reset all file-related bits manually, which is a bit silly if we already have a Reset() method – which does reset everything though –, so we extend it with a parameter for reuse and call it from https too (as https was previously resetting by just creating a new state struct – it uses no value of the persistent state-keeping yet as it supports no pipelining).
Gbp-Dch: Ignore
2016-08-13  http(s): allow empty values for header fields  (David Kalnischkies)
It seems completely pointless from a server POV to send empty header fields, so most of them don't do it (simply proven by this limitation existing since day one) – but it is technically allowed by the RFC as the surrounding whitespace is optional, and GitHub seems to like sending "X-Geo-Block-List:\r\n" since recently (bug reports in other http clients indicate July), at least sometimes; the reporter claims to have seen it on https only, even though it can happen with both.
Closes: 834048
2016-08-11  http: auto-configure for local Tor proxy if called as 'tor'  (David Kalnischkies)
With apt's http transport supporting socks5h proxies and all the work in terms of configuration of methods based on the name they are called with, it becomes surprisingly easy to implement Tor support equalling (and perhaps even a bit exceeding) what is available currently in apt-transport-tor.
How this will turn out to be handled packaging-wise we will see in https://lists.debian.org/deity/2016/08/msg00012.html, but until this is resolved we can add the needed support without actively enabling it for now, so that this can be tested better.
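A hedged apt.conf sketch of pointing the http method at a local Tor SOCKS proxy (9050 is Tor's usual SOCKS port; adjust to your setup):

    Acquire::http::Proxy "socks5h://127.0.0.1:9050";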
2016-08-11  block direct connections to .onion domains (RFC7687)  (David Kalnischkies)
Doing a direct connect to an .onion address (if you don't happen to use it as a local domain, which you shouldn't) is bound to fail, and it leaks to a DNS server the information that you use Tor and which hidden service you wanted to connect to. Worse, if the DNS is poisoned and actually resolves, it tricks a user into believing the setup works correctly…
This does also block the usage of wrappers like torsocks with apt, but with native support available and advertised in the error message this shouldn't really be an issue.
Inspired-by: https://bugzilla.mozilla.org/show_bug.cgi?id=1228457