Age | Commit message | Author |
|
Having the detection handled in specific (http) workers means that a
redirection loop over different hostnames isn't detected. It's also not
a good idea to have this implemented in each method independently, even
if it would work.
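A minimal sketch of the idea, with hypothetical names: the acquire
system keeps one set of visited URIs per download item, so a loop is
caught no matter which worker or hostname the redirect bounces through.

    #include <set>
    #include <string>

    // Hypothetical per-item state; apt's real worker bookkeeping differs.
    struct ItemState {
       std::set<std::string> SeenURIs;
    };

    // Returns false if the redirect target was already visited,
    // i.e. we are in a redirection loop.
    bool RegisterRedirect(ItemState &item, std::string const &uri) {
       return item.SeenURIs.insert(uri).second;
    }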
|
|
Closes: #623443
|
|
Follow-up to b58e2c7c56b1416a343e81f9f80cb1f02c128e25.
Still a regression, of sorts, of 8b79c94af7f7cf2e5e5342294bc6e5a908cacabf.
Closes: 832044
|
|
methods/http.cc:640:13: runtime error: reference binding to null pointer
of type 'struct FileFd'
This reference is never used in the cases where it binds to a nullptr,
so the practical difference is non-existent, but it's a bug still.
Reported-By: gcc -fsanitize=undefined
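An illustrative stand-alone reproducer of this class of bug (not apt's
actual code): the reference parameter is never read when there is no
file, but merely forming it from a null pointer is already undefined
behaviour.

    #include <cstdio>

    struct FileFd { int fd = -1; };

    static void Process(bool haveFile, FileFd &file) {
       if (haveFile)                  // the reference is only read here
          std::printf("fd: %d\n", file.fd);
    }

    int main() {
       FileFd *maybe = nullptr;
       Process(false, *maybe);        // UB: caught by -fsanitize=undefined
       // Fix: pass FileFd* (or a dummy object) so no reference to a
       // null object is ever formed.
    }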
|
|
All apt versions support numeric as well as 3-character timezones just
fine and it's actually hard to write code which doesn't "accidentally"
accept them. So why change? Documenting the Date/Valid-Until fields in
the Release file is easy to do in terms of referencing the
datetime format used e.g. in the Debian changelogs (policy §4.4). This
format specifies only the numeric timezones though, not the nowadays
obsolete 3-character ones, so in the interest of least surprise we should
use the same format, even though it carries a small risk of regression
in other clients (which encounter repositories created with
apt-ftparchive).
In case it really regresses in practice, the hidden option
-o APT::FTPArchive::Release::NumericTimezone=0
can be used to go back to good old UTC as timezone.
The EDSP and EIPP protocols use this 'new' format; the text interface
used to communicate with the acquire methods does not, for compatibility
reasons, even if none of our methods would be affected and I doubt any
other would be (in these instances the timezone is 'GMT', as that is what
HTTP/1.1 requires). Note that this is only true for apt talking to
methods; (libapt-based) methods talking to apt will respond with the
'new' format. It is therefore strongly advised to support both also in
method input.
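The difference boils down to the strftime format; a minimal sketch (not
apt's exact code):

    #include <cstdio>
    #include <ctime>

    int main() {
       char buf[64];
       time_t const now = time(nullptr);
       struct tm t;
       // 'new' format: numeric timezone as in Debian changelogs,
       // e.g. "Mon, 27 Jun 2016 15:06:42 +0200"
       localtime_r(&now, &t);
       strftime(buf, sizeof(buf), "%a, %d %b %Y %H:%M:%S %z", &t);
       std::puts(buf);
       // old format: always UTC with a 3-character timezone name,
       // e.g. "Mon, 27 Jun 2016 13:06:42 UTC"
       gmtime_r(&now, &t);
       strftime(buf, sizeof(buf), "%a, %d %b %Y %H:%M:%S UTC", &t);
       std::puts(buf);
    }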
|
|
Most servers that close the connection do not send a Content-Length, as
this information is usually redundant, but some might, and while testing
with our server and with 'aptwebserver::response-header::Connection' set
to 'close' I noticed that http hangs after a redirect in such cases. So
if we have the information, just use it instead of discarding it.
|
|
We use a wild mixture of C and C++ ways of generating output, so having
a consistent world-view in both styles sounds like a good idea and
should help in preventing regressions.
|
|
Setting the C++ locale via std::locale::global(std::locale("")) (it
would otherwise default to the classic "C" locale, i.e. unaffected by
setlocale) affects the formatting of numeric types in IO streams. For
output meant for humans this is perfectly sensible, but it breaks our
many text interfaces, used and parsed by us and others who do not
expect the numbers to be formatted.
Closes: #825396
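A minimal sketch of the effect (the first output depends on the
environment's locale):

    #include <iostream>
    #include <locale>
    #include <sstream>

    int main() {
       std::locale::global(std::locale(""));   // e.g. under LC_ALL=de_DE.UTF-8
       std::ostringstream human;               // picks up the global locale
       human << 1234567;                       // may print "1.234.567"
       std::ostringstream protocol;
       protocol.imbue(std::locale::classic()); // text interfaces: "1234567"
       protocol << 1234567;
       std::cout << human.str() << " vs " << protocol.str() << "\n";
    }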
|
|
Reported-By: gcc -fsanitize=address -fno-sanitize=vptr
Git-Dch: Ignore
|
|
Conflicts:
apt-pkg/pkgcache.h
debian/changelog
methods/https.cc
methods/server.cc
test/integration/test-apt-download-progress
|
|
The apt http code parses Content-Length and Content-Range. For
both, the variable "Size" is used, and the semantic of this Size
is the total file size. However, Content-Length is not the entire
file size for partial file requests. For servers that send the
Content-Range header first and then the Content-Length header,
this can lead to a clobbering of Size so that it's less than the
real file size. This may lead to a subsequent passing of a
negative number into the CircleBuf, which leads to an endless
loop that writes data.
Thanks to Anton Blanchard for the analysis and initial patch.
LP: #1445239
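A sketch of order-independent parsing (hypothetical names, not the
actual patch): the total from Content-Range stays authoritative, so a
later Content-Length can no longer shrink Size.

    #include <cstdio>

    struct Response {
       unsigned long long Size = 0;      // semantic: total file size
       unsigned long long StartPos = 0;
       bool HaveContentRange = false;
    };

    void OnContentRange(Response &r, char const *value) {
       if (sscanf(value, "bytes %llu-%*u/%llu", &r.StartPos, &r.Size) == 2)
          r.HaveContentRange = true;     // total size is authoritative
    }

    void OnContentLength(Response &r, unsigned long long length) {
       // Content-Length is only the body size; for a ranged reply it is
       // smaller than the file, so it must not clobber Size afterwards.
       if (r.HaveContentRange == false)
          r.Size = length;
    }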
|
|
We do this in HTTP already, to give the CPU some exercise while the disk
is heavily spinning (or flashing?) to store the data, avoiding the need
to reread the entire file again later on to calculate the hashes – which
happens outside of the eyes of progress reporting, so you could end up
with a bunch of https workers 'stuck' at 100% while they were busy
calculating hashes.
This is a bummer for everyone using apt as a connection speedtest, as the
https method works slower now (not really, it just isn't reporting done
too early anymore).
|
|
Methods get told which hashes are expected by the acquire system, which
means we can use this list to restrict what we calculate in the methods,
as anything extra we calculate is wasted effort since we can't compare it
with anything anyway.
Adding support for a new hash algorithm is therefore 'free' now, and if an
algorithm is no longer provided in a repository for a file, we
automatically stop calculating it.
In practice this results in a speed-up in Debian as we don't have SHA512
here (so far), so we practically stop calculating it.
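The idea in a nutshell, as a sketch (the additive "hash" is a stand-in
for real MD5/SHA* implementations; apt's actual Hashes class differs):

    #include <cstddef>
    #include <map>
    #include <set>
    #include <string>

    struct Hashes {
       std::map<std::string, unsigned long long> State;

       // Instantiate only the algorithms the acquire system asked for.
       explicit Hashes(std::set<std::string> const &expected) {
          for (auto const &name : expected)
             State[name] = 0;
       }

       // Data is only fed to requested hashers; an algorithm missing
       // from 'expected' costs nothing at all.
       void Add(unsigned char const *data, size_t size) {
          for (auto &h : State)
             for (size_t i = 0; i < size; ++i)
                h.second += data[i];
       }
    };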
|
|
Servers that advertise that they close the connection get the 'Closes'
encoding flag, but this conflicts with servers that respond with a
Transfer-Encoding (e.g. chunked), as it is saved in the same flag.
We have a better flag for the keep-alive (or not) of the connection
anyway, so we check this instead of the encoding.
In practice this is not much of a problem, as the real servers we talk
to are HTTP/1.1 servers (with keep-alive), and there isn't much point in
doing chunked encoding if you are going to close anyway, but our simple
testserver stumbles over this if pressed, and it's a bit cleaner, too.
Git-Dch: Ignore
|
|
Bug #778375 uncovered that https wasn't properly integrated into the
class family tree of http as it was supposed to be, leading to a NULL
pointer dereference. Fixing this 'properly' was deemed too much diff for
practically no gain that late in the release, so commit
0c2dc43d4fe1d026650b5e2920a021557f9534a6 just fixed the symptom, while
this commit here is fixing the cause plus adding a test.
|
|
Real webservers (like apache) actually send an error page with a 416
response, but our client didn't expect it, leaving the page on the socket
to be parsed as response for the next request (http) or as file content
(https), which isn't what we want at all… The symptom is a "Bad header
line", as HTML usually doesn't parse that well as an http-header.
This manifests itself e.g. if we have a complete file (or larger) in
partial/ which isn't discarded by If-Range because the server doesn't
support it (or the file is just newer, think: mirror rotation).
It is a sort-of regression of 78c72d0ce22e00b194251445aae306df357d5c1a,
which removed the "filesize - 1" trick, but this had its own problems…
To properly test this, our webserver gains the ability to reply with
Transfer-Encoding: chunked, as most real webservers will use it to send
dynamically generated error pages.
(The tests and their binary helpers had to be slightly modified to
apply, but the patch to fix the issue itself is unchanged.)
Closes: 768797
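Draining such an error page means decoding the chunked body; a
simplified sketch (ignores chunk extensions and trailer values):

    #include <cstdlib>
    #include <istream>
    #include <string>

    // Read and discard a chunked body so the next response on the
    // connection starts at a clean header boundary.
    bool DiscardChunkedBody(std::istream &in) {
       std::string line;
       while (std::getline(in, line)) {
          unsigned long long const size = strtoull(line.c_str(), nullptr, 16);
          if (size == 0)
             break;                       // final zero-sized chunk
          // skip the chunk data plus its trailing CRLF
          in.ignore(static_cast<std::streamsize>(size) + 2);
       }
       // skip optional trailer lines up to the terminating empty line
       while (std::getline(in, line) && line != "\r" && !line.empty())
          ;
       return !in.bad();
    }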
|
|
Do not drop privileges in the methods when using an older version of
libapt that does not support the chown magic in partial/ yet. To
do this, DropPrivileges() will now ignore an empty Apt::Sandbox::User.
Clean up all hardcoded _apt along the way.
|
|
Communicate the fail reason from the methods to the parent
and Rename() failed files.
|
|
|
|
|
|
When doing Acquire::http{,s}::Proxy-Auto-Detect, run the auto-detect
command for each host instead of only once. This should make using
"proxy" from libproxy-tools feasible, which can then be used for PAC
style or other proxy configurations.
Closes: #759264
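A sketch of the per-host behaviour (names hypothetical; the real code
runs the configured auto-detect command and parses its output):

    #include <map>
    #include <string>

    static std::string RunAutoDetect(std::string const &host) {
       // Stand-in: the real method runs the configured Proxy-Auto-Detect
       // command with the URL for 'host' and reads the proxy from stdout.
       (void)host;
       return "http://proxy.example:3128";
    }

    std::string ProxyForHost(std::string const &host) {
       // one cached result per host instead of a single global answer
       static std::map<std::string, std::string> cache;
       auto const cached = cache.find(host);
       if (cached != cache.end())
          return cached->second;
       return cache.emplace(host, RunAutoDetect(host)).first->second;
    }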
|
|
This ensures that we can stop downloading if the server sends
too much data, by accident or by a malicious attempt.
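The guard itself is tiny; a sketch with hypothetical names:

    // Fail the transfer once more data arrived than the expected size;
    // 0 means "unknown", in which case no limit is enforced.
    bool WithinLimit(unsigned long long received,
                     unsigned long long expected) {
       return expected == 0 || received <= expected;
    }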
|
|
|
|
Besides reducing the code a bit, it avoids oddball problems while
building the string and doesn't trigger static analysis warnings.
|
|
Git-Dch: Ignore
Reported-By: gcc -Wsuggest-attribute={pure,const,noreturn}
|
|
Besides being a bit cleaner, it hopefully also resolves oddball problems
I have with high levels of parallel jobs.
Git-Dch: Ignore
Reported-By: iwyu (include-what-you-use)
|
|
server.cc: In member function ‘bool ServerState::HeaderLine(std::string)’:
server.cc:198:72: warning: format ‘%llu’ expects argument of type ‘long long unsigned int*’, but argument 3 has type ‘long long int*’ [-Wformat=]
else if (sscanf(Val.c_str(),"bytes %llu-%*u/%llu",&StartPos,&Size) != 2)
Git-Dch: Ignore
Reported-By: gcc -Wpedantic
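The warning goes away once the variable types match the conversion
specifiers; a sketch of the two possible fixes:

    #include <cstdio>
    #include <string>

    void Parse(std::string const &Val) {
       // %llu expects unsigned long long*, so declare the targets unsigned…
       unsigned long long StartPos = 0, Size = 0;
       if (sscanf(Val.c_str(), "bytes %llu-%*u/%llu", &StartPos, &Size) != 2)
          return;
       // …or keep signed (long long) variables and use %lld instead.
    }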
|
|
Git-Dch: Ignore
Reported-By: gcc -Wpedantic
|
|
|
|
Reported-By: cppcheck
Git-Dch: Ignore
|
|
The most "visible" change is from utime to utimensat/futimens,
as the former isn't part of POSIX anymore.
Reported-By: cppcheck
Git-Dch: Ignore
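For reference, the POSIX replacements take a pair of timespecs; a
minimal sketch:

    #include <ctime>
    #include <fcntl.h>
    #include <sys/stat.h>

    // Set atime and mtime on an open file descriptor (replaces utime()).
    bool SetTimestamps(int fd, time_t mtime) {
       struct timespec times[2];
       times[0].tv_sec = mtime; times[0].tv_nsec = 0;  // access time
       times[1].tv_sec = mtime; times[1].tv_nsec = 0;  // modification time
       return futimens(fd, times) == 0;
       // For a path instead of a descriptor:
       //   utimensat(AT_FDCWD, path, times, 0);
    }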
|
|
|
|
Servers might respond with a complete file, either because they don't
support Ranges at all or because the If-Range condition isn't satisfied,
so we have to parse the headers curl gets ourselves, to seek or truncate
the file we have so far.
This also finally adds the testcase testing a bunch of partial
situations for both http and https – which is now all green.
Closes: 617643, 667699
LP: 1157943
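What "seek or truncate" boils down to, as a sketch (hypothetical helper,
not the actual patch):

    #include <sys/types.h>
    #include <unistd.h>

    // Position the partial file according to the server's answer.
    bool PrepareFile(int fd, int httpStatus, off_t startPos) {
       if (httpStatus == 206)           // ranged reply: continue writing
          return lseek(fd, startPos, SEEK_SET) == startPos;
       if (httpStatus == 200) {         // full file: drop the partial data
          if (ftruncate(fd, 0) != 0)
             return false;
          return lseek(fd, 0, SEEK_SET) == 0;
       }
       return false;
    }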
|
|
No effective behavior change, just shuffling big chunks of code between
methods and classes, to split them into those strongly related to our
client implementation and those implementing HTTP.
The idea is to get HTTPS to a point at which most of the implementation
can be shared, even though the client implementations themselves are
completely different. This isn't anywhere near that yet, but it should
be enough to reuse at least a few lines from http in https now.
Git-Dch: Ignore
|
|
Our http client requests "filesize - 1" for the small edge case of
handling a file which was completely downloaded, but not yet moved to
the correct place, as we would otherwise get 416 errors in that case.
But as we can handle 416 returns now, we just special-case the situation
of requesting the exact filesize and handle it as a 200 without content
instead.
|
|
If we get a 416 from the server, it means the Range we asked for is above
the real filesize of the file on the server. Mostly this happens if the
server doesn't support If-Range, but regardless of how we ended up with
the partial data, the data is invalid, so we discard it and retry with a
fresh plate and hope for the best.
The old behavior was to consider 416 an error and retry with a different
compression until we ran out of compressions and requested the
uncompressed file (which doesn't exist on most mirrors) with an accept
line which the server answered with "406 Not Acceptable".
Closes: 710924
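The new handling in a nutshell, as a sketch (hypothetical names):

    #include <string>
    #include <unistd.h>

    // On 416 the partial data cannot match the server's file: throw it
    // away and retry the same request without a Range header.
    void On416(std::string const &partialFile, bool &sendRange) {
       unlink(partialFile.c_str());  // discard the invalid partial data
       sendRange = false;            // fresh plate: request the full file
    }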
|
|
|
|
Commit 2b9c9b7f28b18f6ae3e422020e8934872b06c9f3 not only removes
keep-alive, but also changes the request URI sent to proxies, which
require absolute URIs rather than the usual absolute paths.
Closes: 717891
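For reference, the request line differs as follows; a tiny sketch (the
host and path are illustrative):

    #include <string>

    // direct:  GET /debian/dists/sid/InRelease HTTP/1.1
    // proxied: GET http://deb.debian.org/debian/dists/sid/InRelease HTTP/1.1
    std::string RequestLine(bool viaProxy, std::string const &host,
                            std::string const &path) {
       std::string const uri = viaProxy ? "http://" + host + path : path;
       return "GET " + uri + " HTTP/1.1\r\n";
    }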
|
|
|
|
|
|
|
|
(LP: #1003633)
|
|
- quote spaces in filenames, as the http method is also
(potentially) used for non-deb/dsc content that may contain
spaces; thanks to Daniel Hartwig and Thomas Bushnell
(LP: #1086997)
|
|
|
|
- after many years of pointless discussions, disable http/1.1 pipelining
by default, as many webservers and proxies seem to be unable to conform
to the specification's MUSTs (RFC 2616 section 8.1.2.2) (LP: #996151)
|
|
- handle redirections in the worker with the right method instead of
in the method the redirection occurred in (Closes: #668111)
* methods/http.cc:
- forbid redirects to change protocol
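The protocol check is a simple scheme comparison; a sketch (hypothetical
helper, assuming well-formed "scheme://" URIs):

    #include <string>

    // Forbid redirects that switch the scheme, e.g. https:// -> http://
    bool RedirectAllowed(std::string const &from, std::string const &to) {
       std::string const scheme = from.substr(0, from.find("://") + 3);
       return to.compare(0, scheme.size(), scheme) == 0;
    }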
|
|
with g++-4.1 it complains about this, so let's be extra clear
|
|
to the more standard PACKAGE_VERSION and make it work in every file
|
|
|