|
Bug #778375 uncovered that https wasn't properly integrated into the
class family tree of http as it was supposed to be, leading to a NULL
pointer dereference. Fixing this 'properly' was deemed too much diff
for practically no gain that late in the release, so commit
0c2dc43d4fe1d026650b5e2920a021557f9534a6 just fixed the symptom, while
this commit fixes the cause and adds a test.
|
|
|
|
Add an explicit ReceivedData flag to HttpsMethod that indicates when
we got data from the connection, so that we can send URIStart()
to the parent.
This is needed because URIStart was moved in f9b4f12d from
the progress_callback to write_data(), which only checks
Res.Size. In the old code, if progress_callback is called by
libcurl (and sets Res.Size) before write_data is called, then
URIStart() is never sent. Making this an explicit ReceivedData
variable fixes the issue.
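A minimal sketch of the idea, with approximate member and callback names
rather than the actual apt code:

    // Sketch: a libcurl write callback that sends URIStart() exactly once,
    // on the first body data received, instead of inferring it from Res.Size.
    static size_t write_data(void *buffer, size_t size, size_t nmemb, void *userp)
    {
       HttpsMethod *me = static_cast<HttpsMethod *>(userp);
       if (me->ReceivedData == false)
       {
          me->URIStart(me->Res);              // notify the parent process
          me->ReceivedData = true;
       }
       if (me->File->Write(buffer, size * nmemb) == false)
          return 0;                           // tells libcurl to abort the transfer
       return size * nmemb;
    }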
|
|
Real webservers (like Apache) actually send an error page with a 416
response, but our client didn't expect one, leaving the page on the
socket to be parsed as the response to the next request (http) or as
file content (https), which isn't what we want at all… The symptom is
a "Bad header line", as HTML usually doesn't parse that well as an
HTTP header.
This manifests itself e.g. if we have a complete file (or a larger
one) in partial/ which isn't discarded by If-Range because the server
doesn't support it (or the file is simply newer, think: mirror
rotation).
It is a sort-of regression of 78c72d0ce22e00b194251445aae306df357d5c1a,
which removed the filesize - 1 trick, but that had its own problems…
To properly test this, our webserver gains the ability to reply with
Transfer-Encoding: chunked, as most real webservers use it to send
dynamically generated error pages.
(The tests and their binary helpers had to be slightly modified to
apply, but the patch fixing the issue itself is unchanged.)
Closes: 768797
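A hedged sketch of the decision this implies, with hypothetical names;
the real fix additionally has to drain the error page from the socket:

    #include <sys/stat.h>
    #include <string>

    // Sketch (names hypothetical): decide how to react to a 416
    // "Requested Range Not Satisfiable" for a file we resumed from partial/.
    enum class RangeErrorAction { KeepCompleteFile, RestartFromZero };

    static RangeErrorAction OnRangeNotSatisfiable(std::string const &PartialFile,
                                                  unsigned long long ExpectedSize)
    {
       struct stat Buf;
       if (stat(PartialFile.c_str(), &Buf) == 0 &&
           static_cast<unsigned long long>(Buf.st_size) >= ExpectedSize)
          return RangeErrorAction::KeepCompleteFile;   // what we have is already complete
       return RangeErrorAction::RestartFromZero;       // truncate and fetch again
    }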
|
|
Real webservers (like Apache) actually send an error page with a 416
response, but our client didn't expect one, leaving the page on the
socket to be parsed as the response to the next request (http) or as
file content (https), which isn't what we want at all… The symptom is
a "Bad header line", as HTML usually doesn't parse that well as an
HTTP header.
This manifests itself e.g. if we have a complete file (or a larger
one) in partial/ which isn't discarded by If-Range because the server
doesn't support it (or the file is simply newer, think: mirror
rotation).
It is a sort-of regression of 78c72d0ce22e00b194251445aae306df357d5c1a,
which removed the filesize - 1 trick, but that had its own problems…
To properly test this, our webserver gains the ability to reply with
Transfer-Encoding: chunked, as most real webservers use it to send
dynamically generated error pages.
Closes: 768797
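For illustration, a sketch of how such a test webserver could frame a
chunked reply (the helper name is made up):

    #include <cstdio>
    #include <string>

    // Sketch: wrap a payload in HTTP/1.1 chunked transfer encoding,
    // as the test webserver would for dynamically generated error pages:
    // hex chunk size, CRLF, data, CRLF, then a terminating zero-length chunk.
    static std::string Chunkify(std::string const &Body)
    {
       char Size[32];
       std::snprintf(Size, sizeof(Size), "%zx\r\n", Body.size());
       return std::string(Size) + Body + "\r\n" + "0\r\n\r\n";
    }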
|
|
Do not drop privileges in the methods when using an older version of
libapt that does not yet support the chown magic in partial/. To
do this, DropPrivileges() will now ignore an empty Apt::Sandbox::User.
Clean up all hardcoded _apt along the way.
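A rough sketch of the intended behaviour, assuming apt's global _config
object; the real DropPrivileges() does considerably more:

    #include <apt-pkg/configuration.h>
    #include <string>
    #include <unistd.h>

    // Sketch (simplified): an empty APT::Sandbox::User means "do not drop
    // privileges", e.g. when a parent with an older libapt has not chown'ed
    // partial/ to the sandbox user yet.
    bool DropPrivileges()
    {
       std::string const SandboxUser = _config->Find("APT::Sandbox::User");
       if (SandboxUser.empty() == true || getuid() != 0)
          return true;   // nothing to do
       // ... look up the user and setgroups()/setgid()/setuid() as before ...
       return true;
    }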
|
|
Communicate the failure reason from the methods to the parent
and Rename() failed files.
|
|
|
|
|
|
|
|
|
|
|
|
Add a new "Debian-apt" user that owns the /var/lib/apt/lists
and /var/cache/apt/archive directories. The methods
http, https, ftp, gpgv and gzip switch to this user when they
start.
Thanks to Julian, "ioerror" and Tor's switch_id() code.
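A sketch of a switch_id()-style helper as the methods might use it (not
the actual implementation):

    #include <grp.h>
    #include <pwd.h>
    #include <unistd.h>
    #include <cstdlib>

    // Sketch: permanently drop from root to the dedicated download user
    // before doing any network or file work.
    static void SwitchToUser(char const *name)
    {
       struct passwd const *pw = getpwnam(name);
       if (pw == nullptr)
          std::exit(100);
       if (setgroups(1, &pw->pw_gid) != 0 ||   // drop supplementary groups
           setgid(pw->pw_gid) != 0 ||          // change group while still root
           setuid(pw->pw_uid) != 0)            // finally give up root itself
          std::exit(100);
    }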
|
|
When doing Acquire::http{,s}::Proxy-Auto-Detect, run the auto-detect
command for each host instead of only once. This should make using
"proxy" from libproxy-tools feasible which can then be used for PAC
style or other proxy configurations.
Closes: #759264
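A sketch of the per-host lookup, under the assumption that the
configured command takes the target URI as its argument and prints the
proxy to use (or "DIRECT") on stdout, as libproxy's proxy tool does:

    #include <stdio.h>
    #include <string>

    // Sketch (behaviour assumed): run the auto-detect command once per host
    // and read the proxy from the first line it prints.
    static std::string AutoDetectProxy(std::string const &Command, std::string const &Uri)
    {
       std::string const Cmd = Command + " " + Uri;
       FILE *Pipe = popen(Cmd.c_str(), "r");
       if (Pipe == nullptr)
          return "";
       char Line[1024] = "";
       std::string Proxy;
       if (fgets(Line, sizeof(Line), Pipe) != nullptr)
          Proxy = Line;
       pclose(Pipe);
       while (Proxy.empty() == false && (Proxy.back() == '\n' || Proxy.back() == '\r'))
          Proxy.pop_back();   // strip the trailing newline
       return Proxy;
    }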
|
|
|
|
|
|
Besides being a bit cleaner, it hopefully also resolves the oddball
problems I have with high levels of parallel jobs.
Git-Dch: Ignore
Reported-By: iwyu (include-what-you-use)
|
|
Reported-By: gcc -Wunused-parameter
Git-Dch: Ignore
|
|
Git-Dch: Ignore
Reported-By: gcc -Wpedantic
|
|
|
|
Git-Dch: Ignore
|
|
This change prevents changing the protocol from https to http.
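One way to express this with libcurl, which the https method uses, is to
restrict the protocols allowed for redirects; a sketch, not necessarily
the actual patch:

    #include <curl/curl.h>

    // Sketch: follow redirects, but never allow a downgrade from https to http.
    static void RestrictRedirectsToHttps(CURL *curl)
    {
       curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
       curl_easy_setopt(curl, CURLOPT_PROTOCOLS, (long)CURLPROTO_HTTPS);
       curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS, (long)CURLPROTO_HTTPS);
    }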
|
|
Reporting it via progress means that e.g. a redirect will trigger it,
too, so you get a Get & a Hit, while http only reports a Hit, as it
should be.
|
|
cppcheck complains about the obsolete utime as it was removed in
POSIX.1-2008, and recommends using utimensat/futimens instead as
those are in POSIX, which is why commit 9ce3cfc9 switched to them.
It is just that they aren't as portable as the standard suggests:
at least our kFreeBSD and Hurd ports stumble over them at runtime.
So to make both the ports and cppcheck happy, we use utimes instead.
Closes: 738567
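A small sketch of the portable variant (illustrative helper, not the apt
code):

    #include <sys/time.h>
    #include <ctime>

    // Sketch: set a file's access and modification time with utimes(), which
    // the kFreeBSD and Hurd ports handle fine, unlike futimens()/utimensat().
    static int SetTimestamp(char const *path, time_t when)
    {
       struct timeval times[2];
       times[0].tv_sec = when;   // access time
       times[0].tv_usec = 0;
       times[1].tv_sec = when;   // modification time
       times[1].tv_usec = 0;
       return utimes(path, times);
    }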
|
|
The most "visible" change is from utime to utimensat/futimens,
as the former isn't part of POSIX anymore.
Reported-By: cppcheck
Git-Dch: Ignore
|
|
Servers might respond with the complete file, either because they
don't support Ranges at all or because the If-Range condition isn't
satisfied, so we have to parse the headers curl receives ourselves to
seek within or truncate the file we have so far.
This also finally adds the testcase covering a bunch of partial
situations for both http and https - which is now all green.
Closes: 617643, 667699
LP: 1157943
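A sketch of such a header callback, reduced to the 200-vs-206 decision
(names hypothetical; it would be installed via CURLOPT_HEADERFUNCTION
and CURLOPT_HEADERDATA):

    #include <string>

    // Sketch (simplified): inspect the response status ourselves so a plain
    // 200 makes us truncate the partial file, while a 206 lets us append.
    struct DownloadState { bool StartOver = false; };

    static size_t parse_header(char *buffer, size_t size, size_t nitems, void *userp)
    {
       DownloadState *state = static_cast<DownloadState *>(userp);
       std::string const header(buffer, size * nitems);
       if (header.compare(0, 5, "HTTP/") == 0 && header.find(" 200") != std::string::npos)
          state->StartOver = true;   // the server ignored our Range/If-Range
       return size * nitems;
    }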
|
|
As discussed at length in lp:1157943, partial https support was
utterly broken, as a 206 response was handled as an (unhandled)
error. This is the first part of fixing it, by supporting a 206
response and starting to deal with 416.
|
|
|
|
|
|
(closes: #705648)
|
|
- reuse connection in https, thanks to Thomas Bushnell, BSG for the
patch. LP: #1087543, Closes: #695359
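The gist of connection reuse with libcurl, sketched with a hypothetical
accessor: keep one easy handle alive instead of creating a fresh one per
request.

    #include <curl/curl.h>

    // Sketch: create the easy handle once and keep it for the lifetime of
    // the method so libcurl can reuse the TLS connection between requests.
    static CURL *PersistentHandle = nullptr;

    static CURL *GetHandle()
    {
       if (PersistentHandle == nullptr)
          PersistentHandle = curl_easy_init();
       return PersistentHandle;   // set per-request options, then curl_easy_perform()
    }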
|
|
|
|
to the more standard PACKAGE_VERSION and make it work in every file
|
|
- if a file without an extension is requested, send an 'Accept: text/*'
header to avoid the server choosing unsupported compressed files
in a content-negotiation attempt (Closes: #657560)
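A sketch of the idea with a made-up helper name: only requests for
extension-less paths get the extra header.

    #include <curl/curl.h>
    #include <string>

    // Sketch: requests for paths without a file extension get an Accept
    // header so content negotiation cannot hand us an unsupported
    // compressed variant.
    static struct curl_slist *MaybeAddAccept(struct curl_slist *headers, std::string const &path)
    {
       std::string::size_type const slash = path.rfind('/');
       std::string const name = (slash == std::string::npos) ? path : path.substr(slash + 1);
       if (name.find('.') == std::string::npos)
          headers = curl_slist_append(headers, "Accept: text/*");
       return headers;
    }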
|
|
- use curl's list append instead of appending Range and If-Range by
hand, which generates malformed requests; thanks to Mel Collins for
the hint! (Closes: #646381)
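A sketch of the libcurl-managed variant (helper name made up):

    #include <curl/curl.h>
    #include <cstdio>

    // Sketch: let curl manage the header list instead of splicing
    // Range/If-Range into a hand-built request.
    static struct curl_slist *AddRangeHeaders(struct curl_slist *headers,
                                              unsigned long long have,
                                              char const *last_modified)
    {
       char line[256];
       std::snprintf(line, sizeof(line), "Range: bytes=%llu-", have);
       headers = curl_slist_append(headers, line);
       std::snprintf(line, sizeof(line), "If-Range: %s", last_modified);
       headers = curl_slist_append(headers, line);
       return headers;   // hand to CURLOPT_HTTPHEADER; free with curl_slist_free_all()
    }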
|
|
on the FileFd instead
|
|
|
|
- fix double delete (LP: #848907)
- ignore only the invalid regexp instead of all options
* apt-pkg/acquire-item.h, apt-pkg/deb/debmetaindex.cc:
- fix fetching language information by adding OptionalSubIndexTarget
* methods/https.cc:
- clean up broken downloads properly
* ftparchive/cachedb.cc:
- fix the buffer size in bytes2hex
* apt-pkg/deb/deblistparser.cc:
- fix crash when the dynamic mmap needs to be grown in
LoadReleaseInfo (LP: #854090)
|
|
- clean up broken downloads properly
|
|
|
|
size are pretty unlikely for now, but we need it for deb
packages which could become bigger than 4GB now (LP: #815895)
|
|
|
|
- fix CURLOPT_SSL_VERIFYHOST by really passing 2 to it if enabled
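For reference, a minimal sketch of what passing 2 looks like
(illustrative helper):

    #include <curl/curl.h>

    // Sketch: 2 is the value that enables full hostname verification for
    // CURLOPT_SSL_VERIFYHOST; 1 does not mean "verify".
    static void SetPeerVerification(CURL *curl, bool const verify)
    {
       curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, verify ? 1L : 0L);
       curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, verify ? 2L : 0L);
    }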
|
|
* spot & fix various typos in all manpages
* German manpage translation update
* cmdline/apt-cache.cc:
- remove translatable marker from the "%4i %s\n" string
* buildlib/po4a_manpage.mak:
- instruct debiandoc to build files with utf-8 encoding
* buildlib/tools.m4:
- fix some warnings from the buildtools
* apt-pkg/acquire-item.cc:
- add PDiffs::Limit configuration options to avoid downloading
too many or too-big patches (Closes: #554349)
* debian/control:
- let all packages depend on ${misc:Depends}
* share/*-archive.gpg:
- remove the horribly outdated files. We already depend on
the keyring so we don't need to ship our own version
* cmdline/apt-key:
- error out if wget is not installed (Closes: #545754)
- add a --keyring option as we may now have many keyrings
* methods/gpgv.cc:
- pass all keyrings (TrustedParts) to gpgv instead of
using only one trusted.gpg keyring (Closes: #304846)
* methods/https.cc:
- finally merge the rest of the patchset from Arnaud Ebalard
with the CRL and Issuers options, thanks! (Closes: #485963)
|
|
|
|
otherwise in the Configuration class.
|
|
method, as this is saner than using only the http options
without any possibility to override them for https.
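A sketch of such a lookup, assuming apt's global _config object and a
made-up helper name:

    #include <apt-pkg/configuration.h>
    #include <string>

    // Sketch: prefer the Acquire::https value but fall back to the
    // Acquire::http one, so https can override without duplicating
    // every setting.
    static std::string FindHttpsOption(char const *name)
    {
       std::string const fallback = _config->Find((std::string("Acquire::http::") + name).c_str());
       return _config->Find((std::string("Acquire::https::") + name).c_str(), fallback.c_str());
    }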
|
|
thanks Timothy J. Miller! (Closes: #355782)
|
|
|
|
/etc/apt/auth.conf that can be used to store usernames/passwords
in a "netrc"-style file (with the extension that it supports "/"
in a machine definition; see the illustrative entry below). Based on
the maemo git branch.
* apt-pkg/deb/dpkgpm.cc:
- add "purge" to list of known actions
|