Age | Commit message | Author |
|
Methods are told which hashes the acquire system expects, which means we
can use this list to restrict what we calculate in the methods: any extra
hash we calculate is wasted effort, as we can't compare it with anything
anyway.
Adding support for a new hash algorithm is therefore 'free' now, and if an
algorithm is no longer provided in a repository for a file, we
automatically stop calculating it.
In practice this results in a speed-up in Debian, as we don't have SHA512
there (so far), so we effectively stop calculating it.
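A minimal sketch of the idea, with hypothetical names (APT's real Hashes class differs): only feed data to the algorithms the acquire system asked for, so anything we could not compare anyway costs nothing.

```cpp
#include <cstddef>
#include <set>
#include <string>
#include <vector>

struct Hasher {                 // stand-in for a real MD5/SHA* implementation
   std::string Name;
   void Update(char const *Data, std::size_t Size) { (void)Data; (void)Size; }
};

void UpdateExpected(std::vector<Hasher> &All,
                    std::set<std::string> const &Expected,
                    char const *Data, std::size_t Size)
{
   for (auto &H : All)
      if (Expected.count(H.Name) != 0)   // skip anything we couldn't compare
         H.Update(Data, Size);
}

int main()
{
   std::vector<Hasher> hashers{{"MD5Sum"}, {"SHA1"}, {"SHA256"}, {"SHA512"}};
   std::set<std::string> const expected{"SHA256"};  // all the repo provides
   char buf[4096] = {};
   UpdateExpected(hashers, expected, buf, sizeof(buf));
}
```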
|
|
Servers that advertise that they close the connection get the 'Closes'
encoding flag, but this conflicts with servers that respond with a
transfer-encoding (e.g. chunked), as both are saved in the same flag.
We have a better flag for the keep-alive (or not) state of the connection
anyway, so we check that instead of the encoding.
In practice this is not much of a problem, as the real servers we talk to
are HTTP/1.1 servers (with keep-alive), and there isn't much point in
chunked encoding if you are going to close anyway, but our simple
testserver stumbles over this if pressed, and it's a bit cleaner, too.
Git-Dch: Ignore
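A minimal sketch of the separation, with illustrative names (not APT's real ServerState): connection persistence and transfer framing live in two separate flags, so one header can no longer clobber the other.

```cpp
#include <string>

struct ServerState {
   bool Persistent = true;   // connection keep-alive state
   bool Chunked = false;     // transfer framing, independent of the above

   void HeaderLine(std::string const &Line) {
      if (Line == "Connection: close")
         Persistent = false;              // previously: the 'Closes' flag
      else if (Line == "Transfer-Encoding: chunked")
         Chunked = true;                  // previously clobbered 'Closes'
   }
};
```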
|
|
The file method sends information about the uncompressed file, if it can
find it, as well as for the compressed file. This was done only for gzip
so far, but we support more compression types. That this information
isn't used much is a different story.
Git-Dch: Ignore
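A minimal sketch with hypothetical output fields: for every supported extension, not just .gz, report the uncompressed sibling if it exists.

```cpp
#include <iostream>
#include <string>
#include <vector>
#include <sys/stat.h>

int main()
{
   std::string const File = "Packages.xz";
   std::vector<std::string> const Exts = {".gz", ".bz2", ".xz", ".lzma"};
   for (auto const &Ext : Exts) {
      if (File.size() <= Ext.size() ||
          File.compare(File.size() - Ext.size(), Ext.size(), Ext) != 0)
         continue;   // not compressed with this type
      std::string const Alt = File.substr(0, File.size() - Ext.size());
      struct stat St;
      if (stat(Alt.c_str(), &St) == 0)   // uncompressed sibling exists
         std::cout << "Alt-Filename: " << Alt << '\n'
                   << "Alt-Size: " << St.st_size << '\n';
   }
}
```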
|
|
Git-Dch: Ignore
|
|
The worker expects the methods to tell it when they start or finish
downloading a file. Various pieces of information are passed along in this
report, including the (expected) filesize. https was using a "global"
struct for reporting, which made it 'reuse' incorrect values in some
cases, like a non-existent InRelease falling back to Release{,.gpg},
resulting in a size-mismatch warning. By reducing the scope and
redesigning how the values are set, we can fix this and related issues.
Closes: 777565, 781509
Thanks: Robert Edmonds and Anders Kaseorg for the initial patches
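A minimal sketch of the scoping fix (simplified, not the real HttpsMethod code): a fresh result object per request instead of a long-lived one, so a failed InRelease cannot leak its values into the Release fallback.

```cpp
struct FetchResult {            // simplified stand-in for the report payload
   unsigned long long Size = 0;
   bool IMSHit = false;
};

void Fetch(/* one queued item */)
{
   FetchResult Res;             // was: effectively global, reused across items
   // ... perform the request, filling Res and reporting it to the worker ...
   (void)Res;
}                               // Res dies here; a fallback starts out clean
```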
|
|
It might be quite interesting which file (content) made curl freak out,
and the other methods keep the file around as well.
Git-Dch: Ignore
|
|
Working with strings C-style is complicated and error-prone, so by
converting to C++ style we gain some simplicity and avoid buffer
overflows introduced by later extensions.
Git-Dch: Ignore
|
|
Bug #778375 uncovered that https wasn't properly integrated into the class
family tree of http as it was supposed to be, leading to a NULL pointer
dereference. Fixing this 'properly' was deemed too much diff for
practically no gain that late in the release, so commit
0c2dc43d4fe1d026650b5e2920a021557f9534a6 just fixed the symptom, while
this commit here fixes the cause and adds a test.
|
|
Do not crash in ServerState::HeaderLine if there is no Owner.
Closes: #778375
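A minimal sketch of the guard, with a made-up class layout: check the optional Owner back-pointer before dereferencing it.

```cpp
#include <string>

struct Method { void Log(std::string const &Msg) { (void)Msg; } };

struct ServerState {
   Method *Owner = nullptr;    // may legitimately be unset

   bool HeaderLine(std::string const &Line) {
      if (Owner != nullptr)    // previously dereferenced unconditionally
         Owner->Log(Line);
      return true;
   }
};
```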
|
|
Add an explicit ReceivedData flag to HttpsMethod that indicates when
we got data from the connection, so that we can send URIStart()
to the parent.
This is needed because URIStart() was moved in f9b4f12d from
the progress_callback to write_data(), and it only checks
Res.Size. In the old code, if progress_callback was called by
libcurl (and set Res.Size) before write_data was called, then
URIStart() was never sent. Making this an explicit ReceivedData
variable fixes the issue.
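A minimal sketch of the fix (simplified; the callback signature matches libcurl's CURLOPT_WRITEFUNCTION): the first payload bytes trigger URIStart() exactly once, tracked by an explicit flag.

```cpp
#include <cstddef>

struct HttpsMethod {
   bool ReceivedData = false;
   void URIStart() { /* report the start of the transfer to the parent */ }

   // signature matches libcurl's CURLOPT_WRITEFUNCTION callbacks
   static std::size_t write_data(char *ptr, std::size_t size,
                                 std::size_t nmemb, void *userdata)
   {
      HttpsMethod *me = static_cast<HttpsMethod *>(userdata);
      if (me->ReceivedData == false) {   // first actual payload bytes
         me->URIStart();
         me->ReceivedData = true;
      }
      // ... write ptr[0 .. size*nmemb) to the destination file ...
      (void)ptr;
      return size * nmemb;               // tell libcurl all was consumed
   }
};
```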
|
|
Real webservers (like apache) actually send an error page with a 416
response, but our client didn't expect one, leaving the page on the socket
to be parsed as the response to the next request (http) or as file content
(https), which isn't what we want at all… The symptom is a "Bad header
line", as html usually doesn't parse that well as an http header.
This manifests itself e.g. if we have a complete file (or larger) in
partial/ which isn't discarded via If-Range, because the server doesn't
support it (or the file is just newer, think: mirror rotation).
It is a sort-of regression of 78c72d0ce22e00b194251445aae306df357d5c1a,
which removed the filesize - 1 trick, but that had its own problems…
To properly test this, our webserver gains the ability to reply with
transfer-encoding: chunked, as most real webservers will use it to send
dynamically generated error pages.
(The tests and their binary helpers had to be slightly modified to
apply, but the patch to fix the issue itself is unchanged.)
Closes: 768797
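A minimal sketch of the client-side consequence (not APT's real code): the 416 error page has to be read off the socket and discarded, here for Content-Length framing; chunked framing would need its own decoder.

```cpp
#include <unistd.h>

// Read and throw away exactly 'n' body bytes so the error page cannot be
// mistaken for the next response ("Bad header line") or for file content.
static void DrainBody(int fd, long long n)
{
   char trash[4096];
   while (n > 0) {
      size_t want = n < (long long)sizeof(trash) ? (size_t)n : sizeof(trash);
      ssize_t got = read(fd, trash, want);
      if (got <= 0)
         break;          // connection died; nothing left to clean up
      n -= got;
   }
}
```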
|
|
Real webservers (like apache) actually send an error page with a 416
response, but our client didn't expect one, leaving the page on the socket
to be parsed as the response to the next request (http) or as file content
(https), which isn't what we want at all… The symptom is a "Bad header
line", as html usually doesn't parse that well as an http header.
This manifests itself e.g. if we have a complete file (or larger) in
partial/ which isn't discarded via If-Range, because the server doesn't
support it (or the file is just newer, think: mirror rotation).
It is a sort-of regression of 78c72d0ce22e00b194251445aae306df357d5c1a,
which removed the filesize - 1 trick, but that had its own problems…
To properly test this, our webserver gains the ability to reply with
transfer-encoding: chunked, as most real webservers will use it to send
dynamically generated error pages.
Closes: 768797
|
|
We use it in other places already, even though it is a fairly new
addition to the POSIX family (2008), but rolling our own here is
really something that should be avoided in such an important method.
Git-Dch: Ignore
|
|
'pos_is_okay'
It does not have any desired side effect, so we just mark it as const to
properly advertise this fact to developers, compilers and linters alike.
Reported-By: cppcheck
Git-Dch: Ignore
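A tiny sketch of the change: a query method with no side effects advertises that with const, so compiler and linter can enforce it at every call site.

```cpp
class Cursor {
   long pos = 0, end = 0;
public:
   // read-only query: 'const' makes accidental writes a compile error
   bool pos_is_okay() const { return pos >= 0 && pos <= end; }
};
```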
|
|
Do not drop privileges in the methods when using an older version of
libapt that does not support the chown magic in partial/ yet. To
do this, DropPrivileges() now ignores an empty Apt::Sandbox::User.
Clean up all hardcoded _apt along the way.
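A minimal sketch of the new behaviour (simplified signature, not the real DropPrivileges): an empty sandbox user means "keep privileges".

```cpp
#include <string>

bool DropPrivileges(std::string const &SandboxUser)
{
   if (SandboxUser.empty())   // unset, e.g. running against an older libapt
      return true;            // keep our privileges, nothing else to do
   // ... otherwise look up the user and setgroups/setgid/setuid as usual ...
   return true;
}
```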
|
|
Instead of using strcat, use a C++ std::string to avoid overflowing
this buffer. Thanks to David Garfield.
Closes: #76442
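A minimal sketch of the pattern (illustrative paths): build the value in a std::string, which grows as needed, instead of strcat'ing into a fixed-size char buffer.

```cpp
#include <cstdio>
#include <string>

int main()
{
   // before: char Buffer[300]; strcat(Buffer, ...); strcat(Buffer, ...); ...
   std::string Buffer;
   Buffer.append("/var/lib/apt/lists");
   Buffer.append("/");
   Buffer.append("partial");   // no overflow, however long this grows
   std::puts(Buffer.c_str());
}
```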
|
|
Git-Dch: ignore
|
|
The Maximum-Size protection breaks the http pipeline reorder code
because it relies on the object being fetched entirely, so that
it can compare the hash of the downloaded data. So instead of
stopping when the Maximum-Size of the expected item is reached, we
only stop when the maximum size of the biggest item in the queue
is reached. This way the pipeline reorder code keeps working.
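A minimal sketch with hypothetical types: the limit to enforce is the largest expected size of any item still queued, so a server-side reorder of pipelined responses can still be downloaded and caught by hash comparison.

```cpp
#include <algorithm>
#include <vector>

struct QueueItem { unsigned long long MaximumSize; };

unsigned long long EffectiveLimit(std::vector<QueueItem> const &Queue)
{
   unsigned long long Limit = 0;
   for (auto const &I : Queue)        // the biggest queued item wins
      Limit = std::max(Limit, I.MaximumSize);
   return Limit;                      // 0 can mean "no limit known"
}
```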
|
|
Communicate the fail reason from the methods to the parent
and Rename() failed files.
|
|
feature/acq-trans
Conflicts:
apt-pkg/acquire-item.cc
apt-pkg/acquire-item.h
methods/gpgv.cc
|
|
Conflicts:
apt-pkg/acquire-item.cc
|
|
'unsigned int'
Git-Dch: Ignore
Reported-By: cppcheck
|
|
Git-Dch: Ignore
|
|
Reported-By: cppcheck
Git-Dch: Ignore
|
|
Dch-Ignore: true
|
|
feature/acq-trans
|
|
Add a new "Debian-apt" user that owns the /var/lib/apt/lists
and /var/cache/apt/archives directories. The methods
http, https, ftp, gpgv and gzip switch to this user when they
start.
Thanks to Julian and "ioerror", and Tor's "switch_id()" code.
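A minimal sketch of such a switch in the usual Unix idiom (illustrative, resembling what switch_id() does; not the actual code): drop supplementary groups, then the gid, then the uid, in that order.

```cpp
#include <grp.h>
#include <pwd.h>
#include <unistd.h>

int switch_to_user(const char *name)
{
   struct passwd *pw = getpwnam(name);
   if (pw == nullptr)
      return -1;
   if (setgroups(1, &pw->pw_gid) != 0) return -1;  // supplementary groups
   if (setgid(pw->pw_gid) != 0) return -1;         // primary group
   if (setuid(pw->pw_uid) != 0) return -1;         // finally the user itself
   // paranoia: verify the drop actually happened and is irreversible
   if (setuid(0) != -1)
      return -1;
   return 0;
}
```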
|
|
feature/acq-trans
Conflicts:
apt-pkg/acquire-item.cc
apt-pkg/acquire-item.h
methods/copy.cc
test/integration/test-hashsum-verification
|
|
Conflicts:
apt-pkg/acquire-item.cc
apt-pkg/acquire-item.h
apt-pkg/cachefilter.h
configure.ac
debian/changelog
|
|
When we do a ReverifyAfterIMS() we use the copy: method to
verify the hashes again. If the user uses -o Dir=./something/relative,
this fails because we use the URI class in copy.cc, which strips
away the leading relative part. By not using URI this is fixed.
Closes: #762160
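A minimal sketch of the pitfall (illustrative, much simpler than APT's URI class): running a path through URI parsing can eat the leading "./", so the copy method ends up looking at a different file than the plain string names.

```cpp
#include <iostream>
#include <string>

// a simplistic parser that strips a leading "./" like a scheme-less URI
std::string via_uri(std::string const &Path)
{
   if (Path.compare(0, 2, "./") == 0)
      return Path.substr(2);
   return Path;
}

int main()
{
   std::string const p = "./something/relative/lists/file";
   std::cout << via_uri(p) << '\n';   // "something/relative/..." (wrong file)
   std::cout << p << '\n';            // unchanged when used as a plain string
}
```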
|
|
incorrect invalidating of unauthenticated data (CVE-2014-0488)
incorrect verification of 304 reply (CVE-2014-0487)
incorrect verification of Acquire::GzipIndexes (CVE-2014-0489)
|
|
Prefix all answers with the URL that the answer is for. This
helps when debugging with pipelining enabled.
|
|
feature/acq-trans
|
|
Conflicts:
apt-pkg/acquire-item.cc
configure.ac
debian/changelog
doc/apt-verbatim.ent
doc/po/apt-doc.pot
doc/po/de.po
doc/po/es.po
doc/po/fr.po
doc/po/it.po
doc/po/ja.po
doc/po/pt.po
po/ar.po
po/ast.po
po/bg.po
po/bs.po
po/ca.po
po/cs.po
po/cy.po
po/da.po
po/de.po
po/dz.po
po/el.po
po/es.po
po/eu.po
po/fi.po
po/fr.po
po/gl.po
po/hu.po
po/it.po
po/ja.po
po/km.po
po/ko.po
po/ku.po
po/lt.po
po/mr.po
po/nb.po
po/ne.po
po/nl.po
po/nn.po
po/pl.po
po/pt.po
po/pt_BR.po
po/ro.po
po/ru.po
po/sk.po
po/sl.po
po/sv.po
po/th.po
po/tl.po
po/tr.po
po/uk.po
po/vi.po
po/zh_CN.po
po/zh_TW.po
test/integration/test-ubuntu-bug-346386-apt-get-update-paywall
|
|
When doing Acquire::http{,s}::Proxy-Auto-Detect, run the auto-detect
command for each host instead of only once. This should make using
"proxy" from libproxy-tools feasible, which can then be used for
PAC-style or other proxy configurations.
Closes: #759264
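A minimal sketch with hypothetical helper names: run the detect command once per host and cache the answer, instead of once per process.

```cpp
#include <map>
#include <string>

// stand-in for forking the helper (e.g. "proxy" from libproxy-tools)
// and reading the proxy URL it prints for the given request URL
static std::string RunDetectCommand(std::string const &Cmd, std::string const &Url)
{
   (void)Cmd; (void)Url;
   return "http://127.0.0.1:3128/";
}

std::string AutoDetectProxy(std::string const &Command, std::string const &Host,
                            std::string const &Url)
{
   static std::map<std::string, std::string> Cache;  // host -> proxy
   auto const Hit = Cache.find(Host);
   if (Hit != Cache.end())
      return Hit->second;                // already detected for this host
   return Cache[Host] = RunDetectCommand(Command, Url);
}
```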
|
|
This ensures that we can stop downloading if the server sends
too much data by accident (or in a malicious attempt).
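A minimal sketch of the check itself (hypothetical names): abort as soon as more data arrives than the expected item could legitimately contain.

```cpp
// MaximumSize == 0 means the expected size is unknown: accept anything
bool TooMuchData(unsigned long long Received, unsigned long long MaximumSize)
{
   return MaximumSize != 0 && Received > MaximumSize;
}
```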
|
|
The old way of handling this was that pkgAcqMetaIndex was responsible
for checking/moving both Release and Release.gpg into place. This breaks
the assumption of the transaction that each pkgAcquire::Item has
a single file that it is responsible for.
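A minimal sketch of the invariant (illustrative, not APT's real classes): a transaction can only commit or roll back cleanly if every item owns exactly one destination file, so Release and Release.gpg become separate items.

```cpp
#include <string>
#include <vector>

struct Item { std::string DestFile; };   // exactly one file per item

struct Transaction {
   std::vector<Item> Items;
   void Commit()   { /* move every Item::DestFile into place */ }
   void Rollback() { /* delete every Item::DestFile */ }
};

int main()
{
   Transaction T;
   T.Items.push_back({"partial/Release"});       // was: one item for both
   T.Items.push_back({"partial/Release.gpg"});   // now: one file each
   T.Commit();
}
```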
|
|
Conflicts:
apt-pkg/deb/deblistparser.cc
doc/po/apt-doc.pot
doc/po/de.po
doc/po/es.po
doc/po/fr.po
doc/po/it.po
doc/po/ja.po
doc/po/pl.po
doc/po/pt.po
doc/po/pt_BR.po
po/da.po
po/mr.po
po/vi.po
|