Age | Commit message | Author |
|
All other reasons from methods/connect.cc were already included.
Git-Dch: Ignore
|
|
Reported-By: gcc
Understandable: no
Git-Dch: Ignore
|
|
We want to declare some hashes as not enough for security, so that a
user will need --allow-unauthenticated or similar to get data secured
only by those hashes, but we can still use these hashes for integrity
checks if we got them.
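A rough sketch of the split this describes, with a purely hypothetical
classification function (the actual set of trusted hashes is decided
elsewhere in apt):

  #include <string>

  // assumption for illustration only: SHA2 counts as secure here
  bool UsableForSecurity(std::string const &Type) {
     return Type == "SHA256" || Type == "SHA512";
  }

  // weaker hashes remain useful to check that data arrived intact
  bool UsableForIntegrity(std::string const &Type) {
     return Type == "MD5Sum" || Type == "SHA1" || UsableForSecurity(Type);
  }

  // Data secured only by hashes failing UsableForSecurity() would need
  // --allow-unauthenticated (or similar) before apt uses it.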
|
|
Fetched() was reported for mostly nothing, while it should be called
for files coming from non-local sources (e.g. http, but not file or
xz). Previously this was called from an acquire item, but got moved to
the acquire worker instead to avoid having it (re)implemented in all
items – except that the checks there were faulty.
|
|
Git-Dch: ignore
|
|
|
|
Thanks: Andre Felipe Machado for initial patch
Closes: 414848
|
|
Reporting errors from Done() is bad for progress reporting and such,
so factoring this out is a good idea. We start by moving out the check
that a supposed-to-be clearsigned file isn't actually clearsigned –
improving the error message in the process, as we use the same message
for a similar case (NODATA): this is what I have to look at with the
venue wifi at DebCamp, and the old error message doesn't really say
anything.
|
|
Redirectors like httpredir.debian.org orchestrate the download from
multiple (hopefully close) mirrors while needing only a single central
sources.list entry by using redirects. This has the effect that the
progress report always shows the source it started with, not the
mirror it ends up fetching from, which is especially problematic for
error reporting: a "Hashsum mismatch" report for the redirector URI is
next to useless, as nobody can tell from this output alone which URI
the file was really fetched from (regardless of whether it comes from
a user or via the report script). You would need to enable debug
output and hope for the same situation to arise again…
We hence reuse the UsedMirror field of the mirror:// method, detect
redirects which change the site and declare this new site as the
UsedMirror (and adapt the description).
The disadvantage is that there is no obvious mapping anymore (it is
relatively easy to guess with some experience) from progress lines to
sources.list lines, so error messages need to take care to use the
Target description (rather than the current Item description) if they
want to refer to the sources.list entry.
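A minimal sketch (not apt's actual code, names are simplified) of the
idea: when a redirect changes the host, remember the new host so that
progress and error output can show the mirror actually used instead of
the redirector:

  #include <string>

  // helper: reduce "scheme://host/path" to "scheme://host"
  static std::string SiteOf(std::string const &uri) {
     std::string::size_type const schemeEnd = uri.find("://");
     if (schemeEnd == std::string::npos)
        return uri;
     std::string::size_type const pathStart = uri.find('/', schemeEnd + 3);
     return pathStart == std::string::npos ? uri : uri.substr(0, pathStart);
  }

  struct ItemDesc {            // simplified stand-in for pkgAcquire::ItemDesc
     std::string URI;
     std::string Description;
     std::string UsedMirror;   // reused from the mirror:// method
  };

  // called when a method reports a redirect from Desc.URI to NewURI
  void NoteRedirect(ItemDesc &Desc, std::string const &NewURI) {
     std::string const NewSite = SiteOf(NewURI);
     if (SiteOf(Desc.URI) != NewSite) {
        Desc.UsedMirror = NewSite;                           // the real source
        Desc.Description = NewSite + " " + Desc.Description; // adapt progress
     }
     Desc.URI = NewURI;
  }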
|
|
Various small leaks here and there. Nothing particularly big, but still
good to fix. Found by the sanitizers while running our testcases.
Reported-By: gcc -fsanitize
Git-Dch: Ignore
|
|
Doing this disables the implicit copy assignment operator (among
others), which would cause havoc if used on the classes as it would
just copy the pointer, not the data the d-pointer points to. For most
of the classes we don't need a copy assignment operator anyway, and in
many classes it was broken before as many contain a pointer of some
sort.
Only for our Cacheset Container interfaces do we define an explicit
copy assignment operator, which could later be implemented to copy the
data from one d-pointer to the other if we need it.
Git-Dch: Ignore
|
|
Some of them modify the ABI, but given that we prepare a big one
already, these few hardly count for much.
Git-Dch: Ignore
|
|
All other methods call it, so they should follow along even if the
work they do afterwards is hardly breathtaking and usually results in
a URIDone pretty soon. The acquire system tells the individual item
about this via a virtual method call, so even though none of our
existing items contains any critical code in these, maybe one day they
might. Consistency at least once…
This also has a good side effect: file: and cdrom: requests now appear
in the 'apt-get update' output. Finally – it never made sense to me to
hide them. Okay, I guess it did before the new hit behavior, but now
that you can actually see the difference in an update it makes sense
to see whether a file: repository changed as well.
|
|
This is an unlikely event for indexes and co, but it can happen quite
easily e.g. for changelogs, where you want to get the changelogs for
multiple binary package(version)s which happen to all be built from a
single source.
The interesting part is that the Acquire system actually detected this
already and set the item requesting the URI again to StatDone – except
that this is hardly sufficient: an Item must be Complete=true as well
to be considered truly done, and that is only the tip of the ::Done
handling iceberg. So instead of this StatDone hack we allow QItems to
be owned by multiple items and notify all owners about everything now,
so that from the point of view of each item the file was downloaded
just for it.
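A small sketch (simplified, hypothetical names) of the ownership
change: every owner of a queued item is notified, so each item behaves
as if the file was fetched just for it:

  #include <string>
  #include <vector>

  struct Item {                      // stand-in for pkgAcquire::Item
     virtual void Done(std::string const &File) = 0;
     virtual ~Item() {}
  };

  struct QItem {                     // stand-in for pkgAcquire::Queue::QItem
     std::string URI;
     std::vector<Item *> Owners;     // several items may request the same URI

     void AddOwner(Item *I) { Owners.push_back(I); }

     void NotifyDone(std::string const &File) {
        for (Item *I : Owners)       // notify every owner, not just the first
           I->Done(File);
     }
  };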
|
|
Having every item carry its own code to verify the file(s) it handles
is an error-prone process and easy to break, especially if items move
through various stages (download, uncompress, patching, …). With a
giant rework we centralize (most of) the verification to have a better
enforcement rate and (hopefully) less chance for bugs, but it breaks
the ABI big time in exchange – and as we break it anyway, it is broken
even harder.
It shouldn't affect most frontends, as they don't deal with the
acquire system at all or implement their own items, but some do and
will need to be patched (which might be an opportunity to use apt
on-board material).
The theory is simple: Items implement methods to decide if hashes need
to be checked (in this stage) and to return the expected hashes for
this item (in this stage). The verification itself is done in the
worker message passing, which has the benefit that a hashsum error is
now a proper error for the acquire system rather than a Done() which
is later revised to a Failed().
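A sketch of that theory with simplified stand-in types (the real
interfaces live in apt-pkg/acquire-item.h and apt-pkg/contrib/hashes.h):

  #include <string>

  struct HashStringList {            // tiny stand-in for the real class
     std::string SHA256;
     bool operator==(HashStringList const &o) const
     { return SHA256 == o.SHA256; }
  };

  struct Item {                      // stand-in for pkgAcquire::Item
     // must the hashes be verified in this stage (download, uncompress, …)?
     virtual bool HashesRequired() const { return true; }
     // which hashes do we expect for this item in this stage?
     virtual HashStringList GetExpectedHashes() const = 0;
     virtual ~Item() {}
  };

  // in the worker, once a method reports the hashes it calculated
  bool Verify(Item const &Itm, HashStringList const &Calculated) {
     // a mismatch is now a proper failure for the acquire system instead
     // of a Done() which is later revised to a Failed()
     return !Itm.HashesRequired() || Itm.GetExpectedHashes() == Calculated;
  }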
|
|
Not all servers we are talking to support If-Modified-Since, and some
are not even sending Last-Modified for us, so in an effort to detect
such hits we run a hashsum check comparing the 'old' and the 'new'
file. We got the hashes for the 'new' one for "free" from the methods
anyway and hence just need to calculate the old ones.
This allows us to detect hits even with unsupported servers, which in
turn means we benefit from all the new hit behavior here as well.
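The check itself is tiny; a sketch with a stand-in hash list and an
assumed file-hashing helper:

  #include <string>

  struct HashStringList {            // stand-in for apt's HashStringList
     std::string SHA256;
     bool operator==(HashStringList const &o) const
     { return SHA256 == o.SHA256; }
  };

  // assumed helper for this sketch: hash a file already on disk
  HashStringList HashOldFile(std::string const &Path);

  // NewHashes come from the method "for free"; hashing the old file is
  // the only extra work. Equal hashes mean the server resent data we
  // already have, so it can be treated like an If-Modified-Since hit.
  bool IsHiddenHit(std::string const &OldFile, HashStringList const &NewHashes) {
     return HashOldFile(OldFile) == NewHashes;
  }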
|
|
It's a bit unpredictable which permissions and owners we will
encounter on a CD-ROM (or a USB stick, as apt-cdrom is responsible for
those too), so we have to ensure in this codepath as well that
everything is nicely set up without waiting for an 'apt-get update' to
fix up the (potential) mess.
|
|
Git-Dch: Ignore
|
|
The worker is the part closest to the methods and calls the item
methods according to what it gets back from them. It is therefore a
better place to change permissions, as it is very central and can do
it now at the point the item is assigned to a method rather than when
it is queued for download (and, as before, when it is dequeued via
Done/Failure).
Git-Dch: Ignore
|
|
|
|
|
|
Using a different user for calling methods is intended to protect us
from methods running amok (via remotely exploited bugs) by limiting
what they can do. By using root:root for the final directories and
having only the files in partial writable by the methods, we enhance
this insofar as a method can't modify already verified data in its
parent directory anymore.
As a side effect, this also clears most of the problems you could have
if the final directories are shared without user-sharing or if these
directories disappear, as they are now again root-owned and only the
partial directories contain _apt-owned files (usually none if apt
isn't running), and the directory itself is autocreated with the right
permissions.
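A sketch of the resulting layout; paths, modes and the error handling
here are illustrative rather than apt's exact values:

  #include <pwd.h>
  #include <sys/stat.h>
  #include <unistd.h>

  void SetupDownloadDirs(char const *FinalDir, char const *PartialDir) {
     // final directory: root-owned, methods can't touch verified data in it
     mkdir(FinalDir, 0755);
     // partial directory: the only place the download user may write to
     mkdir(PartialDir, 0700);
     if (struct passwd const *pw = getpwnam("_apt"))
        if (chown(PartialDir, pw->pw_uid, pw->pw_gid) != 0)
           return;   // a real implementation would report this error
  }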
|
|
This ensures that we can stop downloading if the server sends too
much data by accident (or by a malicious attempt).
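A minimal sketch (hypothetical names) of such a guard: once the
expected size is known, any byte beyond it aborts the transfer instead
of being written to disk:

  #include <cstdint>

  struct Transfer {
     uint64_t ExpectedSize = 0;   // 0 = unknown, no limit enforced
     uint64_t Received = 0;

     // returns false if the server sent more data than announced
     bool Accept(uint64_t const Bytes) {
        Received += Bytes;
        return ExpectedSize == 0 || Received <= ExpectedSize;
     }
  };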
|
|
Now that we have all hashes in the acquire system, pass the info down
to the methods, so that they can use it in the request and/or to
precheck the response.
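A simplified sketch of handing the hashes down as part of the URI
Acquire message; the field names here are illustrative, not a
specification of the method interface:

  #include <sstream>
  #include <string>
  #include <vector>

  struct HashString { std::string Type, Value; };   // stand-in

  std::string BuildAcquireMessage(std::string const &URI,
                                  std::string const &Dest,
                                  std::vector<HashString> const &Expected) {
     std::ostringstream Msg;
     Msg << "600 URI Acquire\n"
         << "URI: " << URI << "\n"
         << "Filename: " << Dest << "\n";
     for (auto const &H : Expected)           // e.g. "Expected-SHA256: <hex>"
        Msg << "Expected-" << H.Type << ": " << H.Value << "\n";
     Msg << "\n";
     return Msg.str();
  }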
|
|
It is not very extensible to have the supported hashes hardcoded
everywhere, especially if they are part of virtual method names.
It is also possible that a method does not support the 'best' hash
(yet), so we might end up not being able to verify a file even though
we have a common subset of supported hashes. And those are just two of
the cases in which it is handy to have a more dynamic selection.
The downside is that this is a MAJOR API break, but the HashStringList
has a string constructor for compatibility, so with a bit of luck the
few frontends playing with the acquire system directly are okay.
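A rough stand-in sketch of the idea behind such a list: carry whatever
hashes we happen to have and compare on the shared subset; names and
details are simplified, not apt's actual implementation:

  #include <string>
  #include <vector>

  struct HashString {
     std::string Type;    // e.g. "SHA256"
     std::string Value;   // hex digest
  };

  class HashStringList {
     std::vector<HashString> list;
  public:
     HashStringList() {}
     // compatibility: accept a single "TYPE:hexdigest" string as before
     explicit HashStringList(std::string const &str) {
        std::string::size_type const colon = str.find(':');
        if (colon != std::string::npos)
           list.push_back({str.substr(0, colon), str.substr(colon + 1)});
     }
     void push_back(HashString const &h) { list.push_back(h); }
     bool usable() const { return !list.empty(); }

     // two lists match if they share a type and all shared types agree
     bool operator==(HashStringList const &other) const {
        bool shared = false;
        for (auto const &a : list)
           for (auto const &b : other.list)
              if (a.Type == b.Type) {
                 shared = true;
                 if (a.Value != b.Value)
                    return false;
              }
        return shared;
     }
  };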
|
|
Besides being a bit cleaner, it hopefully also resolves oddball problems
I have with high levels of parallel jobs.
Git-Dch: Ignore
Reported-By: iwyu (include-what-you-use)
|
|
|
|
Switching protocols at random is a bad idea; if e.g. http could switch
to file, that would be dangerous, so we limit the possibilities to
http to http and http to https. As very few people (less than 1%
according to popcon) have https support installed, this likely changes
nothing in terms of failures. The commit does add a friendly hint
about which package needs to be installed, though.
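A small sketch (hypothetical helper) of that whitelist: only http to
http and http to https redirects pass, everything else (e.g. http to
file) is refused:

  #include <string>

  static std::string Scheme(std::string const &uri) {
     return uri.substr(0, uri.find("://"));
  }

  bool RedirectAllowed(std::string const &From, std::string const &To) {
     std::string const F = Scheme(From), T = Scheme(To);
     return F == "http" && (T == "http" || T == "https");
  }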
|
|
(closes: #705648)
|
|
- handle redirections in the worker with the right method instead of
in the method the redirection occurred in (Closes: #668111)
* methods/http.cc:
- forbid redirects to change protocol
|
|
- revert the use of FileFd::Write in OutFdReady as we don't want error
reports about EAGAIN here as we retry later. Thanks to YOSHINO Yoshihito
for the report. (Closes: #671721)
|
|
- use Dump() to generate the configuration message for sending
|
|
|
|
- check return of write() as gcc recommends
* apt-pkg/acquire.cc:
- check return of write() as gcc recommends
* apt-pkg/cdrom.cc:
- check return of chdir() and link() as gcc recommends
* apt-pkg/clean.cc:
- check return of chdir() as gcc recommends
* apt-pkg/contrib/netrc.cc:
- check return of asprintf() as gcc recommends
|
|
size are pretty unlikely for now, but we need it for deb
packages which could become bigger than 4GB now (LP: #815895)
|
|
|
|
- try even harder to support really big files in the fetcher by
converting (hopefully) everything to 'long long' (Closes: #632271)
|
|
- print filename in the unmatching size warning (Closes: #623137)
|
|
- when downloading data, show the mirror being used
|
|
- show error details of failed methods
* apt-pkg/contrib/fileutl.cc:
- if a process aborts with signal, show signal number
* methods/http.cc:
- ignore SIGPIPE, we deal with EPIPE from write in
HttpMethod::ServerDie() (LP: #385144)
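A minimal standalone sketch of that approach: with SIGPIPE ignored, a
write() to a dead peer returns -1 with EPIPE and can be handled as a
normal error path instead of killing the process:

  #include <cerrno>
  #include <csignal>
  #include <unistd.h>

  int main() {
     signal(SIGPIPE, SIG_IGN);        // never die on a broken pipe

     char const msg[] = "hello";
     if (write(STDOUT_FILENO, msg, sizeof(msg) - 1) < 0 && errno == EPIPE) {
        // peer went away: clean up the connection (ServerDie() in http.cc)
        return 1;
     }
     return 0;
  }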
|
|
Jeff Licquia and Anthony Towns
|
|
- consider a ResolveError a transient-network problem
|
|
- only pass a hash if we actually got one from the method
* methods/copy.cc:
- take hashes here too (*sigh*)
|
|
- rename "hash" into ExpectedHash in pkgAcqFile, pkgAcqIndex
- add missing HashSum() call to class pkgAcqIndex
- use the data provided by the acquire-method (sent via the
{SHA256,SHA1,MD5Sum}-Hash tag) when comparing the hash; this
avoids calculating the hash twice (just like old libapt)
* apt-pkg/acquire-method.cc:
- send MD5Sum-Hash tag to libapt to be consistent with
HashString::SupportedHashes()
* apt-pkg/acquire-worker.cc:
- check with "Owner->HashSum().HashType()" what hash the frontend
is expecting and pass it to pkgAcquireItem::Done() in the new
HashString format
- add some debugging output
* apt-pkg/contrib/hashes.cc:
- fix off-by-one error when constructing a HashString from a single
input string
* apt-pkg/contrib/hashes.h:
- add "HashType()" method
* apt-pkg/init.h, apt-pkg/makefile, methods/makefile:
- break ABI
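An illustrative sketch (not the real implementation) of the HashString
behaviour the entries above touch: construction from a single
"Type:hexdigest" string plus the new HashType() accessor:

  #include <string>

  class HashString {
     std::string Type, Hash;
  public:
     explicit HashString(std::string const &StringedHash) {
        std::string::size_type const pos = StringedHash.find(":");
        if (pos != std::string::npos) {
           Type = StringedHash.substr(0, pos);
           Hash = StringedHash.substr(pos + 1); // the off-by-one hid around here
        }
     }
     std::string HashType() const { return Type; }
     std::string HashValue() const { return Hash; }
  };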
|
|
|
|
|
|
apt-pkg/acquire-item.h:
- add new pkgAcquire::Item::StatTransientNetworkError status
apt-pkg/acquire-item.cc:
- if we get a StatTransientNetworkError use old sigfile and indexfiles
apt-pkg/acquire-worker.cc:
- set StatTransientNetworkError on "Timeout", "TmpResolveFailure", "ConnectionRefused"
cmdline/apt-get.cc:
- handle a StatTransientNetworkError differently from a normal error (warning instead of error)
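A small sketch of that mapping (simplified; the real status handling
lives in the acquire items and apt-get):

  #include <string>

  enum ItemState { StatError, StatTransientNetworkError };

  ItemState ClassifyFailure(std::string const &FailReason) {
     if (FailReason == "Timeout" || FailReason == "TmpResolveFailure" ||
         FailReason == "ConnectionRefused")
        return StatTransientNetworkError;   // warn instead of a hard error
     return StatError;
  }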
|
|
|
|
|
|
Author: jgg
Date: 2001-05-22 04:42:54 GMT
G++3 fixes from Randolph
|