Age | Commit message | Author |
|
They are the small brothers of the hashsum mismatch, so they deserve
similar treatment, even though for architectural reasons we don't have
as much to display for them as for hashsum mismatches for now.
|
|
Users tend to report these errors with just this error message… not very
actionable, and it is hard to figure out if this is a temporary or
'permanent' mirror-sync issue or even the occasional apt bug.
Showing the involved hashsums and modification times should help in
triaging these kinds of bugs – and eventually we will have fewer of them
via by-hash.
The subheaders aren't marked for translation for now, as they are
technical gibberish and probably easier to deal with if not translated.
Our iconic "Hash Sum mismatch" is translated, at least.
These additions were proposed in #817240 by Peter Palfrader.
|
|
With the previous commit we track the state of transactions, so we can
now use that knowledge to avoid processing data for a transaction which
was already closed (via an abort in this case).
This is needed as multiple independent processes interact here, so there
isn't a simple immediate full-engine stop, and it would also be bad to
teach each and every item how to check if its manager has failed and
what to do in that case.
The pdiff case, for example, (potentially) deals with many items during
its lifetime: a hashsum mismatch in another file can abort the
transaction that the file we try to patch via pdiff belongs to. This
causes some of the items (which are already done) to be aborted with it,
but items still in the process of acquisition continue processing and
will later try to use all the items together, failing in strange ways as
cleanup has already happened.
The chosen solution is to dry up the communication channels instead:
ignore new requests for data acquisition, cancel requests which are not
yet assigned to a queue, and no longer call Done/Failed on items.
This means that e.g. already started or pending (e.g. pipelined)
downloads aren't stopped and continue as normal for now, but they remain
in partial/ and aren't processed further, so the next update command
will pick them up and put them to good use while the current process
fails updating (for this transaction group) in an orderly fashion.
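In sketch form the gating looks like this (simplified stand-in types,
not the actual APT classes):

    enum class TransactionState { Open, Committed, Aborted };

    struct Item;                      // stand-in for an acquire item

    struct TransactionManager {
       TransactionState State = TransactionState::Open;

       // New acquisition requests are ignored once the transaction closed.
       bool QueueURI(Item &/*item*/) {
          if (State != TransactionState::Open)
             return false;            // dry up the channel: no new downloads
          // ... enqueue as usual ...
          return true;
       }

       // Results for a closed transaction are not forwarded to the items.
       void Done(Item &/*item*/) {
          if (State != TransactionState::Open)
             return;                  // item was already cleaned up: stay silent
          // ... normal Done() processing ...
       }
    };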
Closes: 817240
Thanks: Barr Detwix & Vincent Lefevre for log files
|
|
This makes the new GPG related warnings much nicer to read; for
example, the second one here replaces the first one:
W: gpgv:/var/lib/apt/lists/example.com_dists_stable_InRelease: Weak ...
W: http://example.com/dists/stable/InRelease: Weak ...
|
|
This can be used by workers to send warnings to the main
program. The messages will be passed to _error->Warning()
by APT with the URI prepended.
We are not making this fully public yet, as the
interface might still change a bit.
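As an illustration, a worker could emit such a message over its stdout
pipe in the usual 'NNN Header' format of the method interface; the
message number and field name here are assumptions for the sketch, not
confirmed parts of the interface:

    #include <iostream>
    #include <string>

    // Send a warning from a method to the main program; APT would prepend
    // the URI when turning this into _error->Warning(). The code '104'
    // and the 'Message' field are assumptions for this sketch.
    static void SendWarning(std::string const &msg) {
       std::cout << "104 Warning\n"
                 << "Message: " << msg << "\n\n"; // blank line ends message
    }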
|
|
Reported-By: cppcheck
Git-Dch: Ignore
|
|
In 0940230d we started dropping privileges for file (and a bit later for
copy, too) with the intent of making this uniform across all methods.
The commit message says that such sources will likely fail based on the
compressors already – and there isn't much secret in the repository
content. After all, after apt has run the update everyone can access the
content via apt anyway…
There are sources though which worked before, mostly single-deb ones
(and those with the uncompressed files available).
Especially the first case may be surprising for users, so instead of
failing we make apt detect that it can't access a source as _apt, and in
that case it doesn't drop privileges (for all sources!) – but we limit
this to file/copy, so an uncompress step which might be needed will
still fail – though that failed before this regression, too.
We display a notice about this, mostly so that if it still fails (e.g.
compressed) the user has some idea what is wrong.
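A minimal sketch of such a detection, assuming the sandbox user is
called "_apt" (this models the idea, not apt's actual implementation):

    #include <pwd.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    // Check in a throw-away child whether the sandbox user could read the
    // source; the parent keeps its privileges either way.
    static bool ReadableAsSandboxUser(char const *path) {
       struct passwd const *pw = getpwnam("_apt");
       if (pw == nullptr)
          return true;                   // no such user: nothing to drop to
       pid_t const child = fork();
       if (child == 0) {
          if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0)
             _exit(1);
          _exit(access(path, R_OK) == 0 ? 0 : 1);
       }
       int status = 0;
       waitpid(child, &status, 0);
       return WIFEXITED(status) && WEXITSTATUS(status) == 0;
    }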
Closes: 805069
|
|
Unlinking /dev/null is bad; we shouldn't do that. Also, we should print
at least a warning if we tried to unlink a file but didn't manage to
pull it off (ignoring the cases where the file is /dev/null or doesn't
exist in the first place).
This was triggered by a relatively unlikely-to-cause-problems bug in
pkgAcquire::Worker::PrepareFiles, which, while handling temporary
uncompressed files (which are set to be kept compressed), would figure
out that two files are the same and prepare for sharing by deleting
them. Bad move. That also shows why not printing a warning is a bad
idea, as it hid the error in non-root test runs.
Git-Dch: Ignore
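A minimal sketch of such a guarded unlink helper (the name is made up
for the example):

    #include <cerrno>
    #include <cstdio>
    #include <cstring>
    #include <string>
    #include <unistd.h>

    // Never unlink /dev/null, and warn if the unlink really failed
    // (a file which is already gone is fine).
    static bool SafeRemoveFile(char const *caller, std::string const &file) {
       if (file.empty() || file == "/dev/null")
          return true;
       if (unlink(file.c_str()) != 0 && errno != ENOENT) {
          std::fprintf(stderr, "W: %s: unlink %s failed: %s\n",
                       caller, file.c_str(), std::strerror(errno));
          return false;
       }
       return true;
    }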
|
|
All other reasons from methods/connect.cc were already included.
Git-Dch: Ignore
|
|
Reported-By: gcc
Understandable: no
Git-Dch: Ignore
|
|
We want to declare some hashes as not enough for security, so that a
user will need --allow-unauthenticated or similar to get data secured
only by those hashes, but we can still use these hashes for integrity
checks if we got them.
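Conceptually it boils down to a policy check like this (the exact set
of hashes considered strong is illustrative only):

    #include <algorithm>
    #include <string>
    #include <vector>

    // Hashes still considered strong enough for security decisions;
    // others may be used for integrity checks only.
    static const std::vector<std::string> StrongHashes = {"SHA512", "SHA256"};

    static bool UsableForSecurity(std::string const &type) {
       return std::find(StrongHashes.begin(), StrongHashes.end(), type)
              != StrongHashes.end();
    }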
|
|
Fetched() was reported for mostly nothing, while we should be calling it
for files fetched from non-local sources (e.g. http, but not file or
xz). Previously this was called from an acquire item, but got moved to
the acquire worker instead to avoid having it (re)implemented in all
items – but the checks were faulty.
|
|
Git-Dch: ignore
|
|
|
|
Thanks: Andre Felipe Machado for initial patch
Closes: 414848
|
|
Reporting errors from Done() is bad for progress reporting and such, so
factoring this out is a good idea, and we start by moving the
'supposed-to-be clearsigned file isn't actually clearsigned' case out
first – improving the error message in the process, as we previously
used the same message as for a similar case (NODATA); that one is what I
have to look at with the venue wifi at DebCamp, and the old error
message doesn't really say anything.
|
|
Redirectors like httpredir.debian.org orchestrate the download from
multiple (hopefully close) mirrors while having only a single central
sources.list entry, by using redirects. This has the effect that the
progress report always shows the source it started with, not the mirror
it ends up fetching from. That is especially problematic for error
reporting, as a "Hashsum mismatch" report for the redirector URI is next
to useless: nobody can tell from this output alone which URI the file
was really fetched from (regardless of whether the report comes from a
user or via the report script). You would need to enable debug output
and hope for the same situation to arise again…
We hence reuse the UsedMirror field of the mirror:// method and detect
redirects which change the site, declaring this new site as the
UsedMirror (and adapting the description).
The disadvantage is that there is no obvious mapping anymore (it is
relatively easy to guess, though, with some experience) from progress
lines to sources.list lines, so error messages need to take care to use
the Target description (rather than the current Item description) if
they want to refer to the sources.list entry.
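A sketch of the detection (simplified URI handling, not the actual apt
code):

    #include <string>

    // Extract "scheme://host" from a URI string.
    static std::string SiteOf(std::string const &uri) {
       std::string::size_type const scheme = uri.find("://");
       if (scheme == std::string::npos)
          return uri;
       return uri.substr(0, uri.find('/', scheme + 3));
    }

    // If a redirect moved us to a different site, remember it as the
    // mirror actually used so progress and error reports can show it.
    static void NoteRedirect(std::string const &from, std::string const &to,
                             std::string &usedMirror) {
       if (SiteOf(from) != SiteOf(to))
          usedMirror = SiteOf(to);
    }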
|
|
Various small leaks here and there. Nothing particularly big, but still
good to fix. Found by the sanitizers while running our testcases.
Reported-By: gcc -fsanitize
Git-Dch: Ignore
|
|
Doing this disables the implicit copy assignment operator (among
others), which would cause havoc if used on these classes, as it would
just copy the pointer, not the data the d-pointer points to. For most of
the classes we don't need a copy assignment operator anyway, and in many
classes it was broken before, as many contain a pointer of some sort.
Only for our Cacheset Container interfaces do we define an explicit copy
assignment operator, which could later be implemented to copy the data
from one d-pointer to the other if we need it.
Git-Dch: Ignore
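The pattern, in sketch form:

    // A d-pointer class: shallow copies would share or double-free the
    // data, so the compiler-generated copy operations are deleted.
    class Example {
       void *d;                                     // opaque d-pointer
    public:
       Example() : d(nullptr) {}
       ~Example() {}
       Example(Example const &) = delete;
       Example &operator=(Example const &) = delete;
    };

    // A Cacheset-style container keeps an explicit copy assignment which
    // could later deep-copy the data behind the d-pointer.
    class Container {
       void *d;
    public:
       Container() : d(nullptr) {}
       ~Container() {}
       Container(Container const &) = delete;
       Container &operator=(Container const &other); // may deep-copy *d later
    };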
|
|
Some of them modify the ABI, but given that we prepare a big one
already, these few hardly count for much.
Git-Dch: Ignore
|
|
All other methods call it, so they should follow along, even if the work
they do afterwards is hardly breathtaking and usually results in a
URIDone pretty soon. The acquire system tells the individual item about
this via a virtual method call, so even though none of our existing
items contains any critical code in these, maybe one day they might.
Consistency at least once…
This also has a good side effect: file: and cdrom: requests now appear
in the 'apt-get update' output. Finally – it never made sense to me to
hide them. Okay, I guess it made sense before the new hit behavior, but
now that you can actually see the difference in an update, it makes
sense to see if a file: repository changed or not as well.
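For a local method the sequence is short, but announcing the transfer
first is what makes the request visible; roughly (message layout per the
method interface, simplified stand-in types):

    #include <iostream>
    #include <string>

    struct FetchResult { std::string URI; unsigned long long Size = 0; };

    // Announce the transfer before completing it, so the item gets its
    // virtual start notification and the URI shows up in progress output.
    static void URIStart(FetchResult const &Res) {
       std::cout << "200 URI Start\nURI: " << Res.URI
                 << "\nSize: " << Res.Size << "\n\n";
    }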
|
|
This is an unlikely event for indexes and co, but it can happen quite
easily e.g. for changelogs, where you want to get the changelogs for
multiple binary package(version)s which happen to all be built from a
single source.
The interesting part is that the Acquire system actually detected this
already and set the item requesting the URI again to StatDone – except
that this is hardly sufficient: an Item must be Complete=true as well
to be considered truly done, and that is only the tip of the ::Done
handling iceberg. So instead of this StatDone hack we allow QItems to be
owned by multiple items and notify all owners about everything now, so
that from the point of view of each item, the download happened just
for it.
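In sketch form (simplified stand-ins for the acquire classes):

    #include <vector>

    struct Item {
       virtual void Done() {}       // stand-in for the real ::Done handling
       virtual ~Item() {}
    };

    // One queued URI can now be owned by several items (e.g. changelogs
    // of many binary packages built from one source); all owners are
    // notified about everything.
    struct QItem {
       std::vector<Item *> Owners;

       void NotifyDone() {
          for (Item * const owner : Owners)
             owner->Done();         // each item sees the download as its own
       }
    };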
|
|
Having every item carry its own code to verify the file(s) it handles
is an error-prone process and easy to break, especially if items move
through various stages (download, uncompress, patching, …). With a giant
rework we centralize (most of) the verification to get a better
enforcement rate and (hopefully) less chance for bugs, but it breaks the
ABI big time in exchange – and as we break it anyway, it is broken even
harder.
It shouldn't affect most frontends, as they don't deal with the acquire
system at all or implement their own items, but some do and will need to
be patched (might be an opportunity to use apt's on-board material).
The theory is simple: items implement methods to decide if hashes need
to be checked (in this stage) and to return the expected hashes for this
item (in this stage). The verification itself is done in the worker's
message passing, which has the benefit that a hashsum error is now a
proper error for the acquire system, rather than a Done() which is later
revised to a Failed().
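In sketch form (with a simplified stand-in for the hash list type):

    #include <string>
    #include <vector>

    using HashStringList = std::vector<std::string>;  // simplified stand-in

    struct Item {
       // Must hashes be checked in the current stage of this item?
       virtual bool HashesRequired() const { return true; }
       // The hashes expected for this item in its current stage.
       virtual HashStringList GetExpectedHashes() const = 0;
       virtual ~Item() {}
    };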
|
|
Not all servers we are talking to support If-Modified-Since, and some do
not even send Last-Modified to us, so in an effort to detect such hits
we run a hashsum check on the 'old' file compared to the 'new' file. We
get the hashes for the 'new' file for "free" from the methods anyway,
and hence just need to calculate the old ones.
This allows us to detect hits even with unsupported servers, which in
turn means we benefit from all the new hit behavior here as well.
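The logic, sketched with a toy checksum standing in for the
cryptographic hashes the acquire system actually uses:

    #include <fstream>
    #include <string>

    // FNV-1a over the file contents; stand-in only, the real code
    // compares the cryptographic hashes it already has for the new file.
    static unsigned long long ChecksumOf(std::string const &path) {
       std::ifstream in(path, std::ios::binary);
       unsigned long long hash = 1469598103934665603ull;
       for (char c; in.get(c);)
          hash = (hash ^ static_cast<unsigned char>(c)) * 1099511628211ull;
       return hash;
    }

    // Hit detection without If-Modified-Since: identical content means
    // the download was a hit, even if the server couldn't tell us so.
    static bool IsHit(std::string const &oldFile, std::string const &newFile) {
       return ChecksumOf(oldFile) == ChecksumOf(newFile);
    }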
|
|
It's a bit unpredictable which permissions and owners we will encounter
on a CD-ROM (or a USB stick, as apt-cdrom is responsible for those too),
so we have to ensure in this codepath as well that everything is nicely
set up without waiting for an 'apt-get update' to fix up the (potential)
mess.
|
|
Git-Dch: Ignore
|
|
The worker is the part closest to the methods; it calls the item methods
according to what it gets back from the methods. It is therefore a
better place to change permissions, as it is very central and can do it
at the point the item is assigned to a method, rather than when it is
queued for download (and, as before, when it is dequeued via
Done/Failure).
Git-Dch: Ignore
|
|
|
|
|
|
Using a different user for calling methods is intended to protect us
from methods running amok (via remotely exploited bugs) by limiting what
can be done by them. By using root:root for the final directories and
having only the files in partial writable by the methods, we enhance
this insofar as a method can't modify already-verified data in its
parent directory anymore.
As a side effect, this also clears up most of the problems you could
have if the final directories are shared without user-sharing, or if
these directories disappear, as they are now again root-owned: only the
partial directories contain _apt-owned files (usually none if apt isn't
running), and the directory itself is autocreated with the right
permissions.
|
|
This ensures that we can stop downloading if the server sends
too much data by accident (or in a malicious attempt).
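The check itself is tiny; a sketch of what a method can do while data
arrives (names are made up for the example):

    #include <cstdio>

    // Track received bytes and abort once they exceed the expected size
    // (0 meaning 'unknown, no limit').
    static bool ReceiveChunk(unsigned long long &received,
                             unsigned long long chunk,
                             unsigned long long maximumSize) {
       received += chunk;
       if (maximumSize != 0 && received > maximumSize) {
          std::fprintf(stderr, "E: response exceeds expected size\n");
          return false;             // stop the transfer right here
       }
       return true;
    }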
|
|
Now that we have all hashes in the acquire system, pass the info down to
the methods, so that they can use it in the request and/or to precheck
the response.
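The acquire message to a method would then carry the expectations
along, roughly like this (field names are an assumption based on the
description, not a verified transcript):

    600 URI Acquire
    URI: http://example.com/dists/stable/InRelease
    Filename: /var/lib/apt/lists/partial/example.com_dists_stable_InRelease
    Expected-SHA256: <hex digest of the expected file>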
|
|
It is not very extensible to have the supported hashes hardcoded
everywhere, especially if they are part of virtual method names.
It is also possible that a method does not support the 'best' hash
(yet), so we might end up not being able to verify a file even though we
have a common subset of supported hashes. And those are just two of the
cases in which it is handy to have a more dynamic selection.
The downside is that this is a MAJOR API break, but the HashStringList
has a string constructor for compatibility, so with a bit of luck the
few frontends playing with the acquire system directly are okay.
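A simplified model of the interface idea (not the real class):

    #include <string>
    #include <vector>

    // One hash as "TYPE:hexdigest".
    struct HashString {
       std::string Type, Value;
       explicit HashString(std::string const &combined) {
          std::string::size_type const colon = combined.find(':');
          Type = combined.substr(0, colon);
          Value = colon == std::string::npos ? "" : combined.substr(colon + 1);
       }
    };

    // All hashes known for one file; the string constructor keeps old
    // call sites which passed a single "SHA256:<hex>" string working.
    struct HashStringList {
       std::vector<HashString> list;
       HashStringList() {}
       HashStringList(std::string const &combined) { list.emplace_back(combined); }
       HashString const *find(std::string const &type) const {
          for (auto const &h : list)
             if (h.Type == type)
                return &h;
          return nullptr;
       }
    };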
|
|
Besides being a bit cleaner, it hopefully also resolves oddball problems
I have with high levels of parallel jobs.
Git-Dch: Ignore
Reported-By: iwyu (include-what-you-use)
|
|
|
|
Switching protocols at random is a bad idea, as e.g. http could switch
to file, so we limit the possibilities to http→http and http→https.
As very few people (less than 1% according to popcon) have the https
transport installed, this likely changes nothing in terms of failures.
The commit adds a friendly hint about which package needs to be
installed, though.
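The allow-list boils down to a check like this (a sketch, not the
literal apt code):

    #include <string>

    // A redirect may keep its protocol, or upgrade http to https;
    // everything else (e.g. http -> file) is refused.
    static bool RedirectAllowed(std::string const &from, std::string const &to) {
       return from == to || (from == "http" && to == "https");
    }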
|
|
(closes: #705648)
|
|
- handle redirections in the worker with the right method instead of
  in the method the redirection occurred in (Closes: #668111)
* methods/http.cc:
- forbid redirects to change protocol
|
|
- revert the use of FileFd::Write in OutFdReady, as we don't want error
  reports about EAGAIN here since we retry later. Thanks to YOSHINO
  Yoshihito for the report. (Closes: #671721)
|
|
- use Dump() to generate the configuration message for sending
|
|
|
|
- check return of write() as gcc recommends (pattern sketched after
  this list)
* apt-pkg/acquire.cc:
- check return of write() as gcc recommends
* apt-pkg/cdrom.cc:
- check return of chdir() and link() as gcc recommends
* apt-pkg/clean.cc:
- check return of chdir() as gcc recommends
* apt-pkg/contrib/netrc.cc:
- check return of asprintf() as gcc recommends
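The pattern behind these fixes, sketched for write() (a short write or
an error must not be dropped silently):

    #include <cerrno>
    #include <unistd.h>

    static bool WriteAll(int fd, char const *buf, size_t size) {
       while (size != 0) {
          ssize_t const written = write(fd, buf, size);
          if (written < 0 && errno == EINTR)
             continue;              // interrupted: just retry
          if (written <= 0)
             return false;          // real error: caller reports it
          buf += written;
          size -= static_cast<size_t>(written);
       }
       return true;
    }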
|
|
size are pretty unlikely for now, but we need it for deb
packages which could become bigger than 4GB now (LP: #815895)
|
|
|
|
- try even harder to support really big files in the fetcher by
converting (hopefully) everything to 'long long' (Closes: #632271)
|
|
- print filename in the unmatching size warning (Closes: #623137)
|
|
- when downloading data, show the mirror being used
|
|
- show error details of failed methods
* apt-pkg/contrib/fileutl.cc:
- if a process aborts with signal, show signal number
* methods/http.cc:
- ignore SIGPIPE, we deal with EPIPE from write in
HttpMethod::ServerDie() (LP: #385144)
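The combination in sketch form: ignore the signal and handle the errno
instead:

    #include <cerrno>
    #include <csignal>
    #include <cstdio>
    #include <unistd.h>

    int main() {
       std::signal(SIGPIPE, SIG_IGN);  // don't die on a broken pipe...
       char const msg[] = "data\n";
       if (write(STDOUT_FILENO, msg, sizeof(msg) - 1) < 0 && errno == EPIPE)
          std::fprintf(stderr, "peer went away\n"); // ...handle EPIPE instead
       return 0;
    }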
|
|
Jeff Licquia and Anthony Towns
|
|
- consider a ResolveError a transient-network problem
|