|
Sometimes index files are in different locations in a repository, as is
currently the case for Contents files, which are per-component in
Debian, but aren't in Ubuntu. This has historic reasons and may change
soon, but such transitions can always happen again in the future, so we
should prepare:
Introduced is a new field declaring that the current item should only be
downloaded if the mentioned item wasn't, allowing for transitions
without a flag day in clients and archives.
This isn't implemented 'simpler' with multiple MetaKeys per item as
items (could) change their description and perhaps also other
configuration bits along with their location.
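For illustration, a sketch of how such a fallback declaration could look
among apt's index targets, modeled on the Contents case (the Fallback-Of
field is the new bit; the other fields and target names are merely
illustrative of the existing configuration scheme):

  Acquire::IndexTargets::deb::Contents-deb {
     MetaKey "$(COMPONENT)/Contents-$(ARCHITECTURE)";
     ShortDescription "Contents-$(ARCHITECTURE)";
     Description "$(RELEASE)/$(COMPONENT) $(ARCHITECTURE) Contents";
  };
  // only fetched if the per-component target above wasn't found
  Acquire::IndexTargets::deb::Contents-deb-legacy {
     MetaKey "Contents-$(ARCHITECTURE)";
     ShortDescription "Contents-$(ARCHITECTURE)";
     Description "$(RELEASE) $(ARCHITECTURE) Contents";
     Fallback-Of "Contents-deb";
  };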
|
|
We don't have to initialize the Release files with a set of IndexTargets
to acquire; instead we can wait for the Release file to be acquired and
only then ask which IndexTargets to get.
Git-Dch: Ignore
|
|
Progress reporting used an "upper bound" on files we might get, except
that this bound wasn't correct once pdiffs entered the picture. So
instead of calculating a value which is perhaps incorrect, we just
accept that we can't tell how many files we are going to download and
stay at 0% until we know. Additionally, if we have pdiffs we wait until
we have got these (sub)index files, too.
That could all be done better by downloading all Release files first and
planning with them in hand accordingly, but one step at a time.
|
|
The code naturally evolved from a setup in which a TransactionManager
was optional to one in which it is required, which resulted in various
places doing unneeded checks, suggesting a more complicated setup than
is actually needed.
Git-Dch: Ignore
|
|
Commit 9b8034a9fd40b4d05075fda719e61f6eb4c45678 deals only with
InRelease properly and generates broken URIs in case the mirror (or the
archive, really) has no InRelease file.
[As this was in no released version, there is no need to clutter the
changelog with a fix notice.]
Git-Dch: Ignore
|
|
Daniel Kahn Gillmor highlights in the bugreport that security isn't
improved by having the user import additional keys – especially as
importing keys securely is hard.
The bugreport was initially about dropping the warning to a notice, but
given the previously mentioned observation and the fact that we weren't
printing a warning (or a notice) for expired or revoked keys providing a
signature, we drop it completely; the code to display a message if this
was the only key is in another path – and is considered critical.
Closes: 618445
|
|
Redirection services like httpredir.debian.org tend to use a set of
mirrors from which they pick a mirror at "random" for each requested
file, which is usually beneficial for the download of debs, but for the
index files this can quickly cause problems (aka hashsum mismatches) if
the two (or more) mirrors involved are only slightly out of sync.
This commit "resolves" the issue by using the mirror we ended up using
to get the (signed) Release file directly to get the index files
belonging to this Release file, instead of asking the redirection
service, which eliminates the risk of hitting out-of-sync mirrors.
As an obvious downside, the redirection service can't serve partial
mirrors anymore for indexes, and the download of indexes listed in the
same Release file can't be done in parallel (from different mirrors).
This does not affect the download of non-index files like deb files, as
out-of-sync mirrors aren't a huge problem there, so the parallel
download outweighs a potential 404 error (also because a 404 causes no
erroneous downloads, while hashsum mismatches download the entire file
before finding out that it was pointless).
The rationale for this is that indexes are relative to the Release file.
If we were talking about an HTML page including images, such behaviour
would be obvious and intended – not doing it means in the best case a
bunch of "useless" requests which will all be answered with a redirect.
|
|
They are the small brothers of the hashsum mismatch, so they deserve a
similar treatment, even though for architectural reasons we don't have
as much to display as for hashsum mismatches for now.
|
|
Users tend to report these errors with just this error message… not very
actionable, and it is hard to figure out if this is a temporary or
'permanent' mirror-sync issue or even the occasional apt bug.
Showing the involved hashsums and modification times should help in
triaging these kinds of bugs – and eventually we will have fewer of them
via by-hash.
The subheaders aren't marked for translation for now as they are
technical gibberish and probably easier to deal with if not translated.
Our iconic "Hash Sum mismatch" is translated, after all.
These additions were proposed in #817240 by Peter Palfrader.
|
|
Calling the (non-existent) reporter multiple times for the same error
with different codes (e.g. hashsum) is a bit strange. It also doesn't
need to be a public API. Ideally that would all look and behave slightly
differently, but we will worry about that at the time this is actually
(planned to be) used somewhere…
Git-Dch: Ignore
|
|
Tested via (newly) empty index files, but this also affects files
dropped from the repository or an otherwise changed repository config.
|
|
There is just no point in taking the time to acquire empty files –
especially as they will usually be tiny non-empty compressed files.
|
|
A silly off-by-one error in the stripping of the extension to check for
the uncompressed filename broke the check in an attempt to support all
compressions in commit a09f6eb8fc67cd2d836019f448f18580396185e5.
Fixing this also highlights mistakes in the handling of the Alt-Filename
in libapt which would cause apt to remove the file from the repository
(if root has the needed rights – aka the disk isn't readonly or
similar).
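A minimal sketch (not apt's actual code) of the suffix stripping
involved – the off-by-one class of bug fixed here amounts to miscounting
the extension length by one character:

  #include <string>

  // Return the filename with a known compression extension removed,
  // so the uncompressed name can be checked for leftovers.
  static std::string StripCompressionExtension(std::string const &file) {
     for (std::string const ext : { ".gz", ".bz2", ".xz", ".lzma" })
        if (file.size() > ext.size() &&
            file.compare(file.size() - ext.size(), ext.size(), ext) == 0)
           return file.substr(0, file.size() - ext.size());
     return file; // already uncompressed
  }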
|
|
With the previous commit we track the state of transactions, so we can
now use that knowledge to avoid processing data for a transaction which
was already closed (via an abort in this case).
This is needed as multiple independent processes are interacting here,
so there isn't a simple immediate full-engine stop, and it would also be
bad to teach each and every item how to check if its manager has failed
and what to do in that case.
In the pdiff case, which (potentially) deals with many items during its
lifetime, e.g. a hashsum mismatch in another file can abort the
transaction the file we try to patch belongs to. This causes some of the
items (which are already done) to be aborted with it, but items still in
the process of acquisition continue to be processed and will later try
to use all the items together, failing in strange ways as cleanup
already happened.
The chosen solution is to dry up the communication channels instead, by
ignoring new requests for data acquisition, canceling requests which are
not assigned to a queue, and not calling Done/Failed on items anymore.
This means that e.g. already started or pending (e.g. pipelined)
downloads aren't stopped and continue as normal for now, but they remain
in partial/ and aren't processed further, so the next update command
will pick them up and put them to good use, while the current process
fails updating (for this transaction group) in an orderly fashion.
Closes: 817240
Thanks: Barr Detwix & Vincent Lefevre for log files
|
|
We want to keep track of the state of a transaction overall to base
future decisions on it, but as a prerequisite we have to make sure that
a transaction isn't committed twice (which happened if the download of
InRelease failed and Release took over).
We also happened to create empty commits after a transaction was already
aborted, in cases in which the Release files were rejected.
This isn't affecting security at the moment, but to ensure it isn't
happening again and can never turn bad, a bunch of fatal error messages
are added to make regressions on this front visible.
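As an illustration of the enforced invariant, a sketch (not apt's actual
definitions) of the state tracking:

  // A transaction may leave the Started state at most once; trying to
  // commit or abort again indicates a logic error worth a fatal message.
  enum class TransactionState { Started, Committed, Aborted };

  static bool CanTransition(TransactionState const from, TransactionState const to) {
     switch (to) {
        case TransactionState::Started:   return false; // never re-enter
        case TransactionState::Committed: return from == TransactionState::Started;
        case TransactionState::Aborted:   return from == TransactionState::Started;
     }
     return false;
  }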
|
|
There is really no need to have the same code three times.
Git-Dch: Ignore
|
|
For the non-pdiff case, we can have accurate progress reporting because
after fetching the {,In}Release files we know how many index files will
be fetched and what size they have. Therefore initialize the filesize
early (in pkgAcqIndex::Init) and ensure that Acquire::Pulse() looks at
already downloaded bits when calculating the progress.
Also improve the debug output of Debug::acquire::progress.
|
|
The URI describing an item can change via mirrors/redirectors, which
causes the .diff/Index files to get the wrong names in storage.
Git-Dch: Ignore
|
|
Now that we ignore SHA1-only files it makes sense to also require the
provision of hashes for the compressed patches, as this was introduced
in the same dak patchset as support for non-SHA1 hashes in the file
itself, and adding support in other archive creators (if they support
pdiffs at all) will likely come in the same batch.
The reason for the change itself is simple: If you are 'scared' enough
about the security of SHA1, you shouldn't uncompress a file you haven't
verified at all – after all, it could be exploiting a bug or be a zip
bomb.
|
|
This effectively merges branch 'typofixes-vlajos-20150807' of github.com:vlajos/apt
with the following commit:
commit 13cacb3e2e2352ba701e769fc889e3344fabbf7e
Author: Veres Lajos <vlajos@gmail.com>
Date: Sun Aug 9 00:12:53 2015 +0100
typofix - https://github.com/vlajos/misspell_fixer
It has been rebased for a better commit message.
|
|
If a single pdiff fails, we have to fail the entire patching endeavour
and fall back to getting the complete file instead. That is easy with
server-side merged pdiffs as we get them one by one. For client-side
merging we get them all at once though, which means that a failure in
one has to stop the entire pipeline – which works as expected (as proven
by the bug reporters, who don't even notice it happening). The problem
is just that the first failing pdiff will do the cleanup, so another
pdiff which happens to be successfully acquired after we processed the
failure doesn't find the file it is supposed to use as a base anymore,
so the patch is renamed to what should be the unique extension and moved
into the current working directory. Processing is then stopped, as the
patch realizes that it isn't the last one which completed downloading.
On the plus side this means it is neither us using a bad temporary
location nor a security problem. It "just" unconditionally overwrites
files in your current working directory (if you happen to have them
named like a pdiff patch – a bit unlikely, perhaps) and so drops files
there which are never used again.
I guess this was introduced for real in
4e3c5633b1e74b4f58b95f339cfbbf4cbf21ab3e as I made the need for the
existence of the base file rather explicit, but the potential had
lingered in the code for far longer.
Closes: #816837
|
|
Changelogs are relatively small and we have no hashes for them, but we
had partial support for them before, so let's stick to it.
This also deletes the (partial) file before moving the downloaded file
into its place – rename(2) should be doing this by itself, but testing
on semaphoreci suggests that this isn't always the case (the error being
"Stale file handle"), and we don't need an atomic replace here, so be
explicit.
Git-Dch: Ignore
|
|
Reported-By: Helmut Grohne on IRC
|
|
pkgAcqChangelog has the default behaviour of downloading a changelog to
a temporary directory (inside /tmp, not /tmp directly) which is cleaned
up on shutdown, but this can be overridden to store the changelog more
permanently – and that carries a permission problem.
For changelogs we can 'easily' solve this by always downloading to a
temporary directory and only moving the file out of there once we are
done.
|
|
If pkgAcqChangelog is told to acquire the changelog for a version, it
will first check if this version is installed on the disk and, if so,
will use the local changelog in /usr/share/doc (possibly/likely gz
compressed) instead of downloading the file from the web.
An option is provided to disable this; it is enabled by default for the
Ubuntu vendor, as they truncate the local changelogs – and for apt's
--print-uris action.
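The knob involved would be set roughly like this in apt.conf (option
name as assumed here; "true" is what a vendor with truncated local
changelogs would ship):

  // never use /usr/share/doc, always download the full changelog
  Acquire::Changelogs::AlwaysOnline "true";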
|
|
The code already deals with compressed leftovers, but forgot the
uncompressed files. The opportunity is taken to reorder this code and
add debug messages about the actions taken, as well as to produce such a
leftover file in the associated testcase.
|
|
With the addition of the $HASH-Download field in the .diff/Index we got
the size of the compressed patches for 'free', so if that information is
available we can use it for a more fitting calculation of the size
requirements of the patches vs. the complete file.
Note that this predicts too small a size in the transition case in which
the information isn't available for all patches, but figuring this out
would be a lot of code for practically nothing, as only one update can
ever be in such a transition phase.
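For reference, a schematic .diff/Index excerpt with the download field
(placeholders instead of real hashes and sizes):

  SHA256-Current: <hash-of-current-Packages> <size>
  SHA256-History:
   <hash-of-older-Packages> <size> 2016-03-01-1410.06
  SHA256-Patches:
   <hash-of-uncompressed-patch> <size> 2016-03-01-1410.06
  SHA256-Download:
   <hash-of-compressed-patch> <compressed-size> 2016-03-01-1410.06.gz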
|
|
Downloading and storing are two different operations where different
compression types can be preferred. For downloading we provide the
choice via Acquire::CompressionTypes::Order, as there is a choice to be
made between download size and speed – limited by what's available in
the repository.
Storage, on the other hand, has all compressions currently supported by
apt available, and to reduce the runtime of tools accessing these files
the compression type should be a low-cost format in terms of
decompression.
apt traditionally stores its indexes uncompressed on disk, but has
options to keep them compressed. Now that apt downloads additional files
we also deal with files which simply can't be stored uncompressed as
they are just too big (like Contents for apt-file). Traditionally they
are downloaded in a low-cost format (gz) as repositories do not provide
other formats, but there might be even lower-cost formats, and for
download we could introduce higher-cost ones in the repositories.
Downloading an entire index potentially requires recompression to
another format, so an update potentially takes longer – but big files
are usually updated via pdiffs, which have to de- and re-compress anyhow
and do it on the fly, so there is no extra time needed, and in general
it seems to be beneficial to invest the time at update to save time
later on file access.
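In apt.conf terms the two independent choices look like this (values are
only examples):

  // what to prefer on the wire: a trade-off of download size vs. speed
  Acquire::CompressionTypes::Order { "xz"; "gz"; };
  // keep lists/ compressed on disk in a low-cost format
  Acquire::GzipIndexes "true";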
|
|
There is no reason to enforce that the file we start the bootstrap with
is compressed with a compressor which is available online. This allows
us to change the on-disk format as well as to deal with repositories
adding/removing support for a specific compressor.
|
|
If we store files compressed in lists/ and a file switches compression
formats we happened to retain the "old" format, but by default the
cleanup process caught this oversight and removed the file.
[The initial situation described doesn't arise, as by default we store
no files compressed, and even with apt-file configuring Contents files
we don't really have that problem as there are just .gz files for
those.]
We solve this by removing the uncompressed file as well as any
compressed variant (that we support) just before we move the 'new'
version of the file in.
|
|
Adding a new compressor meant adding a new method as well – even if that
boiled down to just linking to our generalized decompressor under a new
name. That is unneeded busywork if we can instead just call the
generalized decompressor and let it figure out which compressor to use
based on the filenames rather than on the program name.
For compatibility we still ship 'gzip', 'bzip2' and co, but they are
just links to our "new" 'store' method.
|
|
Debian has a Packages file for arch:all already, but the arch:any files
contain arch:all packages as well, so downloading it would be a total
waste of resources. Getting this solved is on the list of things to do,
but it is also the hardest part – for index targets like Contents the
situation is much easier and fewer server/client implementations are
involved, so we might not want to stall them.
A repository can now declare via:
No-Support-for-Architecture-all: Packages
that even if an arch:all Packages file exists it shouldn't be
downloaded, so that support for Contents files can be added now.
See also 1dd20368486820efb6ef4476ad739e967174bec4 for the implementation
of downloading arch:all index targets, which this is limiting.
The field uses the name of the target from the apt configuration for
simplicity, and it is negative by design, as this field is intended to
be supported/needed only for a "short" time (one or two Debian
releases). While this commit theoretically supports any target, it's
expected to only ever see "Packages" as a value in reality.
|
|
If we can't work with the hashes we parsed from the Release file, we now
display an error message if the Release file includes only weak hashes,
instead of downloading the indexes and failing to verify them with "Hash
Sum mismatch" even though the hashes didn't mismatch (they were just
weak).
If for some (unlikely) reason we have got weak hashes only for
individual targets, we will show a warning to this effect (again, before
downloading and failing on the index itself).
Closes: 806459
|
|
Reversing the parsing order ensures that we parse weaker hashes (like
SHA1) before we touch newer/stronger hashes (like SHA256), as the weaker
ones will usually have been there for a longer time with data already
present, which we would discard if we started with the strong one first.
The discarding is visible in the debug logs:
File X wasn't in the list for the first parsed hash! (history)
File X wasn't in the list for the first parsed hash! (patches)
which, if file X is part of the patch path, means apt will not find a
path and needlessly fall back to acquiring the whole file instead.
If file X isn't part of the patch path that is no problem, so this
affects only update calls applying patches from both before and after
the addition of a new hash.
|
|
This should make it more obvious that CHANGEPATH is a placeholder, which
apt will replace with a package-specific path, rather than a string
constant.
Mail-Reference: <87d1upgvaf.fsf@deep-thought.43-1.org>
Mail-Archive: https://lists.debian.org/debian-dak/2015/12/msg00005.html
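As an example, a changelog URI template as it might be configured (the
@-decorated spelling being the point of this change) – for src:apt in
main, apt would replace @CHANGEPATH@ with something like
main/a/apt/apt_1.2:

  Acquire::Changelogs::URI::Origin::Debian
     "https://metadata.ftp-master.debian.org/changelogs/@CHANGEPATH@_changelog";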
|
|
Git-Dch: Ignore
|
|
Reference mail:
https://lists.debian.org/debian-l10n-english/2015/11/msg00006.html
|
|
In 8d041b4f we made apt figure out, based on the last Release file it
has, whether it should request a file or not, given that the hashes
changed or not. So if we have a last Release file and do a request, do
not send a Last-Modified header, as we expect a change so much that a
non-change would indeed be an error. The Last-Modified header is
therefore at best ignored by the server, so sending it is just wasted
effort. In the worst case – as time is a fragile thing – the server
decides against sending us an update, with the idea that we already have
the latest content, which we know for a fact we haven't. Given that we
send less information to the server, our request is on its own also less
identifiable as coming from a returning or a new user.
The disadvantage is that if we end up getting an old index file after
getting a new Release file from another mirror, the old mirror will not
be able to tell us 'Hit', but instead sends us the complete file, which
we discard; both, however, let us end up in the same error class in the
end, so the difference isn't big in practice.
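A sketch of the resulting request logic (illustrative names, not apt's
actual code):

  // Only make the request conditional if we have no reason to expect a
  // change; when the new Release file announced different hashes, our
  // copy is stale by definition and the header could only hurt.
  if (ExpectingChange == false && LastModified > 0)
     Request << "If-Modified-Since: " << TimeRFC1123(LastModified) << "\r\n";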
|
|
This 'ignores' the component Release files you can find in Debian
alongside the binary-* directories, which isn't exactly a common use
case, but it worked before, so let's support it again, as this isn't
worse than a valid Release file which is unsigned.
Git-Dch: Ignore
|
|
Reported-By: cppcheck
Git-Dch: Ignore
|
|
Unlinking /dev/null is bad, we shouldn't do that. Also, we should print
at least a warning if we tried to unlink a file but didn't manage to
pull it off (ignoring the cases where the file is /dev/null or doesn't
exist in the first place).
This got triggered by a relatively unlikely-to-cause-problems case in
pkgAcquire::Worker::PrepareFiles, which, while handling temporary
uncompressed files (which are set to be kept compressed), would figure
out that two files are the same and prepare for sharing by deleting
them. Bad move. That also shows why not printing a warning is a bad
idea, as this hid the error in non-root test runs.
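A minimal sketch of such a forgiving unlink wrapper (apt's actual helper
differs in details):

  #include <cerrno>
  #include <cstdio>
  #include <cstring>
  #include <string>
  #include <unistd.h>

  // Remove a file, staying silent only in the two acceptable cases:
  // the file is /dev/null or it doesn't exist in the first place.
  static bool RemoveFileSketch(char const * const Caller, std::string const &File) {
     if (File.empty() || File == "/dev/null")
        return true;
     if (unlink(File.c_str()) != 0 && errno != ENOENT) {
        std::fprintf(stderr, "W: %s: unlinking %s failed: %s\n",
                     Caller, File.c_str(), std::strerror(errno));
        return false;
     }
     return true;
  }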
Git-Dch: Ignore
|
|
Based on a discussion with Niels Thykier, who asked for Contents-all,
this implements apt trying, for all architecture-dependent files, to
also get a file for the architecture 'all', which is treated internally
now as an official architecture that is always around (like native).
This way arch:all data can be shared instead of duplicated for each
architecture, requiring the user to download the same information again
and again.
There is one problem however: In Debian there is already a binary-all/
Packages file, but the binary-any files still include arch:all packages,
so downloading this file now would be a waste of time, bandwidth and
diskspace. We therefore need a way to decide if it makes sense to
download the all file for Packages in Debian or not. The obvious answer
would be a special flag in the Release file indicating this, which would
need to default to 'no', and every reasonable repository would override
it to 'yes' in a few years' time, but the flag would be there "forever".
Looking closer at a Release file we see the field "Architectures", which
doesn't include 'all' at the moment. With the idea outlined above that
'all' is a "proper" architecture now, we interpret this field as being
authoritative in declaring which architectures are supported by this
repository. If it says 'all', apt will try to get all; if not, it will
be skipped. This gives us another interesting feature: If I configure a
source to download armel and mips, but it declares it supports only
armel, apt will now print a notice saying as much. Previously this was a
very cryptic failure. If on the other hand the repository supports mips,
too, but for some reason doesn't ship mips packages at the moment, this
'missing' file is silently ignored (= the same as the repository
including an empty file).
The Architectures field isn't mandatory, though: if it is absent, we
assume that every architecture is supported by this repository; if it is
present, the arch:all files are skipped when 'all' isn't listed in it.
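For example (schematic), with a source configured for an architecture
its Release file doesn't declare:

  deb [arch=armel,mips] http://deb.example.org/debian stable main
  # Release file says: "Architectures: armel"
  # => apt now prints a notice that mips isn't supported by this source
  #    instead of failing cryptically on a missing Packages file.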
|
|
Removals in the acquire process can be pretty important, so a failure
shouldn't be silently ignored; we wrap our unlink call in a slightly
more forgiving wrapper which checks things.
Git-Dch: Ignore
|
|
Git-Dch: Ignore
|
|
The manpage is also slightly updated to work better as a central hub,
nudging people from all angles in the right directions without writing a
book disguised as an error message.
|
|
Insecure (aka unsigned) repositories are bad, period. We want to get rid
of them finally, and as a first step we are printing scary warnings.
This is already done; this commit just changes the messages to be more
consistent and prevents them from being displayed if authenticity is
guaranteed some other way (as indicated with trusted=yes).
The idea is to first print the pure fact like "repository isn't signed"
as a warning (and later as an error), while giving an explanation in an
immediately following notice (which is displayed only at quiet level 0,
so in interactive use, not in scripts and alike).
Closes: 796549
|
|
Commit 653ef26c70dc9c0e2cbfdd4e79117876bb63e87d broke the camel's back
insofar as everything works in terms of our internal use of copy:/, but
external use is completely destroyed. This is kinda the reverse of what
happened in "parallel" in the sid branch, where external use was mostly
fine while internal use exploded on the GzipIndexes option.
We fix this now by rewriting our internal use, letting copy:/ do only
what the name suggests it does: copy files, and not uncompress them
on the fly. Then we teach copy and the uncompressors how to deal with
/dev/null and use it as the destination file in case we don't want to
store the uncompressed files on disk.
Closes: 799158
|
|
We use a small trick to implement the fallback: We make it so that
by-hash is a special compression algorithm, and apt already knows how to
deal with fallback between compression algorithms.
The drawback of implementing the fallback this way is that a) we are
guessing again, and, more importantly, b) by-hash is only tried for the
first compression algorithm we want to acquire, not for all as before –
but flipping between by-hash and well-known for each compression
algorithm doesn't seem really worth it, as it seems unlikely that there
will actually be mirrors which only mirror a subset of compressed files
but have by-hash enabled.
The user experience is the usual fallback one: You see "Ign" lines in
the apt update output. The fallback is implemented as a transition
feature, so a (potentially huge) mirror network doesn't need a flag day.
It is not meant as a "someday we might" or "we don't, but some of our
mirrors might" option – we want to cut down on the "Ign" line front so
that those lines become meaningful; if we wanted to spam everyone with
them, we could enable by-hash by default for all repositories…
sources.list and config options are better suited for this.
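Schematically, the fallback order for e.g. a Packages file then is
(layout per the by-hash scheme, hash as a placeholder):

  dists/unstable/main/binary-amd64/by-hash/SHA256/<sha256-of-Packages.xz>  (tried first)
  dists/unstable/main/binary-amd64/Packages.xz  (fallback, "Ign" for the above)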
Closes: 798919
|
|
This changes the semantics of the option (which is renamed, too) to a
yes/no value with the special additional value "force", as this allows
by-hash to be disabled even if the repository indicates it would be
supported, and it is more in line with our other yes/no options like
pdiff, which disable themselves if no support can be detected.
The feature wasn't documented so far and hasn't reached an (un)stable
release yet, so changing it without trying too hard to keep
compatibility seems okay.
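In sources.list terms (option spelling as assumed here):

  # force by-hash even if the Release file doesn't announce support
  deb [by-hash=force] http://deb.example.org/debian unstable main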
|
|
Our error reporting has historically grown into some kind of mess.
A while ago I implemented stacking for the global error, which is used
in this commit to wrap calls to functions which do not report (all)
errors via their return value, so that only failures in those calls
propagate down the chain, rather than failing because anything
(potentially totally unrelated) has failed at some point in the past.
This way we can avoid stopping the entire acquire process just because a
single source produced an error, for example. It also means that after
the acquire process the cache is generated – even if the acquire process
had failures – as we still have the old good data around, which we can
and should generate a cache from (again).
There are probably more instances of this hiding, but all these looked
like the easiest to work with and fix, with reasonable (aka
net-positive) effects.
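The pattern in use looks roughly like this (PushToStack/MergeWithStack
are the existing GlobalError stacking calls; the surrounding names are
illustrative):

  // Isolate errors produced by this one source: pre-existing global
  // errors can't leak in, and only a failure here marks it as failed.
  _error->PushToStack();
  bool const Failed = (UpdateSingleSource(Source) == false || _error->PendingError());
  _error->MergeWithStack(); // keep the messages for later display
  if (Failed)
     MarkSourceAsFailed(Source); // but continue with the other sources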
|