|
A keyring file can include multiple keys, so it's only fair for
transitions and such to support multiple fingerprints as well.
|
|
We parse the messages we receive into two big categories: most of the
messages have a keyid as well as a userid, and as they are errors we want
to show the userids as well. The other category is also errors, but has
no userid (like NO_PUBKEY). Explicitly expressing this in code should
make it a bit easier to look at, and it also helps in dropping additional
fields or just the newline at the end consistently.
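A minimal sketch of the split, with hypothetical names (the status
tags come from gpg's status-fd protocol; apt's actual parser differs):

    #include <string>

    struct Signer { std::string keyid, userid; };

    // GOODSIG/BADSIG/EXPSIG/EXPKEYSIG/REVKEYSIG carry "<keyid> <userid>";
    // NO_PUBKEY and friends carry only "<keyid>".
    static bool TagCarriesUserID(std::string const &tag) {
       return tag != "NO_PUBKEY";
    }

    static Signer ParseSigner(std::string const &rest) {
       // rest is everything after "[GNUPG:] <TAG> "
       Signer s;
       auto const space = rest.find(' ');
       s.keyid = rest.substr(0, space);
       if (space != std::string::npos)
          s.userid = rest.substr(space + 1); // a userid may contain spaces
       return s;
    }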
Git-Dch: Ignore
|
|
Daniel Kahn Gillmor highlights in the bug report that security isn't
improved by having the user import additional keys – especially as
importing keys securely is hard.
The bug report initially asked for the warning to be downgraded to a
notice, but given the observation above and the fact that we weren't
printing a warning (or a notice) for expired or revoked keys providing
a signature, we drop it completely; the code to display a message if
this was the only key is in another path – and is considered critical.
Closes: 618445
|
|
Signatures on data can have an expiration date, too, which we hadn't
previously handled explicitly (no real problem – gpg still exits with a
non-zero code, so apt notices the invalid signature), but the error
message wasn't as helpful as it could be: it didn't mention the key
that made the signature.
|
|
The upstream documentation says about KEYEXPIRED:
"This status line is not very useful". Indeed, it doesn't mention which
key is expired, and suggests using the other message, which does.
|
|
This basically introduces ~33 flags in the output, but a package can
have only ~11 of them displayed at the same time. There is quite a bit
of duplication as well (an uninstalled package is by definition a
newinstall if it's getting installed), but as this is debug output we are
better off showing them all in case one of them isn't set the way it is
supposed to be.
Git-Dch: Ignore
|
|
Redesign of multivalue options in 463c8d801595ce5ac94d7c032264820be7434232
caused the parser to look for <multivalue>{Add,Remove} (no hyphen)
instead of the expected <multivalue>-{Add,Remove}.
|
|
Using Pkg.CandVersion() here is wrong, as its implementation returns
a candidate based only on the default policy settings, ignoring user
preferences and otherwise set candidates (aka: it sidesteps the
pkgDepCache).
This causes M-A:same libraries to be detected as screwed even though
they aren't, so that they end up being kept back.
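A standalone sketch of the safer pattern, assuming libapt-pkg dev
headers are available (the package name "apt" is just an example):

    #include <apt-pkg/cachefile.h>
    #include <apt-pkg/configuration.h>
    #include <apt-pkg/depcache.h>
    #include <apt-pkg/init.h>
    #include <apt-pkg/pkgsystem.h>
    #include <iostream>

    int main() {
       pkgInitConfig(*_config);
       pkgInitSystem(*_config, _system);
       pkgCacheFile CacheFile;
       pkgDepCache * const DCache = CacheFile.GetDepCache();
       if (DCache == nullptr)
          return 1;
       pkgCache::PkgIterator const Pkg = DCache->GetCache().FindPkg("apt");
       if (Pkg.end() == true)
          return 1;
       // wrong: Pkg.CandVersion() consults only the default policy
       // right: the candidate as the depcache (and thus the user) sees it
       pkgCache::VerIterator const Cand = (*DCache)[Pkg].CandidateVerIter(*DCache);
       if (Cand.end() == false)
          std::cout << Pkg.FullName(true) << " -> " << Cand.VerStr() << '\n';
       return 0;
    }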
Reported-By: Felipe Sateler on IRC
|
|
Broken in the previous commit (69cea1ef2cfda3c4da79fd756a8edaf2be26998e).
Adding a test and a comment to avoid future embarrassment.
Git-Dch: Ignore
Reported-By: Julian Andres Klode on IRC
|
|
Redirection services like httpredir.debian.org tend to use a set of
mirrors from which they pick a mirror at "random" for each requested
file, which is usually beneficial for the download of debs, but for the
index files this can quickly cause problems (aka hashsum mismatches) if
the two (or more) mirrors involved are only slightly out-of-sync.
This commit "resolves" this issue by using the mirror we ended up using
to get the (signed) Release file directly to get the index files
belonging to this Release file, instead of asking the redirection
service, which eliminates the risk of hitting out-of-sync mirrors.
As an obvious downside the redirection service can't serve partial
mirrors anymore for indexes, and the download of indexes listed in the
same Release file can't be done in parallel (from different mirrors).
This does not affect the download of non-index files like deb files, as
out-of-sync mirrors aren't a huge problem there, so the parallel
download outweighs a potential 404 error (also because a 404 causes no
erroneous downloads, while a hashsum mismatch downloads the entire file
before finding out that it was pointless).
The rationale for this is that indexes are relative to the Release file.
If we were talking about an HTML page including images, such a
behaviour would be obvious and intended – not doing it means in the best
case a bunch of "useless" requests which will all be answered with a
redirect.
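A hypothetical illustration of the idea (not apt's actual code):
rebase index URIs onto the URI the Release file was really fetched
from.

    #include <string>

    // UsedReleaseURI e.g. "http://mirror.example.org/debian/dists/sid/Release"
    // IndexName e.g. "main/binary-amd64/Packages.xz" (relative to Release)
    std::string IndexURI(std::string const &UsedReleaseURI,
                         std::string const &IndexName) {
       std::string const Base =
          UsedReleaseURI.substr(0, UsedReleaseURI.rfind('/') + 1);
       return Base + IndexName;
    }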
|
|
They are the small brothers of the hashsum mismatch, so they deserve
similar treatment, even though for architectural reasons we don't have
as much to display as for hashsum mismatches for now.
|
|
Users tend to report these errors with just this error message… not very
actionable, and it's hard to figure out if this is a temporary or
'permanent' mirror-sync issue or even the occasional apt bug.
Showing the involved hashsums and modification times should help in
triaging these kinds of bugs – and eventually we will have fewer of them
via by-hash.
The subheaders aren't marked for translation for now, as they are
technical gibberish and probably easier to deal with if not translated.
After all, our iconic "Hash Sum mismatch" is translated, at least.
These additions were proposed in #817240 by Peter Palfrader.
|
|
This is a duplicate of sorts of 0efb29eb36184bbe6de7b1013d1898796d94b171,
which covers the far more frequent case of this error – and is also a
duplicate of this error message, just without the \n at the end.
Git-Dch: Ignore
|
|
We get into this situation in cases where parts of the transaction are
refused (e.g. due to a hashsum mismatch) and we rerun the update (e.g. in
the hope that we get a mirror which is synced this time).
Previously we would ask the server with an If-Range and in the best case
receive a 416 in response (less featureful servers might end up giving us
the entire file again, or we get the wrong file this time, giving us a
hashsum mismatch…), which is a waste of time if we already know by
checking the hashsums that we got the complete and correct file.
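A sketch of the check, assuming libapt's hash classes (the helper
name is made up):

    #include <apt-pkg/fileutl.h>
    #include <apt-pkg/hashes.h>
    #include <string>

    // true if we still have to ask the server for the missing range
    bool NeedRangeRequest(std::string const &PartialFile,
                          HashStringList const &Expected) {
       Hashes LocalHashes(Expected);
       FileFd Fd(PartialFile, FileFd::ReadOnly);
       if (Fd.IsOpen() == false || LocalHashes.AddFD(Fd) == false)
          return true;
       // already complete and correct: no If-Range request, no 416
       return LocalHashes.GetHashStringList() != Expected;
    }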
|
|
Queues feeding workers like rred are created in a random pattern to get
a few of them to run in parallel – but if we already have an idling queue
we don't need to assign the item to a (potentially new) random queue:
reusing the idle queue saves us the (arguably small) overhead of starting
up a new queue, avoids adding jobs to an already busy queue while others
idle, and as a bonus reduces the size of debug logs a bit.
We also keep starting new queues now until we reach our limit before
we assign work at random to them, which should give us a more effective
utilisation overall compared to potentially adding work to busy queues
while we haven't reached our queue limit yet.
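A hedged sketch of the selection strategy (hypothetical types, not
apt's actual queue code):

    #include <cstdlib>
    #include <vector>

    struct Queue { bool Idle = true; /* worker, items, ... */ };

    Queue *SelectQueue(std::vector<Queue *> &Queues, std::size_t const Limit) {
       for (Queue * const Q : Queues)    // 1. reuse an idling queue
          if (Q->Idle)
             return Q;
       if (Queues.size() < Limit) {      // 2. start new queues up to the limit
          Queues.push_back(new Queue());
          return Queues.back();
       }
       // 3. at the limit and all busy: fall back to a random pick
       return Queues[std::rand() % Queues.size()];
    }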
|
|
Tested via (newly) empty index files, but this also affects files dropped
from the repository or an otherwise changed repository config.
|
|
There is just no point in taking the time to acquire empty files –
especially as they will usually be tiny non-empty compressed files.
|
|
With the previous fix for 'file' applied we can again hit repositories
which contain uncompressed empty files, which since the introduction of
the central store: method wasn't accounted for anymore, as we forbid
empty compressed files.
|
|
A silly off-by-one error in the stripping of the extension to check for
the uncompressed filename was introduced in an attempt to support all
compressions in commit a09f6eb8fc67cd2d836019f448f18580396185e5.
Fixing this also highlights mistakes in the handling of the Alt-Filename
in libapt which would cause apt to remove the file from the repository
(if root has the needed rights – aka the disk isn't read-only or similar).
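Illustrative sketch of the off-by-one (hypothetical helper, not the
actual change): the '.' before the extension has to go as well.

    #include <string>

    std::string StripExtension(std::string const &File, std::string const &Ext) {
       if (File.size() < Ext.size() + 1)
          return File;
       // off-by-one: File.size() - Ext.size() would leave the dot behind
       return File.substr(0, File.size() - (Ext.size() + 1));
    }
    // StripExtension("Packages.gz", "gz") == "Packages"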
|
|
With the previous commit we track the state of transactions, so we can
now use our knowledge to avoid processing data for a transaction which
was already closed (via an abort in this case).
This is needed as multiple independent processes are interacting here,
so there isn't a simple immediate full-engine stop, and it would also be
bad to teach each and every item how to check if its manager has a
failed subordinate and what to do in that case.
In the pdiff case, which (potentially) deals with many items during its
lifetime, e.g. a hashsum mismatch in another file can abort the
transaction that the file we are trying to patch via pdiff belongs to.
This causes some of the items (which are already done) to be aborted
with it, but items still in the process of acquisition continue to be
processed and will later try to use all the items together, failing in
strange ways as cleanup has already happened.
The chosen solution is to dry up the communication channels instead, by
ignoring new requests for data acquisition, canceling requests which are
not assigned to a queue, and not calling Done/Failed on items anymore.
This means that e.g. already started or pending (e.g. pipelined)
downloads aren't stopped and continue as normal for now, but they remain
in partial/ and aren't processed further, so the next update command will
pick them up and put them to good use while the current process fails
updating (for this transaction group) in an orderly fashion.
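A hedged sketch of the "drying up" guard (names are made up; the real
state lives in apt's transaction manager):

    enum class TransactionState { Started, Committed, Aborted };

    struct Transaction { TransactionState State = TransactionState::Started; };

    // refuse new acquire requests for an already closed transaction
    bool Enqueue(Transaction const &Trans /*, Item &Itm */) {
       if (Trans.State != TransactionState::Started)
          return false; // dried up: no new work, no Done/Failed callbacks
       /* ... assign the item to a queue as usual ... */
       return true;
    }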
Closes: 817240
Thanks: Barr Detwix & Vincent Lefevre for log files
|
|
Introduces APT::Hashes::<NAME> with entries Untrusted and Weak
which can be set to true to cause the hash to be treated as
untrusted and/or weak.
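A sketch of how such options could be queried via libapt's
configuration API (helper names are made up; the option names are the
ones introduced here):

    #include <apt-pkg/configuration.h>
    #include <string>

    // e.g. APT::Hashes::SHA1::Weak "true"; in apt.conf
    bool IsWeakHash(std::string const &Name) {
       return _config->FindB(("APT::Hashes::" + Name + "::Weak").c_str(), false);
    }
    bool IsUntrustedHash(std::string const &Name) {
       return _config->FindB(("APT::Hashes::" + Name + "::Untrusted").c_str(), false);
    }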
|
|
Use msgtest and testsuccess with a function instead of failing
with a simple exit 1. This looks nicer.
Gbp-Dch: ignore
|
|
This gets rid of byte-range requests and 416 responses.
Gbp-Dch: ignore
|
|
This should make the test less flaky, as with a small file we might
have already received all the data before trying to apply rate limits,
which is a constant source of failure on the i386 Ubuntu autopkgtest.
|
|
If the package is marked for removal, keep it marked for removal and
do not mark it for keep. If we mark it for keep, we somehow later get
to a different stage where it is marked for unpack instead of removal.
In the example in the bug report, we would get a:
SmartUnPack maas-region-controller-min:amd64 (replace version 2.0.0~alpha3+bzr4810-0ubuntu1 with Segmentation fault
maas-region-controller-min:amd64 was marked for removal, but we changed
it to keep, and somehow it thinks that this is to be replaced now
instead of removed (probably because InstallVer != CandidateVer
[with InstallVer = 0]).
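A hypothetical helper mirroring the intent of the fix (not its actual
code):

    #include <apt-pkg/depcache.h>

    // never flip a package that is marked for removal back to keep
    static void KeepUnlessDelete(pkgDepCache &Cache,
                                 pkgCache::PkgIterator const &Pkg) {
       if (Cache[Pkg].Delete() == true)
          return; // preserve the removal mark
       Cache.MarkKeep(Pkg, false, false);
    }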
This fixes a regression introduced in release 1.2.7, commit:
0390edd5452b081f8efcf412f96d535a1d959457
Reported-by: LaMont Jones on IRC
LP: #1562402
|
|
Our own gpgv method can declare a digest algorithm as untrusted and
handles such signatures as worthless. If gpgv comes with built-in
knowledge that an algorithm is untrusted (which is called weak in
official terminology), as it e.g. does for MD5 in recent versions, we
should handle it in the same way.
To check this we use the most uncommon still fully trusted hash as a
configurable one, via a hidden config option to toggle through all
three states a hash can be in.
|
|
Using erase(pos) is invalid in our case here, as pos must be a valid and
dereferenceable iterator, which isn't the case for an end-iterator (like
if we had no good signature).
The problem runs deeper still though, as VALIDSIG carries a full
fingerprint while GOODSIG carries just a long keyid, so comparing them
will always fail.
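A minimal illustration of the erase(pos) pitfall in plain C++ (not
apt's code):

    #include <algorithm>
    #include <string>
    #include <vector>

    void RemoveSig(std::vector<std::string> &GoodSigs, std::string const &KeyID) {
       auto const pos = std::find(GoodSigs.begin(), GoodSigs.end(), KeyID);
       // erase() needs a dereferenceable iterator; end() (no match) is UB
       if (pos != GoodSigs.end())
          GoodSigs.erase(pos);
    }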
Closes: 818910
|
|
In Launchpad bug #1558484 a user reports that @ in the authentication
tokens of sources.list isn't parsed correctly in an older (precise)
version. It isn't the recommended way of specifying passwords and co
(auth.conf is), but we can at least test for regressions (and in this
case test at all… who was that "clever" boy disabling a test with
exit……… oh, never mind.)
Git-Dch: Ignore
|
|
Otherwise, things will just start failing later down the stack,
because (a) the lazy getters do not check if building was successful
and (b) any further getter call would return the invalid object
anyway.
Also initialize VS in pkgCache to nullptr by default.
Closes: #818628
|
|
There is no point in resolving all addresses to their names; this
just seriously slows the setup phase down. So just pass -n to not
resolve names anymore.
Gbp-Dch: ignore
|
|
This should make the test less flaky and hopefully fix the failure
on Ubuntu's armhf CI nodes.
Gbp-Dch: ignore
|
|
The test is a bit flaky. In order to make it less flaky, reduce
the speed in each run. To compensate for issues, start with a
higher speed level. Also increase the number of runs to 10.
Furthermore, http gets the same multiple-run loop, and the log
files are changed to indicate the protocol being tested, as it's
not obvious which one fails if it fails in quiet mode.
Gbp-Dch: ignore
|
|
The epoch stripping in this code has been done since day one, but in
other places where we show a version, epochs are not stripped. If
epochs are present in packages they tend to be important information
which we can't just drop, and especially can't drop "sometimes"
(showing e.g. "2.7.7" where the real version is "1:2.7.7"), as that
confuses users and tools alike – so even if removing code in use for
(close to) 18 years feels wrong, it is probably the right choice for
consistency.
Closes: 818162
|
|
This makes it easier to understand what really is an error
and what is not.
|
|
For the non-pdiff case, we can have accurate progress reporting
because after fetching the {,In}Release files we know how many
IndexFiles will be fetched and what size they have. Therefore
initialise the filesize early (in pkgAcqIndex::Init) and ensure that
Acquire::Pulse() looks at already downloaded bits as well when
calculating the progress.
Also improve the debug output of Debug::acquire::progress.
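A hedged sketch of the accounting (field names are made up):

    #include <vector>

    struct ItemStat { unsigned long long FileSize, PartialSize; bool Done; };

    // count partially downloaded bytes of in-flight items next to the
    // bytes of already completed ones
    double Progress(std::vector<ItemStat> const &Items) {
       unsigned long long Total = 0, Current = 0;
       for (auto const &I : Items) {
          Total += I.FileSize;
          Current += I.Done ? I.FileSize : I.PartialSize;
       }
       return Total == 0 ? 100.0 : (100.0 * Current) / Total;
    }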
|
|
Git-Dch: Ignore
|
|
The problemresolver will set the candidate version for pkg P back
to the current version if it encounters an impossible-to-satisfy
critical dependency on P. However, it did not set the State of
the package back as well, which led to a situation where P is in
none of the Keep, Install, Upgrade or Delete states.
Note that this cannot be tested via the traditional sh-based
framework. I added a python-apt based test for this.
LP: #1550741
[jak@debian.org: Make the test not fail if apt_pkg cannot be
imported]
|
|
This was wrong and caused some issues because apt-key invoked the
host's apt-config with our library.
Gbp-Dch: ignore
|
|
This makes the test suite safe if we ever need to reject SHA1
signatures in an update.
|
|
The structure we parse the data into has a dedicated size field, but it
tends to be easier to handle it as a (very weak) checksum.
|
|
The URI describing an item can change via mirrors/redirectors, which
causes the .diff/Index files to get the wrong names in storage.
Git-Dch: Ignore
|
|
All other interactions with std::cout are flushed directly; just in the
stop case we hadn't done it – no problem, except if there is still output
coming after apt is done, like in the case of a post-invoke script
producing output.
Closes: 793672
|
|
Now that we ignore SHA1-only files it makes sense to also require
hashes for the compressed patches, as this was introduced in dak in the
same patchset as support for non-SHA1 hashes in the file itself, and
adding support in other archive creators (if they support pdiffs at all)
will likely come in the same batch.
The reason for the change itself is simple: if you are 'scared' enough
about the security of SHA1, you shouldn't uncompress a file you haven't
verified at all – after all, it could be exploiting a bug or be a zip
bomb.
|
|
Given that we refuse to use SHA1-only .diff/Index files, there is no
point in shipping and running code which pretends to check support for
them – which, given that all these tests are run 3 times, eats a
noticeable amount of time.
Git-Dch: Ignore
|
|
Ensure that .diff/Index files that only contain SHA1 values and no
SHA2 values are not used.
|
|
SHA1 is not reasonably secure anymore, so we should not consider it
usable anymore. The test suite is adjusted to account for this.
|
|
Using amd64 broke the test case on non-amd64 architectures. Query
the native architecture from dpkg and use that instead.
The definition of NATIVE is copied from the test
test-architecture-specification-parsing.
|
|
This effectively merges branch 'typofixes-vlajos-20150807' of github.com:vlajos/apt
with the following commit:
commit 13cacb3e2e2352ba701e769fc889e3344fabbf7e
Author: Veres Lajos <vlajos@gmail.com>
Date: Sun Aug 9 00:12:53 2015 +0100
typofix - https://github.com/vlajos/misspell_fixer
It has been rebased for a better commit message.
|
|
If a single pdiff fails, we have to fail the entire patching endeavour
and fall back to getting the complete file instead. That is easy in
serverside merged pdiffs, as we get them one by one. For clientside we
get them all at once though, which means that a failure in one has to
stop the entire pipeline, which works as expected (as proven by the
bug reporters, as they don't even notice it happening). The problem is
just that the first failing pdiff will do the cleanup, so another pdiff
which happens to be successfully acquired after we processed the failure
doesn't find the file it is supposed to use as its base anymore, so
the patch is renamed to what should be the unique extension and moved
into the current working directory. Processing is then stopped, as the
patch realizes that it isn't the last one which completed downloading.
On the plus side this means this is neither us using a bad temporary
location nor a security problem. It "just" unconditionally overwrites
files in your current working directory (if you happen to have them
named like a pdiff patch – a bit unlikely perhaps) and so drops files
there which are never used again.
I guess this was introduced for real in 4e3c5633b1e74b4f58b95f339cfbbf4cbf21ab3e,
as I made the need for the existence of the base file rather explicit
there, but the potential had lingered in the code for far longer.
Closes: #816837
|