|
Julian noticed on IRC that I fell victim to a lovely false friend by
referring to a 'planer' all the time, even though these are machines
to e.g. remove splinters from woodwork ("make stuff plane").
The term I meant is written this way in German (= with a single n),
but in English it takes two: 'planner'.
As this is unreleased code, all instances are switched without any
transitional provisions. That is also the reason why it is skipped in the changelog.
Thanks: Julian Andres Klode
Gbp-Dch: Ignore
|
|
In 385d9f2f23057bc5808b5e013e77ba16d1c94da4 I implemented the storage of
scenario files with the aim of enabling it by default for EIPP, but
implemented it first as an option for EDSP to keep it independent.
The reasons mentioned in the earlier commit (debugging and bug reports)
obviously apply here, especially as EIPP solutions aren't user-approved,
are nearly impossible to verify before starting the execution, and by the
time of an error the scenario has changed already, so that reproducing the
issue becomes hard(er).
|
|
Testing the current implementation benefits from being able to feed it
an EIPP request and have it produce a fully compliant response. It is
also a great test for EIPP in general.
|
|
The very first step in introducing the "external installation planner
protocol" (short: EIPP) as part of my GSoC 2016 project.
The description reads: APT-based tools like apt-get, aptitude, synaptic,
… work with the user to figure out how their system should look
after they are done installing/removing packages and their dependencies.
The actual installation/removal of packages is done by dpkg, with the
constraint that dependencies must be fulfilled at any point in time (e.g.
to run maintainer scripts).
Historically APT has taken a super-micro-management approach to this task,
which hasn't aged well over the years: it mostly ignores changes in
dpkg and has grown into an unmaintainable mess hardly anyone can debug and
everyone fears to touch – especially as more and more requirements are
tacked onto it, like handling cycles and triggers, dealing with
"important" packages first, package sources on removable media, and touching
minimal groups to be able to interrupt the process if needed (e.g.
unattended-upgrades). These not only sky-rocket complexity but can also
be mutually exclusive, as you e.g. can't have minimal groups and minimal
trigger executions at the same time.
|
|
This affects only compressors configured on the fly (rather than the
built-in ones, as those use a library).
|
|
Most (if not all) solvers should be able to run perfectly fine without
root privileges as they get the entire state they are supposed to work
on via stdin and do not perform any action directly, but just pass
suggestions on via stdout.
The new default is hence to run them all as _apt, but each solver can
configure another user if it chooses/must. The security benefits are
minimal at best, but it helps prevent silly mistakes (see
35f3ed061f10a25a3fb28bc988fddbb976344c4d) and that is always good.
Note that our 'apt' and 'dump' solvers already dropped privileges if they
had them.
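As an illustration of that stdin/stdout contract, here is a minimal sketch
of a do-nothing external solver; the names and the Error/Message stanza are
assumptions based on the EDSP documentation, not code from this commit:

    // Hypothetical solver: read the whole scenario from stdin, perform no
    // action on the system, and answer with an error stanza on stdout.
    #include <iostream>
    #include <string>

    int main() {
        std::string line;
        while (std::getline(std::cin, line))
            ;  // consume the scenario without acting on it
        std::cout << "Error: dummy-solver-no-solution\n"
                  << "Message: this sketch only demonstrates the stdin/stdout protocol\n\n";
        return 0;
    }

Because such a process never touches the system directly, running it as _apt
costs it nothing.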
|
|
gpg doesn't give us a UID on NODATA, which we were "expecting" (but not
using for anything), but just an error number. Instead of collecting
these as badsigners, which would trigger an "invalid signature" error with
remarks like "NODATA 1", we instead adapt a message similar to the NODATA
error for a clearsigned file (which is actually not reached anymore as we
split them up, which fails with a NOSPLIT error, which uses the same
general error message).
In other words: not a security-relevant change, just a user experience
improvement, as we now point users to the most likely cause of the
problem instead of saying "invalid signature", which would point them in
the direction of the archive being broken (for everyone) instead.
Closes: 823746
|
|
Most tests just need a signed repository and don't care whether it is signed
by an InRelease file or a Release.gpg file, so we can save some time by
just generating one of them by default.
Sounds like not much, but it quickly adds up to a few seconds with the
number of tests we have accumulated by now.
Git-Dch: Ignore
|
|
If a test signs release files only to throw one of them away in order to
test the other, we might as well save the time and not create it at all.
Git-Dch: Ignore
|
|
Daniel Kahn Gillmor highlights in the bug report that security isn't
improved by having the user import additional keys – especially as
importing keys securely is hard.
The bug report was initially about downgrading the warning to a notice, but
given the previously mentioned observation and the fact that we
weren't printing a warning (or a notice) for expired or revoked keys
providing a signature, we drop it completely. The code to display a
message if this was the only key is in another path – and is considered
critical.
Closes: 618445
|
|
Signatures on data can have an expiration date, too, which we hadn't
previously handled explicitly (no problem – gpg still exits with a non-zero
code, so apt notices the invalid signature), but the error message
wasn't as helpful as it could be (i.e. it didn't mention the key used to sign it).
|
|
Users tend to report these errors with just this error message… not very
actionable, and it is hard to figure out whether this is a temporary or
'permanent' mirror-sync issue or even the occasional apt bug.
Showing the involved hashsums and modification times should help in
triaging these kinds of bugs – and eventually we will have fewer of them
via by-hash.
The subheaders aren't marked for translation for now as they are
technical gibberish and probably easier to deal with if not translated.
After all, our iconic "Hash Sum mismatch" is translated at least.
These additions were proposed in #817240 by Peter Palfrader.
|
|
We have this situation in cases where parts of the transaction are
refused (e.g. in a hashsum mismatch) and the update is rerun (e.g. in the
hope that we get a mirror which is synced this time).
Previously we would ask the server with an If-Range request and in the best
case receive a 416 in response (less featureful servers might end up giving us
the entire file again, or we get the wrong file this time, giving us a
hashsum mismatch…), which is a waste of time if we already know by
checking the hashsums that we got the complete and correct file.
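For illustration, the round trip described above looked roughly like this;
the headers and status line are generic HTTP, the path and values are
invented, not apt's exact output:

    GET /debian/dists/unstable/main/binary-amd64/Packages.xz HTTP/1.1
    Host: deb.debian.org
    Range: bytes=53177-
    If-Range: Sat, 23 Jan 2016 10:23:17 GMT

    HTTP/1.1 416 Requested Range Not Satisfiable

With this change that request is skipped entirely whenever the hashes
already confirm that the file on disk is complete and correct.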
|
|
There is just no point in taking the time to acquire empty files –
especially as they will usually be transferred as tiny non-empty compressed
files anyway.
|
|
Using erase(pos) is invalid in our case here as pos must be a valid and
dereferenceable iterator, which isn't the case for an end iterator (e.g.
if we had no good signature).
The problem runs deeper still though, as VALIDSIG is a keyid while
GOODSIG is just a longid, so comparing them will always fail.
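A minimal sketch of the iterator pitfall; the names are illustrative, not
the actual apt code:

    #include <algorithm>
    #include <string>
    #include <vector>

    // erase(pos) needs a valid, dereferenceable iterator, so guard against
    // the end iterator (the "no good signature" case).
    void dropSigner(std::vector<std::string> &signers, std::string const &keyid) {
        auto const pos = std::find(signers.begin(), signers.end(), keyid);
        if (pos != signers.end())
            signers.erase(pos);
    }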
Closes: 818910
|
|
In Launchpad bug #1558484 a user reports that '@' in the authentication
tokens of a sources.list entry isn't parsed correctly in an older (precise)
version. It isn't the recommended way of specifying passwords and co
(auth.conf is), but we can at least test for regressions (and in this case
test at all… who was that "clever" boy disabling a test with exit……… oh,
never mind).
Git-Dch: Ignore
|
|
There is no point in resolving all addresses to their names; it just
seriously slows down the setup phase. So pass -n to stop resolving
names.
Gbp-Dch: ignore
|
|
The problemresolver will set the candidate version for a package P back
to the current version if it encounters an impossible-to-satisfy
critical dependency on P. However, it did not set the State of
the package back as well, which led to a situation where P is
in none of the Keep, Install, Upgrade, or Delete states.
Note that this cannot be tested via the traditional sh-based
framework. I added a python-apt based test for this.
LP: #1550741
[jak@debian.org: Make the test not fail if apt_pkg cannot be
imported]
|
|
This was wrong and caused some issues because apt-key invoked the host's
apt-config with our library.
Gbp-Dch: ignore
|
|
This makes the test suite safe if we ever need to reject SHA1
signatures in an update.
|
|
SHA1 is no longer reasonably secure, so we should not consider it
usable anymore. The test suite is adjusted to account for this.
|
|
This way we hopefully notice (new) warnings in this little helper.
Git-Dch: Ignore
|
|
|
|
The Date field in the Release file is useful to avoid allowing an
attacker to 'downgrade' a user to earlier Release files (and hence to
older states of the archive with open security bugs). It is also needed
to allow a user to define min/max values for the validation of a Release
file (with or without the Release file providing a Valid-Until field).
APT wasn't formally requiring this field before though, and the (arguably
non-binding and still incomplete) online documentation declares it
optional (until now), so we downgrade the error to a warning for now to
give repository creators a bit more time to adapt – the bigger ones
have had a Date field for years already, so the affected group should
be small in any case.
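For illustration, the header of a dated Release file might look like this
(values invented), with Valid-Until remaining optional:

    Origin: Debian
    Suite: unstable
    Codename: sid
    Date: Sat, 16 Jan 2016 10:23:17 UTC
    Valid-Until: Sat, 23 Jan 2016 10:23:17 UTC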
It should be noted that earlier apt versions had this as an error
already, but only showed it if a Valid-Until field was present (or the
user tried to use the configuration items for min/max valid-until).
Closes: 809329
|
|
In 321213f0dcdcdaab04e01663e7a047b261400c9c Andreas Cadhalpun corrected
the incorrect overriding of earlier better-fitting results with later
(semi-)matches – but that broke the case in which packages are in multiple
releases in the same version (and the user has both releases configured).
Closes: 812497
|
|
Some (older) versions of bash seem to be allergic to a function named
"aptautotest_grep_^apt" (note the caret). It is unlikely that we are going to
write autotests for such commands, so we could just skip those, but let's
instead just use "normal" characters in the names and strip the rest, as
we already did with the (arguably more common) '-'.
|
|
This way it works more similarly to the compressor binaries, which we
can thereby relieve of their job in the test framework, avoiding the
need to add e.g. liblz4-tool to the test dependencies.
|
|
Downloading and storing are two different operations where different
compression types can be preferred. For downloading we provide the
choice via Acquire::CompressionTypes::Order, as there is a choice to
be made between download size and speed – limited by what is available
in the repository.
Storage on the other hand has all compressions currently supported by
apt available, and to reduce the runtime of tools accessing these files the
compression type should be a low-cost format in terms of decompression.
apt traditionally stores its indexes uncompressed on disk, but has
options to keep them compressed. Now that apt downloads additional files
we also deal with files which simply can't be stored uncompressed as
they are just too big (like Contents for apt-file). Traditionally they
are downloaded in a low-cost format (gz) as repositories do not provide
other formats, but there might be even lower-cost formats for storage,
and repositories could introduce higher-cost formats for download.
Downloading an entire index then potentially requires recompression to
another format, so an update potentially takes longer – but big files
are usually updated via pdiffs, which have to de- and re-compress on the
fly anyhow, so no extra time is needed and in general it seems
beneficial to invest the time during update to save time later on file
access.
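A configuration sketch of the two sides mentioned above: the download
preference via Acquire::CompressionTypes::Order (named in the text), and –
as an assumption about the pre-existing "keep them compressed" knob – the
Acquire::GzipIndexes option for storage:

    // preferred download formats, best first
    Acquire::CompressionTypes::Order { "xz"; "gz"; };
    // keep the indexes compressed on disk instead of unpacking them
    Acquire::GzipIndexes "true";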
|
|
Less hardcoding should help while introducing new compressors.
Git-Dch: Ignore
|
|
The output changes slightly between different gpg versions, which we already
dealt with in the main testcase for apt-key, but there are two more
testcases which do not test both versions explicitly and so still had gpg1
output to check against, as that is the default at the moment.
Git-Dch: Ignore
|
|
apt-key internally creates a script (since ~1.1) which it will call to
avoid dealing with an array of different options in the code itself, but
while writing this script it wraps the values in "", which will cause
the shell to evaluate their content upon execution.
To make 'use' of this, either set an absolute gpg command or TMPDIR to
something as interesting as:
"/tmp/This is fü\$\$ing cràzy, \$(man man | head -n1 | cut -d' ' -f1)\$!"
Whether such paths can be encountered in reality is a different question…
|
|
This doesn't allow all tests to run cleanly, but it at least allows
writing tests which can run successfully in such environments.
Git-Dch: Ignore
|
|
Use asprintf() so we have easy error detection and do not depend
on PATH_MAX.
Do not add another separator to the generated path; in both cases
the path inside the chroot is guaranteed to have a leading /
already.
Also pass -Wall to gcc.
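A sketch of the described approach; the function and parameter names are
illustrative, and asprintf() is the GNU extension named above:

    #include <cstdio>
    #include <cstdlib>

    // asprintf() sizes the buffer itself and returns -1 on failure, so there
    // is no PATH_MAX dependency. 'path' already carries a leading '/', so no
    // extra separator is inserted between the two parts.
    char *BuildPathInChroot(const char *chrootdir, const char *path) {
        char *result = nullptr;
        if (asprintf(&result, "%s%s", chrootdir, path) == -1)
            return nullptr;      // allocation failed: easy error detection
        return result;           // caller frees the string with free()
    }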
|
|
This caused test-bug-717891-abolute-uris-for-proxies to fail
Gbp-Dch: ignore
|
|
This breaks a lot of test cases
Gbp-Dch: ignore
|
|
The allocated buffer was one byte too small. Allocate a buffer
of PATH_MAX instead and use snprintf(), as suggested by Martin
Pitt.
|
|
Trying to clean up directories which do not exist seems rather silly if
you think about it, so let apt think about it and stop doing it.
Whether this fixes anything for them depends a bit on the caller, as they
might try to acquire a lock or do other clever things first, as apt does.
Closes: 807477
|
|
If we can't work with the hashes we parsed from the Release file, we now
display an error message if the Release file includes only weak
hashes, instead of downloading the indexes and failing to verify them
with "Hash Sum mismatch" even though the hashes didn't mismatch (they
were just weak).
If for some (unlikely) reason we have got weak hashes only for
individual targets, we will show a warning to this effect (again, before
downloading and failing on the index itself).
Closes: 806459
|
|
'which' is a Debian-specific tool packaged in debianutils (essential),
while 'command' is a shell builtin defined by POSIX.
Closes: 807144
Thanks: Mingye Wang for the suggestion.
|
|
Git-Dch: Ignore
|
|
Dropping privileges is an involved process for code and system alike, so
ideally we want to verify that all the work wasn't in vain. Stuff
designed to sidestep the usual privilege checks, like fakeroot (and its
many alternatives), has its problems with this though, partly due to
missing wrapping (#806521), partly because e.g. regaining root from an
unprivileged user is part of their design. This commit therefore disables
most of these checks by default so that apt runs fine again in a
fakeroot environment.
Closes: 806475
|
|
debci seems to have a cleaner environment now and even if not we could
never guess nogroup, so figure it out properly via 'id'.
Git-Dch: Ignore
|
|
In ce1f3a2c we started warning about failed unlink calls, which we
consistently do for directories. That isn't a problem, as directories
usually aren't in the places we want to clean up – with the potential
exception of "lost+found", so let's ignore it like we ignore our own
partial/ subdirectory.
Closes: 805424
|
|
Git-Dch: Ignore
|
|
space-gapping: '-o option= value'
That is a very old feature (straight from 1998), but it is super
surprising if you try setting empty values and instead get error
messages or a non-empty value, as the next parameter is treated as the
value – which could have been empty, so if for some reason you need a
compatible way of setting an empty value try: '-o option="" ""'.
I can only guess that the idea was to support '-o option value', but we
survived 17 years without it, so we will do fine in the future I guess.
Similar is the case for '-t= testing', even though '-t testing' existed
before and the code even tried to detect mistakes like '-t= -b' … all
gone now.
Technically this is a major interface break, as it removes a feature and
replaces it with another behaviour. In practice I really hope for my and
their sanity that nobody was using this; but if for some reason you do:
remove the space and be done.
I found the patch and the bug report actually only after the fact, but
it's reassuring that others were puzzled by this as well, and hence a
thanks is in perfect order here, as the patch is practically identical
[except that this one here adds tests and other bonus items].
Thanks: Daniel Hartwig for initial patch.
Closes: 693092
|
|
Notices are just hints, but if they are printed in tests, they should be
expected, and if they are not, the test should fail. No current test has
this problem, so this is just potential future-proofing.
Git-Dch: Ignore
|
|
Allows users who know what they are getting themselves into with this
trick to e.g. disable privilege dropping for file:// repositories until
they can fix up the permissions on them. It also helps the test
framework and people with a similar setup (= me) to run in less modified
environments.
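A configuration sketch, assuming APT::Sandbox::User is the knob in question
(it exists in apt, but is not necessarily the exact option this commit adds):

    // run the acquire methods without dropping to the _apt user
    APT::Sandbox::User "root";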
|
|
This flag is generally handy to avoid having to deal with IPv6 results on an
IPv4-only system, but it prevents e.g. the testcases from working if the
test system has no configured address at the moment (except loopback), so
allow it to be sidestepped and let the testcases sidestep it.
Git-Dch: Ignore
|
|
Based on a discussion with Niels Thykier, who asked for Contents-all, this
makes apt try, for every architecture-dependent file, to also get a file
for the architecture 'all', which is now treated internally as an official
architecture that is always around (like native). This way arch:all
data can be shared instead of being duplicated for each architecture,
which required the user to download the same information again and again.
There is one problem however: In Debian there is already a binary-all/
Packages file, but the binary-any files still include arch:all packages,
so that downloading this file now would be a waste of time, bandwidth
and diskspace. We therefore need a way to decide if it makes sense to
download the all file for Packages in Debian or not. The obvious answer
would be a special flag in the Release file indicating this, which would
need to default to 'no' and every reasonable repository would override
it to 'yes' in a few years time, but the flag would be there "forever".
Looking closer at a Release file we see the field "Architectures", which
doesn't include 'all' at the moment. With the idea outlined above that
'all' is a "proper" architecture now, we interpret this field as being
authoritative in declaring which architectures are supported by this
repository. If it lists 'all', apt will try to get the arch:all files; if
not, they will be skipped. This gives us another interesting feature: if I
configure a source to download armel and mips, but it declares it supports
only armel, apt will now print a notice saying as much. Previously this was
a very cryptic failure. If on the other hand the repository supports mips,
too, but for some reason doesn't ship mips packages at the moment, this
'missing' file is silently ignored (= that is the same as the repository
including an empty file).
The Architectures field isn't mandatory though, so if it isn't there we
assume that every architecture is supported by this repository; only if the
field is present and doesn't list 'all' is the arch:all file skipped.
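As a sketch, a repository that ships arch:all files would announce it in its
Release file like this (values invented):

    Origin: Debian
    Suite: unstable
    Architectures: amd64 armel mips all
    Components: main contrib non-free

apt then fetches e.g. main/binary-all/Packages alongside the configured
architectures; if 'all' were missing from that list, those downloads would
be skipped.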
|
|
apt is an interactive command, and the reason we haven't set this option
for everything is mostly to keep compatibility for a little while
longer to allow scripts to be changed if need be.
|