Git-Dch: Ignore
|
|
Weak had no dedicated option before, and Insecure and Downgrade were both
global options, which, given the effect they all have on security, is
rather bad. Being able to set them only for individual repositories isn't
great either, but it is at least slightly better and also more consistent
with other per-repository settings.
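As a sketch of the intended per-repository usage in sources.list (option
names as documented in sources.list(5); the URI and suite are placeholders):
  deb [allow-insecure=yes] http://archive.example.org/debian sarge main
  deb [allow-weak=yes allow-downgrade-to-insecure=yes] http://archive.example.org/debian sarge main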
|
|
For "Hash Sum mismatch" that info doesn't make a whole lot of
difference, but for the new insufficient info message an indicator that
while this hashes are there and even match, they aren't enough from a
security standpoint.
|
|
Downloading and then saying "Hash Sum mismatch" isn't very friendly from a
user POV, so with this change we try to detect such cases early on and
report them, preferably before the download has even started.
Closes: 827758
|
|
Handling the extra check (and force requirement) for security downgrades
in our AllowInsecureRepositories checker ensures this check happens
everywhere instead of just in the most common place, and requiring a
little extra force in such cases is always good.
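For reference, the global knobs involved can be set in apt.conf roughly
like this (values shown are the less secure, non-default settings):
  Acquire::AllowInsecureRepositories "true";
  Acquire::AllowDowngradeToInsecureRepositories "true";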
|
|
APT can be forced to deal with repositories which have no security
features whatsoever, so just giving up on repositories which "just" fail
our current criteria of good security features sets the wrong incentive.
Of course, repositories are better off fixing their setup to provide the
minimum of security features, but sometimes this isn't possible:
historic repositories, for example, which do not change (anymore).
This also fixes a problem with repositories which are marked as trusted,
but provide only weak security features, which would fail the parsing of
the Release file.
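For illustration, such a repository would typically be marked along these
lines in sources.list (URI and suite are placeholders):
  deb [trusted=yes] http://archive.example.org/debian-archive sarge main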
Closes: 827364
|
|
Insecure repositories result in error messages by default, which causes
the acquire run to fail hard, but non-failing repositories are still
updated, just like in the slightly less hard failures which got this
behaviour in 35664152e47a1d4d712fd52e0f0a2dc8ed359d32.
|
|
There is a subtle difference between an empty setting and "DIRECT" in
the configuration, as the latter overrides the generic settings while the
former does not. Also, non-zero exit codes should really be reported as
an error rather than silently discarded.
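A minimal sketch of that difference in apt.conf terms (host names are
placeholders):
  Acquire::http::Proxy "http://proxy.example.org:8080/";
  Acquire::http::Proxy::deb.example.org "DIRECT"; // overrides the generic proxy
  Acquire::http::Proxy::other.example.org "";     // falls through to the generic proxy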
|
|
Regression introduced in 8f858d560e3b7b475c623c4e242d1edce246025a.
Commands are probably better off always having output though, as the
fall-through to the generic proxy settings is likely not intended. As
documenting and implementing this more consistently is kind of a
regression though, it is split off into the next commit.
Closes: 827713
|
|
As reported upstream in
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=71556
the implementation of std::get_time is currently not as accepting as
strptime is, especially in how hours should be formatted.
Just reverting 9febc2b238e1e322dce1f94ecbed46d595893b52 would be
possible, but then we would reopen the problems fixed by it, so instead
I opted for a rewrite of the parsing logic. That makes this method a lot
longer, but it provides the same benefits the switch to std::get_time was
intended to give us and it decouples us from the fix for this issue in
the standard library implementation of GCC.
LP: 1593583
|
|
Merging by URI means that having sources lines with different URI
methods results in 'strange' warning and error messages, which aren't
very friendly from a user point of view, as not encoding the method in
the filename is effectively an implementation detail.
Merging by filename removes these messages and makes everything "work",
even if it isn't working the way it is configured, as the indexes aren't
acquired over the method given, but over the first method for this
release file (which arguably is an implementation detail stemming from
the filename encoding, too).
So neither direction is perfectly "right", but personally I prefer
"magic" over strange error messages (and doing a full-circle detection
of this with its own messages, which would need to be translated, feels
like way too much effort for dubious gain).
Closes: 826944
|
|
First seen in #826783, but as that bug log also shows leaked uncompressed
files we don't close it just yet.
|
|
This affects only compressors configured on the fly (rather than the
built-in ones, as those use a library).
|
|
We end our operation by calling "dpkg --configure -a", so instead of
running a (big) configure run with all packages mentioned explicitly
before this, we simply skip them and let them be handled by this call
implicitly.
There isn't really an observable gain to be had here in terms of speed,
but it helps in avoiding an (uncommon) problem of passing dpkg a command
line which is too long and which we would therefore split up (probably
incorrectly).
|
|
Most (if not all) solvers should be able to run perfectly fine without
root privileges as they get the entire state they are supposed to work
on via stdin and do not perform any action directly, but just pass
suggestions on via stdout.
The new default is hence to run them all as _apt, but each solver can
configure another user if it chooses to/must. The security benefits are
minimal at best, but it helps prevent silly mistakes (see
35f3ed061f10a25a3fb28bc988fddbb976344c4d) and that is always good.
Note that our 'apt' and 'dump' solvers already dropped privileges if they
had them.
|
|
For bug reports and the like it could be handy to have the scenario and
all the settings used in it around later for inspection, for EDSP-like
protocols. EDSP itself might not be the most interesting case as the user
can still interrupt the process before the solution is applied and users
tend to have an opinion on the "rightness" of a solution, so it is
disabled by default.
|
|
Currently an EDSP solver gets sent basically all versions, which means
the absolute count is the same, but that might not be true forever (and
with the skipping of rc-only versions it kind of already isn't) and even
if it were true, segfaulting on bad input seems wrong.
|
|
First seen on Hurd, but easily reproducible on all systems by removing
the 'execute' bit from the current working directory and watching some
tests (mostly those expecting no output) fail due to find printing:
"find: Failed to restore initial working directory: …"
Samuel Thibault says in the bug report:
| To do its work, find first records the $PWD, then goes to
| /etc/apt/trusted.gpg.d/ to find the files, and then goes back to $PWD.
|
| On Linux, getting $PWD from the 700 directory happens to work by luck
| (POSIX says that getcwd can return [EACCES]: Search permission was denied
| for the current directory, or read or search permission was denied for a
| directory above the current directory in the file hierarchy). And going
| back to $PWD fails, and thus find returns 1, but at least it emitted its
| output.
|
| On Hurd, getting $PWD from the 700 directory fails, and find thus aborts
| immediately, without emitting any output, and thus no keyring is found.
|
| So, to summarize, the issue is that since apt-get update runs find as a
| non-root user, running it from a 700 directory breaks find.
Solved as suggested by changing to '/' before running find, with some
extra paranoid care taken to ensure the paths we give to find really are
absolute paths first (they really should be, but a TMPDIR=. or a similar
Dir::Etc::trustedparts setting could exist somewhere in the wild).
The commit also takes the opportunity to make these lines slightly less
error-ignoring and to have the two find calls use (mostly) the same
parameters.
Thanks: Samuel Thibault for 'finding' the culprit!
Closes: 826043
|
|
In 8b79c94af7f7cf2e5e5342294bc6e5a908cacabf the change to the C++ way of
setting the locale causes us to be terminated if an ungenerated locale is
used as LC_ALL (or similar) – but we don't want to fail here, we just want
to carry on as before with setlocale, which we call in that case just for
good measure.
|
|
HTTP/1.1 hardcodes GMT (RFC 7231 §7.1.1.1) and what is good enough for the
internet must be good enough for us™ as we reuse the implementation
internally to parse (most) dates we encounter in various places like the
Release files with their Date and Valid-Until header fields.
Implementing a fully timezone-aware parser just feels too hard for no
effective benefit as it would take 5+ years (= until LTS releases are out
of fashion) until a repository could use non-UTC dates and expect it to
work – not counting non-apt implementations which might or might not
want to encounter only UTC here as well.
As a bonus, this eliminates the use of an instance of setlocale in
libapt.
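For reference, the fixed format this parser has to accept looks like the
example date given in RFC 7231
  Sun, 06 Nov 1994 08:49:37 GMT
and the Date/Valid-Until fields in Release files follow the same shape,
e.g. (values made up for illustration):
  Date: Sat, 25 Jun 2016 10:00:00 UTC
  Valid-Until: Sat, 02 Jul 2016 10:00:00 UTC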
Closes: 819697
|
|
Setting the C++ locale via std::locale::global(std::locale("")); – which
would otherwise default to the classic C locale (aka: unaffected by
setlocale) – affects the formatting of numeric types in IO streams. For
output meant for humans that is perfectly sensible, but it breaks our many
text interfaces which are used and parsed by us and others who do not
expect the numbers to be locale-formatted.
Closes: #825396
|
|
libapt allows compressors to be configured for use by its system via
configuration, implemented in 03bef78461c6f443187b60799402624326843396,
but that was never really documented and also only partly working, which
also explains why the tests weren't using it…
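As a rough sketch of what such an on-the-fly compressor definition looks
like in apt.conf terms – the compressor name, binary and arguments are
placeholders, and the exact key names are given here as an assumption, so
consult apt.conf(5) of the respective version:
  APT::Compressor::mycomp::Name "mycomp";
  APT::Compressor::mycomp::Extension ".mycomp";
  APT::Compressor::mycomp::Binary "mycompressor";
  APT::Compressor::mycomp::Cost "150";
  APT::Compressor::mycomp::CompressArg:: "-9";
  APT::Compressor::mycomp::UncompressArg:: "-d";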
|
|
No real code change; just moving code around heavily to decouple the
EDSP-specific parts from those we can reuse for EDSP-like protocols.
Git-Dch: Ignore
|
|
The report mentions "apt list --upgradable", but there are others which
show inconsistent behavior ranging from segfaulting to doing something
with the partial (and hence incomplete) data. We had a recent report
about sources.list (#818628), this one mentions preferences, and the
obvious next step is conf files… so the testcase is adapted to check all
three in file and directory versions and to run a bunch of commands each
time, which should all behave more or less the same in such a case
(aka error out).
Closes: 824503
|
|
This commit moves the creation of the fetcher, and with it the
calculation of the filenames, before the code generating the various
lists detailing the solution. This means that a simulation comes ever so
slightly closer to a real run as it will require and parse the package
indexes for filenames and the queuing of URIs, so that a simulation
"using" an unavailable download method actually fails now.
The real benefit of this change though is that the rather special but
nonetheless handy --no-download --fix-missing mode now actually shows
the solution it will apply to the system rather than the solution it
would apply if it could download all not-yet-downloaded packages.
|
|
Errors cause a kind of automatic no already, but warnings and notices
are only displayed at the end of the apt execution even though they
could affect the choice of saying yes/no to questions: e.g. if a
configuration (file) you wanted to have an effect was ignored, or if an
external solver you used generated warnings suggesting that the solution
might be valid, but bogus nonetheless, and similar things.
Note that this only moves those messages up to the question if the
answer is interactive – not if e.g. -y is used or no question is asked at
all – so this has an effect only on interactive usage of apt(-get), not
on scripts which might be parsing apt output.
|
|
This fixes comparisons where either the stored or the input string
has a trailing comma.
|
|
Unexpected are, for example, removal requests for versions which aren't
installed, installations of already installed versions & requests to
install and remove a package at the same time.
|
|
Failures can happen and APT will do a partial cache update regardless.
Because APT ensures that the list directory is in a sane state, it makes
sense to also call success hooks if success was only partial – otherwise
it loses sync with APT. Most importantly, not doing so causes the
appstream cache to be empty, see launchpad bug #1562733.
This is somewhat overly optimistic though: As soon as any repository
has nonexistent optional files, the missing optional files are also
treated as success, which means a single broken repository without an
InRelease file still runs Success hooks, even though it really should
not.
|
|
This prevented some sources.list entries from working, an example
of which can be found in the test.
|
|
Versions which are only available in dpkg/status aren't installable and
apt doesn't pick them as candidates for this reason – for the same reason
such packages shouldn't be sent to an external solver via EDSP. The
packages are pinned to -1, but if the solver has strict pinning disabled
it could end up picking this version anyhow – which is a request apt
cannot satisfy.
Reported-By: Maximiliano Curia <maxy@debian.org> on IRC
|
|
gpg doesn't give us a UID on NODATA, which we were "expecting" (but not
using for anything), but just an error number. Instead of collecting
these as badsigners, which would trigger an "invalid signature" error with
remarks like "NODATA 1", we adapt a message similar to the NODATA
error for a clearsigned file (which is actually not reached anymore as we
split them up, which fails with a NOSPLIT error, which uses the same
general error message).
In other words: Not a security-relevant change, just a user experience
improvement, as we now point users to the most likely cause of the
problem instead of saying "invalid signature", which would point them in
the direction of the archive being broken (for everyone) instead.
Closes: 823746
|
|
A frontend like apt-file is only interested in a specific set of files
and selects those easily via "Created-By". If it supports two locations
for those files though, it would need to select both, and a user would
need to know that implementation detail for sources.list configuration.
The "Identifier" field is hence introduced, which by default has the same
value as "Created-By", but can be freely configured – especially, it can
be used to give two indexes the same identifier.
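As an illustration of the intended selection, a frontend could then ask
for all matching targets regardless of their location with a single
filter (the "Contents-deb" value is an assumption made for this example):
  apt-get indextargets "Identifier: Contents-deb"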
|
|
Sometimes index files are in different locations in a repository, as is
currently the case for Contents files, which are per-component in
Debian, but aren't in Ubuntu. This has historic reasons and will perhaps
change soon, but such transitions can always happen again in the
future, so we should prepare:
Introduced is a new field declaring that the current item should only be
downloaded if the mentioned item wasn't, allowing for transitions without
a flag day in clients and archives.
This isn't implemented 'simpler' with multiple MetaKeys as items (could)
change their descriptions and perhaps also other configuration bits with
their location.
|
|
It looks a bit strange from the outside to have multiple "native
architectures", but all of this is considered an implementation detail
and e.g. packages of arch:all are equal to native packages in dependency
resolution.
|
|
Commit 9b8034a9fd40b4d05075fda719e61f6eb4c45678 only deals with
InRelease properly and generates broken URIs in case the mirror (or
really the archive) has no InRelease file.
[As this was in no released version there is no need to clutter the
changelog with a fix notice.]
Git-Dch: Ignore
|
|
Most tests just need a signed repository and don't care if it is signed
by an InRelease file or a Release.gpg file, so we can save some time by
just generating one of them by default.
That doesn't sound like much, but it quickly adds up to a few seconds
with the amount of tests we have accumulated by now.
Git-Dch: Ignore
|
|
If a test just signs release files only to throw one of them away to
test the other, we can just as well save the time and not create it in
the first place.
Git-Dch: Ignore
|
|
Always those silly mistakes. Do what I mean, not what I said…
Reported-By: Travis
Git-Dch: Ignore
|
|
Broken in a4b8112b19763cbd2c12b81d55bc7d43a591d610.
If an item has a description which includes no space and is redirected
to another mirror, the code which wants to rewrite the description
expects a space in there, but can't find it, and the unguarded substr
call on the string will fail by throwing an exception…
Guard it properly and everything is fine.
|
|
We want to stop hard-depending on gnupg and for this it is essential
that apt-key isn't used in any critical execution path, which
maintainer scripts are. Especially as it is likely that these scripts
call apt-key either only for (potentially now outdated) cleanup or still
don't use the much simpler trusted.gpg.d infrastructure.
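For illustration (names and paths are placeholders): instead of calling
  apt-key add /usr/share/example/archive-key.asc
from a maintainer script, a package can simply ship its keyring as e.g.
/etc/apt/trusted.gpg.d/example-archive.gpg and let dpkg handle its
installation and removal like any other file.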
|
|
Users have the option since apt >= 1.1 to enforce that a Release file is
signed with specific key(s), either via a keyring filename or via
fingerprints. This commit adds an entry with the same name and value
(except that it doesn't accept filenames, for obvious reasons) to the
Release file so that the repository owner can set a default value for
this setting affecting the *next* Release file, not the current one,
which provides functionality similar to "HTTP Public Key Pinning". The
pinning is in effect as long as the (then old) Release file is considered
valid, but it is also ignored if the Release file has no Valid-Until at
all.
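A sketch of both sides of that pinning (the fingerprint is a placeholder
value):
  sources.list:  deb [signed-by=0123456789ABCDEF0123456789ABCDEF01234567] https://deb.example.org/debian stable main
  Release:       Signed-By: 0123456789ABCDEF0123456789ABCDEF01234567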
|
|
A keyring file can include multiple keys, so it's only fair for
transitions and such to support multiple fingerprints as well.
|
|
We parse the messages we receive into two big categories: Most of the
messages have a keyid as well as a userid, and as they are errors we want
to show the userids as well. The other category consists of errors, too,
but these have no userid (like NO_PUBKEY). Explicitly expressing this in
code should make it a bit easier to look at and it also helps in dropping
additional fields or just the newline at the end consistently.
Git-Dch: Ignore
|
|
Daniel Kahn Gillmor highlights in the bug report that security isn't
improved by having the user import additional keys – especially as
importing keys securely is hard.
The bug report was initially about dropping the warning to a notice, but
given the previously mentioned observation and the fact that we
weren't printing a warning (or a notice) for expired or revoked keys
providing a signature, we drop it completely, as the code to display a
message if this was the only key is in another path – and is considered
critical.
Closes: 618445
|
|
Signatures on data can have an expiration date, too, which we hadn't
previously handled explicitly (no problem – gpg still has a non-zero
exit code, so apt notices the invalid signature), so the error message
wasn't as helpful as it could be (aka mentioning the key used to sign).
|
|
The upstream documentation says about KEYEXPIRED:
"This status line is not very useful". Indeed, it doesn't mention which
key is expired, and it suggests using the other message, which does.
|
|
This basically introduces ~33 flags in the output, but a package can
have only ~11 of them displayed at the same time. There is quite a bit
of duplication as well (an uninstalled package is by definition a
newinstall if it is getting installed), but as this is debug output we are
better off showing them all in case one of them isn't set in the way it
is supposed to be set.
Git-Dch: Ignore
|
|
The redesign of multivalue options in 463c8d801595ce5ac94d7c032264820be7434232
caused the parser to look for <multivalue>{Add,Remove} (no hyphen)
instead of the expected <multivalue>-{Add,Remove}.
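For illustration, the affected option form as it appears in deb822-style
sources (the values are placeholders):
  Types: deb
  URIs: http://deb.example.org/debian
  Suites: stable
  Components: main
  Architectures-Add: armhf
  Architectures-Remove: i386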
|
|
Using Pkg.CandVersion() here is wrong as its implementation will return
a candidate based just on the default policy settings, ignoring user
preferences and otherwise set candidates (aka: it sidesteps the
pkgDepCache).
This causes M-A:same libraries to be detected as screwed even though
they aren't, so that they end up being kept back.
Reported-By: Felipe Sateler on IRC
|