APT in 1.6 saw me rewriting the mirror:// transport method, which works
comparably to the decommissioned httpredir.d.o, "just" that apt requests
a mirror list and performs all the redirections internally, with all the
bells like parallel download and automatic fallback (more details in the
apt-transport-mirror manpage included in the 1.6 release).
The automatic fallback is the problem here: The intent is that if a file
fails to be downloaded (e.g. because the mirror is offline, broken,
out-of-sync, …), instead of erroring out the next mirror in the list is
contacted for a retry of the download.
Internally the acquire process of an InRelease file (works with the
Release/Release.gpg pair, too) happens in steps: 1) download file and 2)
verify file, both handled as URL requests passed around. Due to an
oversight the fallbacks for the first step are still active for the
second step, so that the successful download from another mirror stands
in for the failed verification… *facepalm*
Note that the attacker cannot tell from the request arriving for the
InRelease file whether the user is using the mirror method or not. If
the entire traffic is observed, Eve might be able to spot the request for
a mirror list, but that might or might not reveal whether the following
requests for InRelease files are based on that list or belong to another
sources.list entry not using mirror (users also have the option of
keeping the mirror list locally, via e.g. mirror+file://, instead of on
a remote host). If the user isn't using mirror:// for this InRelease
file, apt will fail very visibly as intended.
(The mirror list needs to include at least two mirrors and to work
reliably the attacker needs to be able to MITM all mirrors in the list.
For remotely accessed mirror lists this is no limitation as the attacker
is in full control of the file in that case)
Fixed by clearing the alternatives after a step completes (and moving a pimpl
class further to the top to make that valid, compilable code). mirror://
is at the moment the only method using this code infrastructure (for all
others this set is already empty) and the only method-independent user
so far is the download of deb files, but those are downloaded and
verified in a single step; so there shouldn't be much opportunity for
regression here even though a central code area is changed.
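A rough conceptual sketch of what "clearing the alternatives after a step
completes" boils down to; the class and member names here are illustrative
only, not apt's actual acquire code:

  // Illustrative C++ sketch, not apt source: an item keeps a list of
  // alternative mirror URIs to fall back to while *downloading*; once a
  // step is done the list must be reset so a later *verification* failure
  // cannot be silently retried against the next mirror.
  #include <string>
  #include <vector>

  struct AcquireItemSketch
  {
     std::string URI;
     std::vector<std::string> AlternativeURIs; // other mirrors from the list

     void StepCompleted()
     {
        // the essence of the fix: no fallback state leaks into the next step
        AlternativeURIs.clear();
     }
  };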
Upgrade instructions: Given that all apt-based frontends are affected,
that even additional restrictions like signed-by are bypassed, and that
an attack in progress is hardly visible in the progress reporting of an
update operation (the InRelease file is marked "Ign", but no fallback to
"Release/Release.gpg" happens) and leaves no trace (except, of course,
files downloaded from the attacker's repository), the best course of
action might be to change the sources.list to not use the mirror family
of transports ({tor+,…}mirror{,+{http{,s},file,…}}) until a fixed
version of the src:apt packages is installed.
Regression-Of: 355e1aceac1dd05c4c7daf3420b09bd860fd169d,
57fa854e4cdb060e87ca265abd5a83364f9fa681
LP: #1787752
|
|
We forgot to set the variable for the selection changes. Let's
set it for that and some other dpkg calls.
Regression-Of: c2c8b4787b0882234ba2772ec7513afbf97b563a
|
|
The dpkg frontend lock is a lock dpkg tries to acquire,
except if the frontend has already acquired it.
This fixes a race condition in the install command where the
dpkg lock is not held for a short period of time between
different dpkg invocations.
For this reason we also define an environment variable
DPKG_FRONTEND_LOCKED for dpkg invocations so dpkg knows
not to try to acquire the frontend lock because it's held
by a parent process.
We can set DPKG_FRONTEND_LOCKED only if the frontend lock
really is held; that is, if our lock count is greater than 0
- otherwise an apt client not using the LockInner family of
functions would run dpkg without the frontend lock set, but
with DPKG_FRONTEND_LOCKED set. Such a process has a weaker
guarantee: Because dpkg would not lock the frontend lock
either, the process is prone to the existing races, and,
more importantly, so is a new style process.
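A minimal sketch of that conditional, assuming a hypothetical RunDpkg()
helper and a lock counter maintained by the LockInner/UnLockInner functions;
this is illustrative and not apt's actual debSystem code:

  // Only advertise the frontend lock to dpkg if we actually hold it;
  // otherwise dpkg should take (and release) the frontend lock itself.
  #include <cstdlib>
  #include <unistd.h>

  static int FrontendLockCount = 0; // maintained by LockInner()/UnLockInner()

  static void RunDpkg(char * const Args[])
  {
     if (FrontendLockCount > 0)
        setenv("DPKG_FRONTEND_LOCKED", "true", 1);
     else
        unsetenv("DPKG_FRONTEND_LOCKED");
     execvp(Args[0], Args); // hand over to dpkg
  }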
Closes: #869546
[fixups: fix error messages, add public IsLocked() method, and
make {Un,}LockInner return an error on !debSystem]
|
|
The random_device fails if not enough entropy is available. We do
not need high-quality entropy here, though, so let's switch to a
seed based on the current time in nanoseconds, XORed with the PID.
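A sketch of such a seed, for illustration (the exact generator and clock apt
uses may differ):

  #include <chrono>
  #include <random>
  #include <unistd.h>

  // Low-quality but always-available seed: current time in nanoseconds
  // XORed with the process id, fed into a cheap standard generator.
  static std::minstd_rand MakeGenerator()
  {
     auto const ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
           std::chrono::steady_clock::now().time_since_epoch()).count();
     return std::minstd_rand(static_cast<unsigned int>(ns) ^
                             static_cast<unsigned int>(getpid()));
  }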
|
|
debSystem uses a reference counted lock, so you can acquire it
multiple times in your applications, possibly nested. Nesting
locks causes a fd leak, though, as we only increment the lock
count when we already have locked twice, rather than once, and
hence when we call lock the second time, instead of increasing
the lock count, we open another lock fd.
This fixes the code to check if we have locked at all (> 0).
There is no practical problem here aside from the fd leak, as
closing the new fd releases the lock on the old one due to the
weird semantics of fcntl locks.
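The off-by-one in a nutshell, as a self-contained sketch (this is not apt's
debSystem code; names and the locking details are illustrative):

  #include <fcntl.h>
  #include <unistd.h>

  // A reference-counted lock: nested Lock() calls must check "LockCount > 0"
  // (the fix), not "> 1" (the bug), or the second call opens a second fd.
  class RefCountedLockSketch
  {
     int LockFD = -1;
     int LockCount = 0;
  public:
     bool Lock(char const * const Path)
     {
        if (LockCount > 0)   // was "> 1", so the second call leaked an fd
        {
           ++LockCount;
           return true;
        }
        LockFD = open(Path, O_RDWR | O_CREAT, 0640);
        if (LockFD < 0)
           return false;
        struct flock fl = {};
        fl.l_type = F_WRLCK;
        fl.l_whence = SEEK_SET;
        if (fcntl(LockFD, F_SETLK, &fl) < 0)
        {
           close(LockFD);
           LockFD = -1;
           return false;
        }
        LockCount = 1;
        return true;
     }
     void UnLock()
     {
        if (LockCount > 0 && --LockCount == 0)
        {
           close(LockFD); // closing the fd also drops the fcntl lock
           LockFD = -1;
        }
     }
  };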
|
|
Clock changes while apt is running can result in strange reports
confusing (and amusing) users. Sadly, to keep the ABI for now the
code is a bit more ugly than it would need to be.
|
|
Commit d7c92411dc1f4c6be098d1425f9c1c075e0c2154 introduced a warning for
non-existent files from components not mentioned in Components to hint
users at a misspelling or the disappearance of a component.
The debian-installer subcomponent isn't actively advertised in the
Release file though, so if apt ends up acquiring a file which
doesn't exist for this component (like Translation files) apt would
produce a warning:
W: Skipping acquire of configured file
'main/debian-installer/i18n/Translation-en' as repository
'http://deb.debian.org/debian buster InRelease' doesn't have the
component 'main/debian-installer' (component misspelt in sources.list?)
We prevent this in the future by checking whether any file from this
component exists, which still produces the warning for the intended
cases while silencing it in the d-i case.
This could potentially cause the warning not to be produced in cases it
should be if some marginal file remains, but as this message is just a
hint and the setup a bit pathological let's ignore it for now.
There is also the possibility of no file being present, as they would
all be 0-length files and this is a "hidden" component, but that would be
easy to work around from the repository side and isn't really actively used
in the wild at the moment.
Closes: #879591
|
|
more volatile: build-dep foo.deb/release & show foo.deb
See merge request apt-team/apt!14
|
|
Don't force the same mirror for by-hash URIs
See merge request apt-team/apt!15
|
|
When running with Debug::pkgAutoRemove=yes, explain why certain packages
are being marked, either because they're marked essential/important or
because they match the blacklist from APT::NeverAutoRemove.
This should help troubleshoot cases where autoremove is not proposing
removal of packages expected to be up for removal.
Tested manually with `apt-get autoremove -o Debug::pkgAutoRemove=yes`.
|
|
Downloading from the same mirror we got a Release file from makes sense
for non-unique URIs as their content changes between mirror states, but
if we ask for an index via by-hash we can be sure that we either get the
file we wanted or a 404 for which we can perform a fallback, which
allows us to pull indexes from different mirrors in parallel.
|
|
Individual items shouldn't concern themselves with these alternative
locations; we can deal with this more efficiently within the
infrastructure created for other alternative URIs now, avoiding the need
to implement this in each item.
|
|
If we got a file but it produced a hash error, mismatched size or
similar we shouldn't fall back to alternative URIs as they would likely
result in the same error. If we can, we should instead use another mirror.
We used to be a lot stricter by stopping all tries for this file if we
got a non-404 (or a hash-based) failure, but that is too harsh as we
really want to try other mirrors (if we have them) in the hope that they
have the expected and correct files.
|
|
With the advent of compressed files and especially with in-memory
post-processed files the simple assumptions made in IsOk are no longer
true. Worse, they are at best duplicates of checks performed earlier by
the cache generation (and validation), and the method isn't used in many
places. It is hence best to simply get rid of these calls instead of
trying to fix them.
|
|
It is easier to prepend our fields, but that results in confusion for
tools working on the records generated this way as they don't start with
the usual "Package" – that shouldn't be a problem in theory, but in
practice e.g. "apt-cache show" displays these records directly to the
user, who will probably be more confused by it than tools are.
|
|
IP addresses are by definition not domains, so in the best case the
requests will just fail; we can do better than that on our own.
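For illustration only – the commit doesn't show the code, so this is a hedged
sketch of how one might detect such literals up front instead of letting a
DNS-based request fail (the function name and the bracket handling are
assumptions):

  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <string>

  // Returns true if Host is a plain IPv4 or IPv6 literal (brackets around
  // IPv6 literals, as used in URIs, are not stripped in this sketch).
  static bool HostIsIPAddress(std::string const &Host)
  {
     in_addr V4;
     in6_addr V6;
     return inet_pton(AF_INET, Host.c_str(), &V4) == 1 ||
            inet_pton(AF_INET6, Host.c_str(), &V6) == 1;
  }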
|
|
Prompted-by: Jakub Wilk <jwilk@debian.org>
|
|
Reported-By: codespell & spellintian
Gbp-Dch: Ignore
|
|
Reported-By: gcc -Wdouble-promotion
Gbp-Dch: Ignore
|
|
Only use zstd defined variables if zstd was found.
|
|
pu/zstd
See merge request apt-team/apt!8
|
|
This implements support for multi-frame files while keeping
error checking for unexpected EOF working correctly. Files
with multiple frames are generated by pzstd, for example.
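A hedged sketch of the general pattern with plain libzstd (apt's
implementation sits behind its own file abstraction and looks different):
ZSTD_decompressStream() returns 0 exactly at a frame boundary, so looping
across frames is natural and an EOF anywhere else is an unexpected truncation.

  #include <zstd.h>
  #include <cstdio>
  #include <vector>

  // Decompress a (possibly multi-frame) zstd stream from 'in' to 'out'.
  // Returns false on decoding errors or if the input ends mid-frame.
  static bool DecompressAllFrames(std::FILE *in, std::FILE *out)
  {
     ZSTD_DStream * const ds = ZSTD_createDStream();
     ZSTD_initDStream(ds);
     std::vector<char> inBuf(ZSTD_DStreamInSize()), outBuf(ZSTD_DStreamOutSize());
     size_t ret = 0; // 0 == currently at a frame boundary
     for (;;)
     {
        size_t const got = std::fread(inBuf.data(), 1, inBuf.size(), in);
        if (got == 0)
           break;
        ZSTD_inBuffer ib { inBuf.data(), got, 0 };
        while (ib.pos < ib.size)
        {
           ZSTD_outBuffer ob { outBuf.data(), outBuf.size(), 0 };
           ret = ZSTD_decompressStream(ds, &ob, &ib);
           if (ZSTD_isError(ret))
           {
              ZSTD_freeDStream(ds);
              return false;
           }
           std::fwrite(outBuf.data(), 1, ob.pos, out);
        }
     }
     ZSTD_freeDStream(ds);
     return ret == 0; // unexpected EOF if we stopped in the middle of a frame
  }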
|
|
This is a simplified variant of the code for xz, adapted to support
multiple digit integers.
|
|
Reported-By: lintian spelling-error-in-manpage
|
|
Shipping 1.6 with major 12 would not allow us to update 1.5.y
in a different way than 1.6.y if we have to without resorting
to minor version hacks. Let's just bump the major instead.
|
|
We just enabled https on changelogs.ubuntu.com, let's use it.
|
|
zstd is a compression algorithm developed by Facebook. At level 19,
it is about 6% worse in size than xz -6, but decompression is multiple
times faster, saving about 40% install time, especially with eatmydata
on cloud instances.
|
|
Check that Date of Release file is not in the future
See merge request apt-team/apt!3
|
|
By restricting the Date field to be in the past, an attacker cannot
just create a repository from the future that would be accepted as
a valid update for a repository.
This check can be disabled by setting Acquire::Check-Date to false. This
will also disable Check-Valid-Until and any future date-related checking,
if any - the option means: "my computer's date cannot be trusted."
Modify the tests to allow repositories to be up to 10 hours in the
future, so we can keep using hours there to simulate time changes.
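The essence of the new guard, as a hedged sketch with illustrative names
(apt's actual check lives in the metadata verification code and is wired to
the option differently):

  #include <ctime>

  // Reject a Release file whose Date lies in the future, unless the user
  // declared their clock untrustworthy via Acquire::Check-Date=false.
  static bool ReleaseDateAcceptable(time_t const ReleaseDate, bool const CheckDate)
  {
     if (CheckDate == false)
        return true; // "my computer's date cannot be trusted"
     return ReleaseDate <= std::time(nullptr);
  }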
|
|
The interesting takeaway here is perhaps that 'chmod +w' is affected by
the umask – obvious in hindsight of course. The usual setup helps with
hiding that applying that recursively to all directories (and files)
isn't correct. Ensuring files will not be stored with the wrong
permissions even in strange umask contexts is trivial in comparison.
Fixing the test also highlighted that it wasn't bulletproof as apt will
automatically fix the permissions of the directories it works with, so
for this test we actually need to introduce a shortcut in the code.
Reported-By: Ubuntu autopkgtest CI
|
|
This was broken by a refactoring in 1adcf56bec7d2127d83aa423916639740fe8e586
which iterated over getCompressorExtensions() instead of the compressors and
using their extension field. getCompressorExtensions() does not contain the
empty extension for uncompressed files, though, and hence this was broken.
LP: #1746807
|
|
apt 1.6~alpha6 introduced aux requests to revamp the implementation of
a-t-mirror. This already included the potential of running as non-root,
but the detection wasn't complete, resulting in errors, or could produce
spurious warnings along the way if the directory didn't exist yet.
References: ef9677831f62a1554a888ebc7b162517d7881116
Closes: 887624
|
|
Allow specifying an alternative path to the InRelease file, so
you can have multiple versions of a repository, for example.
Enabling this option disables fallback to Release and Release.gpg,
so setting it to InRelease can be used to ensure that only that
will be tried.
We add two test cases: One for checking that it works, and another
for checking that the fallback does not happen.
Closes: #886745
|
|
The summary line sounds a bit much: what we end up doing is just adding
two more guards before using results which should always be valid™.
That these values aren't valid is likely a bug in itself somewhere, but
that is no reason for crashing.
|
|
The appended "partial" should not be translated, but some translations
got this wrong and now that there is also "auxfiles" we can just fix
that problem by hiding these untranslatables from the translators.
Gbp-Dch: Ignore
|
|
Allowing a method to request work from other methods is a powerful
capability which could be misused or exploited, so to slightly limit
the surface let methods opt in to this capability on startup.
|
|
Embedding an entire acquire stack and HTTP logic in the mirror method
made it rather heavyweight and fragile. This reimplementation goes the
other way by doing only the bare minimum in the method itself and instead
redirecting the actual download of files to their proper methods.
The reimplementation drops the (in the real world) unused query-string
feature as it isn't really implementable in the new architecture.
|
|
A method may need a file to operate: e.g. mirror needs to get a list
of mirrors before it can redirect the actual requests to them. That
could easily be solved by moving the logic into libapt directly, but by
allowing a method to request other methods to do something we can keep
this logic contained in the method and also allow e.g. methods which
perform binary patching or similar things.
Previously they would need to implement their own acquire system inside
the existing one, which in all likelihood will not support the same
features and methods nor operate with similar security compared to what
we have already running 'above' the requesting method. That said, to
avoid methods producing conflicts with "proper" files we are downloading,
a new directory is introduced to keep the auxiliary files in.
[The message magic number 351 is a tribute to the German Grundgesetz
article 35 paragraph 1 which defines that all authorities of the
state(s) help each other on request.]
|
|
The format isn't too hard to get right, but it gets funny with multiline
fields (which we don't really have yet) and it's just easier to deal with
it once and for all in a way that can be reused for more messages later.
|
|
Commit 89c4c588b275 ("fix from David Kalnischkies for the InRelease gpg
verification code (LP: #784473)") amended verification of cleartext
signatures by a check whether the file to be verified actually starts
with "-----BEGIN PGP SIGNED MESSAGE-----\n".
However, cleartext signed InRelease files have been found in the wild
which use \r\n as line ending for this armor header line, presumably
generated by a Windows PGP client. Such files are incorrectly deemed
unsigned and result in the following (misleading) error:
Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?)
RFC 4880 specifies in 6.2 Forming ASCII Armor:
That is to say, there is always a line ending preceding the
starting five dashes, and following the ending five dashes. The
header lines, therefore, MUST start at the beginning of a line, and
MUST NOT have text other than whitespace following them on the same
line.
RFC 4880 does not seem to specify whether LF or CRLF is used as line
ending for armor headers, but CR is generally considered whitespace
(e.g. "man perlrecharclass"), hence using CRLF is legal even under
the assumption that LF must be used.
SplitClearSignedFile() already strips whitespace (including CR) at the
end of the line before matching the string, so StartsWithGPGClearTextSignature()
is adapted to ignore it in the same way. As the former method is responsible
for what apt nowadays actually ends up parsing as signed/unsigned, this
change has no implications for security.
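A minimal sketch of the kind of whitespace-tolerant comparison described
above (not apt's actual StartsWithGPGClearTextSignature, whose interface
differs):

  #include <cctype>
  #include <string>

  // Accept the cleartext armor header whether it ends in "\n" or "\r\n"
  // by stripping trailing whitespace before comparing.
  static bool IsClearSignedHeaderLine(std::string Line)
  {
     while (Line.empty() == false &&
            std::isspace(static_cast<unsigned char>(Line.back())) != 0)
        Line.pop_back(); // removes '\n' and a potential '\r'
     return Line == "-----BEGIN PGP SIGNED MESSAGE-----";
  }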
Thanks: Lukas Wunner for detailed report & initial patch!
References: 89c4c588b275d098af33f36eeddea6fd75068342
Closes: 884922
|
|
If the cache needs to grow to make room for inserting volatile files like
deb files into the cache, we were remapping null pointers, making them
non-null pointers in the process and causing trouble later on.
Only the pointer to the current Release file can currently legally be a
null pointer as volatile files have no Release file they belong to, but
for safety the pointer to the current Packages file is equally guarded.
The option APT::Cache-Start can be used to work around this problem.
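The guard in a nutshell, as an illustrative template (apt's cache uses its
own remapping machinery, so this is only a sketch of the idea):

  // When the cache is remapped after growing, only rebase pointers that are
  // actually set; a null pointer must remain null.
  template <typename T>
  static void RebaseIfSet(T *&Ptr, char const * const OldBase, char * const NewBase)
  {
     if (Ptr != nullptr)
        Ptr = reinterpret_cast<T *>(NewBase +
              (reinterpret_cast<char const *>(Ptr) - OldBase));
  }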
Reported-By: Mattia Rizzolo on IRC
|
|
Earlier gcc versions used to complain that you should add them, although
there isn't a lot of point to it if you think about it, but now gcc (>= 8)
complains about the attribute being present.
warning: ‘pure’ attribute on function returning ‘void’ [-Wattributes]
Reported-By: gcc -Wattributes
Gbp-Dch: Ignore
|
|
For deb files we always supported falling back from one server to the
other if one failed to download the deb, but that was hardwired in the
handling of this specific item. Moving this alongside the retry
infrastructure we can implement it for all items and allow methods to
use this as well by providing additional URIs in a redirect.
|
|
We have quite a bit of metadata available for the files we acquire, but
the methods weren't told about it and got just the URI. That is indeed
fine for most, but to avoid methods trying to parse the metadata out of
the provided URIs (and failing horribly in edge cases) we can just as well
be nice and tell them the details directly.
|
|
Moving the Retry-implementation from individual items to the worker
implementation not only gives every file retry capability instead of
just a selected few but also avoids needing to implement it in each item
(incorrectly).
|
|
LookupTag is a little helper to deal with rfc822-style strings we use in
apt e.g. to pass acquire messages around for cases in which our usual
rfc822 parser is too heavy. All the fields it had to deal with so far
were single line, but if they aren't it should really produce the right
output and not just return the first line. Error messages are a prime
candidate for becoming multiline as at the moment they are stripped of
potential newlines due to the previous insufficiency of LookupTag.
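To make the multiline case concrete, here is a hedged sketch of such a
lookup (apt's real LookupTag is more featureful, e.g. it matches tags
case-insensitively): continuation lines are marked by a leading space or
tab, deb822-style, and should be folded into the returned value.

  #include <sstream>
  #include <string>

  // Return the value of Tag from an rfc822-style message, including
  // continuation lines (lines starting with a space or tab).
  static std::string LookupTagSketch(std::string const &Message, std::string const &Tag)
  {
     std::istringstream msg(Message);
     std::string const Prefix = Tag + ":";
     std::string line, value;
     bool inField = false;
     while (std::getline(msg, line))
     {
        if (inField && line.empty() == false && (line[0] == ' ' || line[0] == '\t'))
        {
           value += "\n" + line.substr(1); // continuation line belongs to the field
           continue;
        }
        inField = false;
        if (line.compare(0, Prefix.size(), Prefix) == 0)
        {
           value = line.substr(Prefix.size());
           while (value.empty() == false && value.front() == ' ')
              value.erase(0, 1); // trim the space after the colon
           inField = true;
        }
     }
     return value;
  }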
|
|
We have no speed problem with handling floats/doubles in our progress
handling, but that shouldn't prevent us from cleaning up the handling
slightly to avoid unclean casting to ints.
Reported-By: gcc -Wdouble-promotion -Wold-style-cast
|
|
The casts are useless, but the reports show some places where we can
actually improve the code by replacing them with better alternatives like
converting whatever int type into a string instead of casting to a
specific one which might in the future be too small.
|
|
gcc was warning about ignored type qualifiers for all of them due to the
last 'const', so dropping that and converting to static_cast in the
process removes the (here harmless) warning, so that real issues aren't
hidden among such warnings later on.
Reported-By: gcc
Gbp-Dch: Ignore
|
|
gcc has problems understanding this construct and additionally thinks it
would produce multiple lines and such, so keeping it isn't really
worth it for the few instances we have: We can just write the long form
there, which works better.
Reported-By: gcc
Gbp-Dch: Ignore
|