|
LP: #1812696
|
|
This new field allows a repository to declare that access to
packages requires authorization. The current implementation will
set the pin to -32768 if no authorization has been provided in
the auth.conf(.d) files.
This implementation is suboptimal in two aspects:
(1) A repository should behave more like NotSource repositories.
(2) We only have the host name for the repository; we cannot use
paths yet.
We can fix both of those after an ABI break.
The code also adds a check to acquire-item.cc to not use the
specified repository as a download source, mimicking NotSource.
(cherry picked from commit c2b9b0489538fed4770515bd8853a960b13a2618)
LP: #1814727
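As a rough illustration of the pin clamping described above (a sketch only; RepoHasCredentials and EffectivePin are made-up names, not apt's API), the idea is to force the pin to the minimum value of short whenever an authorization-required repository has no matching entry in auth.conf(.d):
```
#include <iostream>
#include <limits>
#include <string>

// Hypothetical lookup standing in for scanning auth.conf(.d) for the host.
static bool RepoHasCredentials(std::string const &host) {
   return host == "private.example.org";
}

// If the repository requires authorization and no credentials are known,
// clamp the pin to the minimum of short so it never wins as a candidate.
static short EffectivePin(std::string const &host, bool requiresAuth, short configuredPin) {
   if (requiresAuth && !RepoHasCredentials(host))
      return std::numeric_limits<short>::min();   // -32768 with a 16-bit short
   return configuredPin;
}

int main() {
   std::cout << EffectivePin("private.example.org", true, 500) << '\n'; // 500
   std::cout << EffectivePin("other.example.org", true, 500) << '\n';   // -32768
}
```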
|
|
This allows disabling a repository by pinning it to 'never',
which is internally translated to a value of -32768 (or whatever
the minimum of short is).
This overrides any other pin for that repository. It can be used
to make sure certain sources are never used; for example, in
unattended-upgrades.
To prevent semantic changes to existing files, we substitute
min + 1 for every Pin-Priority: <min>. This is a temporary
solution, as we are waiting for an ABI break.
To add pins with that value, the special Pin-Priority
"never" may be used for now. It's unclear if that will
persist, or if the interface will change eventually.
(cherry picked from commit 8bb2a91a070170d7d8e71206d1c66a26809bdbc3)
LP: #1814727
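A minimal sketch of the value mapping described above (ParsePinPriority is a made-up helper, not apt's preferences parser): 'never' maps to the minimum of short, while a literal <min> written in an existing file is bumped to min + 1 so only the new keyword produces the reserved value:
```
#include <iostream>
#include <limits>
#include <string>

// Sketch: translate a Pin-Priority value into the internal short priority.
static short ParsePinPriority(std::string const &value) {
   short const Min = std::numeric_limits<short>::min();
   if (value == "never")
      return Min;                // reserved value: disables the repository
   short priority = static_cast<short>(std::stoi(value));
   if (priority == Min)
      return Min + 1;            // keep existing files semantically unchanged
   return priority;
}

int main() {
   std::cout << ParsePinPriority("never") << '\n';   // -32768
   std::cout << ParsePinPriority("-32768") << '\n';  // -32767
   std::cout << ParsePinPriority("500") << '\n';     // 500
}
```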
|
|
While running our CI we noticed that sometimes we see an error
from the new json hooks code. The error message is:
```
E: Could not read response to hello message from hook [ ! -f /usr/bin/snap ] || /usr/bin/snap advise-snap --from-apt 2>/dev/null || true: Broken pipe
```
when purging the snapd package, which provides the hook. This indicates
that we should probably also not consider EPIPE an error (just like
we do for ECONNRESET). This PR does exactly that.
(cherry picked from commit 6af48a7f83540c807be1d2777470d23e6b260eb8)
LP: #1814543
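The gist of the change can be sketched with a generic helper (HookWentAway is a made-up name; the real check lives in the JSON hook code): an I/O error whose errno is ECONNRESET or EPIPE just means the hook closed its end and is not reported:
```
#include <cerrno>

// Sketch: decide whether an errno from talking to a JSON hook merely means
// the hook closed its end (or exited) and should not be treated as an error.
static bool HookWentAway(int err) {
   return err == ECONNRESET   // peer reset the connection
       || err == EPIPE;       // peer closed the pipe before we were done
}

int main() {
   return HookWentAway(EPIPE) ? 0 : 1;
}
```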
|
|
|
|
This fixes a security issue that can be exploited to inject arbitrary debs
or other files into a signed repository as follows:
(1) Server sends a redirect to somewhere%0a<headers for the apt method> (where %0a is
\n encoded)
(2) the apt method decodes the redirect (because the method encodes the URLs before
sending them out), writing something like
somewhere\n
<headers>
into its output
(3) apt then uses the injected headers for validation purposes.
Regression-Of: c34ea12ad509cb34c954ed574a301c3cbede55ec
LP: #1812353
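To picture the class of problem (a hedged sketch, not the actual patch): because the method's output is line-based, a decoded redirect target containing CR/LF amounts to injecting protocol lines, so such targets have to be rejected or kept encoded:
```
#include <iostream>
#include <string>

// Sketch: a decoded redirect target that still contains CR/LF would let a
// malicious server inject extra protocol lines ("headers") into the method's
// output, so it must be rejected (or left encoded).
static bool SafeRedirectTarget(std::string const &uri) {
   return uri.find('\n') == std::string::npos &&
          uri.find('\r') == std::string::npos;
}

int main() {
   std::cout << SafeRedirectTarget("http://example.org/pool/foo.deb") << '\n';    // 1
   std::cout << SafeRedirectTarget("http://example.org/x\n201 URI Done") << '\n'; // 0
}
```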
|
|
|
|
|
|
|
|
This is the cosmic branch.
|
|
This gives more protection to people whose kernel metapackages
are accidentally removed.
LP: #1787460
(cherry picked from commit a4b0ce5a4f5068f780b3aa94473230b5093a837d)
|
|
This allows us to install matching auth files for sources.list.d
files, for example, which is very useful.
This converts aptmethod's authfd from one FileFd to a vector of
pointers to FileFd, as FileFd cannot be copied, and move operators
are hard.
(cherry picked from commit bbfcc05c1978decd28df9681fd73e2a7d9a8c2a5)
LP: #1811120
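The container change can be sketched with a stand-in non-copyable type (FileFd itself is apt-internal, so a placeholder struct is used here): a vector of std::unique_ptr works even when the element type is neither copyable nor movable:
```
#include <memory>
#include <vector>

// Stand-in for a non-copyable file handle like FileFd.
struct Handle {
   Handle() = default;
   Handle(Handle const &) = delete;
   Handle &operator=(Handle const &) = delete;
};

int main() {
   // Holding pointers avoids any need for copy or move support in Handle.
   std::vector<std::unique_ptr<Handle>> authfds;
   authfds.push_back(std::make_unique<Handle>());
   authfds.push_back(std::make_unique<Handle>());
   return authfds.size() == 2 ? 0 : 1;
}
```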
|
|
|
|
Pass -i to git log, so "Release foo" is detected as well, not just
"release foo", and also handle the rename of Git-Dch to Gbp-Dch.
|
|
Some post-invoke scripts install packages, which fails because
the environment variable is not set. This sets the variable for
all three kinds of scripts {pre,post-}invoke and pre-install-pkgs,
but we will only allow post-invoke at a later time.
Gbp-Dch: full
|
|
See merge request apt-team/apt!29
[jak@d.o: Also adjust translations, provide better subject]
|
|
Including a block-element like informalexample in a para is legal, but
the documentation of the para tag hints that some processing systems may
have difficulties handling this – so let's just move it out of the block
and be happy as it is (again?) displayed.
Closes: #909712
|
|
pkgCacheFile's destructor unlocks the system, which is confusing
if you did not open the cachefile with WithLock set. Create a private
data instance that holds the value of WithLock.
This regression was introduced in commit b2e465d6d32d2dc884f58b94acb7e35f671a87fe
("Join with aliencode", Author: jgg, Date: 2001-02-20 07:03:16 GMT)
by replacing a "Lock" member that was only initialized when the lock
was taken by calls to Lock, UnLock; with the latter also taking place
if the former did not occur.
Regression-Of: b2e465d6d32d2dc884f58b94acb7e35f671a87fe
LP: #1794053
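The shape of the fix in generic terms (hypothetical names; the real change lives in pkgCacheFile's private data): remember whether this instance actually took the lock and only unlock in that case:
```
// Sketch: only unlock in the destructor if this instance took the lock itself.
struct CacheFileSketch {
   explicit CacheFileSketch(bool WithLock) : d_locked(false) {
      if (WithLock) {
         LockSystem();
         d_locked = true;
      }
   }
   ~CacheFileSketch() {
      if (d_locked)          // previously the unlock happened unconditionally
         UnlockSystem();
   }
private:
   bool d_locked;            // stored in private data in the real fix
   static void LockSystem() {}    // stand-ins for the system lock calls
   static void UnlockSystem() {}
};

int main() {
   CacheFileSketch withLock(true);     // unlocks on destruction
   CacheFileSketch withoutLock(false); // no longer releases someone else's lock
}
```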
|
|
A recent change to use std::chrono inadvertently replaced the
difference of new usec - old usec with new sec - old usec,
which is obviously wrong.
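For clarity, the intended computation in std::chrono terms (a generic sketch, not the patched code): subtract the two time points first and convert the resulting duration, instead of mixing values of different units:
```
#include <chrono>
#include <iostream>

int main() {
   using namespace std::chrono;
   auto const old_tp = steady_clock::now();
   auto const new_tp = old_tp + milliseconds(1500);

   // Correct: take the difference of the time points, then convert it.
   auto const delta_usec = duration_cast<microseconds>(new_tp - old_tp).count();
   std::cout << "elapsed: " << delta_usec << "us\n"; // 1500000us

   // The bug pattern was subtracting the old microsecond value from the new
   // *second* value, which produces a meaningless number.
}
```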
|
|
|
|
The implementation of "apt-cache show" (not "apt show") incorrectly
resets the currently used parser if the record itself and the
description to show come from the same file (as is the case if no
Translation-* files are available e.g. after debootstrap).
The code is more complex than you would hope in order to support some rather
unusual setups involving Descriptions and their translations, as tested
for by ./test-bug-712435-missing-descriptions; otherwise this could
be a one-line change.
Regression-Of: bf53f39c9a0221b670ffff74053ed36fc502d5a0
Closes: #909155
|
|
|
|
http: Stop pipeline after close only if it was not filled before
See merge request apt-team/apt!25
|
|
It is perfectly valid behavior for a server to respond with
Connection: close eventually, even when pipelining. Turning
off pipelining due to that is wrong. For example, some Ubuntu
mirrors close the connection after 101 requests. If I have
more packages to install, only the first 101 would benefit
from pipelining.
This commit introduces a new check to only turn off pipelining
for future connections if the pipeline for this connection did
not have 3 successful fetches before; that should work quite well to
detect broken server/proxy combinations like in bug 832113.
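A condensed sketch of that decision (made-up member names, not the http method's real state): a close only disables pipelining for future connections if this connection managed fewer than 3 successful pipelined fetches:
```
// Sketch: decide whether a "Connection: close" should disable pipelining
// for future connections to this server.
struct ServerStateSketch {
   unsigned long PipelinedOk = 0;   // successful fetches on this connection
   bool PipelineAllowed = true;     // remembered for future connections

   void ConnectionClosed() {
      // A close after many successful pipelined requests is normal server
      // behaviour (e.g. a per-connection request limit), so keep pipelining.
      if (PipelinedOk < 3)
         PipelineAllowed = false;   // likely a broken server/proxy combination
   }
};

int main() {
   ServerStateSketch s;
   s.PipelinedOk = 101;              // e.g. a mirror closing after 101 requests
   s.ConnectionClosed();
   return s.PipelineAllowed ? 0 : 1; // pipelining stays enabled
}
```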
|
|
Process all of --status-fd and don't expect duplicate status msg
See merge request apt-team/apt!26
|
|
The uniqueness in std::set containers is ensured by the ordering
operator we provide, but it was not considering that different versions
can have the same description, like the different architectures of a
version of a package.
Closes: #908218
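The general pitfall can be shown with a small comparator sketch (placeholder fields, not apt's description records): if two distinct elements compare equivalent, std::set silently keeps only one of them, so the ordering needs a tie-breaking component such as the architecture:
```
#include <iostream>
#include <set>
#include <string>

struct Desc {
   std::string md5;    // identical for the descriptions of two versions
   std::string arch;   // differs between the versions of a package
};

// Sketch: ordering by md5 alone would merge the amd64 and i386 entries;
// adding arch as a tiebreaker keeps both.
struct DescOrder {
   bool operator()(Desc const &a, Desc const &b) const {
      if (a.md5 != b.md5)
         return a.md5 < b.md5;
      return a.arch < b.arch;
   }
};

int main() {
   std::set<Desc, DescOrder> descs;
   descs.insert({"abc123", "amd64"});
   descs.insert({"abc123", "i386"});
   std::cout << descs.size() << '\n'; // 2, not 1
}
```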
|
|
We are seeing 'processing' messages from dpkg first, so it makes sense
to translate them to "Preparing" messages instead of using "Installing"
and co to override these shortly after with the "Preparing" messages.
The difference isn't all too visible as later messages tend to linger far
longer in the display than the earlier ones, but at least in a listing it
seems more logical.
|
|
The progress reporting relies on parsing the status reports of
dpkg which used to repeat being in the same state multiple times
in the same run, but by fixing #365921 it will stop doing so.
The problem is in theory just with 'config-files' in case we do purge as
this (can) do remove + purge in one step, but we remove this also for
the unpack + configure combination although we handle these currently
in two independent dpkg calls.
|
|
Exiting the processing loop as soon as the dpkg process finishes might
leave status-fd lines unprocessed which wasn't much of a problem in the
past as the progress would just be slightly off, but now that we use the
information also for skipping already done tasks and generate warnings
if we didn't see all expected messages we should make sure we see them
all. We still need to exit "early" if dpkg exited unsuccessfully/crashed
though as the (remaining) status lines we get could be incomplete.
|
|
It is an uphill battle to "reset" the environment to a clean state
without making it needlessly hard to use 'good' environment variables,
so we just try a little harder here without really trying for
completeness.
Gbp-Dch: Ignore
|
|
Reported-By: Guillem Jover <guillem@debian.org>
Gbp-Dch: Ignore
|
|
No user-visible change as it affects mostly code comments and
not a single error message, manpage or similar.
Reported-By: codespell & spellintian
Gbp-Dch: Ignore
|
|
cppcheck reports: (error) Iterator 't' used after element has been erased.
The loop is actually fashioned to deal with this (not in the most
efficient way, but in the simplest, and speed isn't really a concern here)
IF this codepath had a "break" at the end… so I added one.
Note that the tests aren't failing before (and hopefully after) the
change as the undefined behavior we encounter is too stable.
Thanks: David Binderman for reporting
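The pattern in question looks roughly like this (generic container code, not the original loop): after erasing inside the scan the iterator is invalid, so the inner loop has to stop and the scan restarts from the top:
```
#include <iostream>
#include <vector>

int main() {
   std::vector<int> v{1, 2, 3, 2, 4, 2};
   bool erased = true;
   while (erased) {                 // restart the scan after each erase
      erased = false;
      for (auto t = v.begin(); t != v.end(); ++t) {
         if (*t == 2) {
            v.erase(t);             // invalidates t (and everything after it)
            erased = true;
            break;                  // the missing break: never touch t again
         }
      }
   }
   for (int i : v)
      std::cout << i << ' ';        // 1 3 4
   std::cout << '\n';
}
```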
|
|
Closes: #907481
|
|
|
|
APT in 1.6 saw me rewriting the mirror:// transport method, which works
comparably to the decommissioned httpredir.d.o, "just" that apt requests
a mirror list and performs all the redirections internally with all the
bells like parallel download and automatic fallback (more details in the
apt-transport-mirror manpage included in the 1.6 release).
The automatic fallback is the problem here: The intent is that if a file
fails to be downloaded (e.g. because the mirror is offline, broken,
out-of-sync, …) instead of erroring out the next mirror in the list is
contacted for a retry of the download.
Internally the acquire process of an InRelease file (works with the
Release/Release.gpg pair, too) happens in steps: 1) download file and 2)
verify file, both handled as URL requests passed around. Due to an
oversight the fallbacks for the first step are still active for the
second step, so that the successful download from another mirror stands
in for the failed verification… *facepalm*
Note that the attacker can not judge by the request arriving for the
InRelease file if the user is using the mirror method or not. If the entire
traffic is observed, Eve might be able to observe the request for
a mirror list, but that might or might not be telling if following
requests for InRelease files will be based on that list or for another
sources.list entry not using mirror (Users also have the option to have
the mirror list locally (via e.g. mirror+file://) instead of on a remote
host). If the user isn't using mirror:// for this InRelease file apt
will fail very visibly as intended.
(The mirror list needs to include at least two mirrors and to work
reliably the attacker needs to be able to MITM all mirrors in the list.
For remotely accessed mirror lists this is no limitation as the attacker
is in full control of the file in that case)
Fixed by clearing the alternatives after a step completes (and moving a pimpl
class further to the top to make that valid compilable code). mirror://
is at the moment the only method using this code infrastructure (for all
others this set is already empty) and the only method-independent user
so far is the download of deb files, but those are downloaded and
verified in a single step; so there shouldn't be much opportunity for
regression here even though a central code area is changed.
Upgrade instructions: Given all apt-based frontends are affected, even
additional restrictions like signed-by are bypassed and the attack in
progress is hardly visible in the progress reporting of an update
operation (the InRelease file is marked "Ign", but no fallback to
"Release/Release.gpg" is happening) and leaves no trace (expect files
downloaded from the attackers repository of course) the best course of
action might be to change the sources.list to not use the mirror family
of transports ({tor+,…}mirror{,+{http{,s},file,…}}) until a fixed
version of the src:apt packages is installed.
Regression-Of: 355e1aceac1dd05c4c7daf3420b09bd860fd169d,
57fa854e4cdb060e87ca265abd5a83364f9fa681
LP: #1787752
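In very condensed form the fix amounts to something like this (ItemSketch and its members are made up; apt's acquire code is far more involved): the fallback mirrors collected for one step must not survive into the next step of the same item:
```
#include <string>
#include <vector>

// Sketch: an item moves through steps (download, then verify); alternative
// URIs collected for fallback only apply to the step they were queued for.
struct ItemSketch {
   std::vector<std::string> AlternativeURIs;  // fallback mirrors for this step

   void StepDone() {
      // The fix: once a step completed, drop the alternatives so a failure in
      // the *next* step (verification) cannot "fall back" to another mirror
      // and thereby mask the failure.
      AlternativeURIs.clear();
   }
};

int main() {
   ItemSketch release;
   release.AlternativeURIs = {"http://mirror1.example/InRelease",
                              "http://mirror2.example/InRelease"};
   release.StepDone();   // download finished
   // the verification step now has no mirrors left to silently fall back to
   return release.AlternativeURIs.empty() ? 0 : 1;
}
```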
|
|
|
|
gpg's DETAILS documentation file declares that GOODSIG could report a keyid
or a fingerprint since gpg2, but for the time being it is still keyid
only. Who knows if that will ever change, as that feels like an interface
break with dangerous security implications, but let's be better safe than
sorry, especially as the code dealing with signed-by keyids is prepared
for this already. The code is still rewritten so that all of it uses the
same code for this type of problem.
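The compatibility being prepared for can be sketched with a stand-alone helper (SameKey is made up; the fingerprint below is an arbitrary example value): a long keyid is just the last 16 hex digits of the fingerprint, so matching on the suffix accepts either form:
```
#include <iostream>
#include <string>

// Sketch: accept either a full fingerprint or a (long) keyid, the latter
// being the last 16 hex digits of the former.
static bool SameKey(std::string const &reported, std::string const &wanted) {
   std::string const &longer  = reported.size() >= wanted.size() ? reported : wanted;
   std::string const &shorter = reported.size() >= wanted.size() ? wanted : reported;
   if (shorter.size() < 16)
      return false;
   return longer.compare(longer.size() - shorter.size(), shorter.size(), shorter) == 0;
}

int main() {
   std::string const fpr   = "34A8E9D18DB320B407FA2BC8245C7138E9BD1913"; // example only
   std::string const keyid = "245C7138E9BD1913";
   std::cout << SameKey(keyid, fpr) << '\n'; // 1
}
```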
|
|
The 1.7 series rework of show started in
bf53f39c9a0221b670ffff74053ed36fc502d5a0 resolved the issue already,
but it's always a good idea to at least bring the tests along so
that we hopefully do not regress in the future with another rewrite.
Tests: #905527
Gbp-Dch: Ignore
|
|
Reviewed-by: Mo Zhou <cdluminate@gmail.com>
Closes: #903695
|
|
If multiple threads act on requests (like if the connection comes from a
web browser) a thread might request the supported compressors while
another thread is still working on creating the list to be stored in the
static cache variable.
As the price to pay for atomics and co seems too high for the fringe
use case of manual usage of aptwebserver, the patch just makes a call to
generate the list while still single threaded.
Gbp-Dch: Ignore
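The shape of the workaround in generic terms (the compressor list itself is aptwebserver-internal, so a placeholder is used): touch the lazily built static once while still single threaded, so its first, unsynchronized initialization can never happen concurrently:
```
#include <string>
#include <vector>

// Sketch: a lazily filled static cache that is not safe to fill concurrently.
static std::vector<std::string> const &SupportedCompressors() {
   static std::vector<std::string> list;
   if (list.empty())
      list = {"gzip", "xz", "lz4"};   // imagine an expensive detection here
   return list;
}

int main() {
   // Workaround: build the list once before any threads exist, so request
   // handlers started later only ever read the fully constructed cache.
   SupportedCompressors();
   // ... spawn request-handling threads here ...
}
```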
|
|
Completely pointless as it makes no difference for apt,
but copying the file to other projects becomes a lot easier.
Gbp-Dch: Ignore
|
|
We forgot to set the variable for the selection changes. Let's
set it for that and some other dpkg calls.
Regression-Of: c2c8b4787b0882234ba2772ec7513afbf97b563a
|
|
Add support for dpkg frontend lock
See merge request apt-team/apt!11
|
|
The dpkg frontend lock is a lock that dpkg tries to acquire,
except if the frontend has already acquired it.
This fixes a race condition in the install command where the
dpkg lock is not held for a short period of time between
different dpkg invocations.
For this reason we also define an environment variable
DPKG_FRONTEND_LOCKED for dpkg invocations so dpkg knows
not to try to acquire the frontend lock because it's held
by a parent process.
We can set DPKG_FRONTEND_LOCKED only if the frontend lock
really is held; that is, if our lock count is greater than 0
- otherwise an apt client not using the LockInner family of
functions would run dpkg without the frontend lock set, but
with DPKG_FRONTEND_LOCKED set. Such a process has a weaker
guarantee: Because dpkg would not lock the frontend lock
either, the process is prone to the existing races, and,
more importantly, so is a new style process.
Closes: #869546
[fixups: fix error messages, add public IsLocked() method, and
make {Un,}LockInner return an error on !debSystem]
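A reduced sketch of the environment handling (the lock-count parameter is a stand-in for debSystem's counter; setenv/unsetenv are plain POSIX): the variable is only exported to dpkg when the frontend lock really is held:
```
#include <cstdlib>

// Sketch: before running dpkg, tell it the frontend lock is already held,
// but only if our own lock count says we actually hold it.
static void PrepareDpkgEnvironment(int frontendLockCount) {
   if (frontendLockCount > 0)
      setenv("DPKG_FRONTEND_LOCKED", "true", 1);
   else
      unsetenv("DPKG_FRONTEND_LOCKED"); // dpkg must take the lock itself then
}

int main() {
   int const lockCount = 1;            // stand-in for the real counter
   PrepareDpkgEnvironment(lockCount);
   // ... fork() and exec() of dpkg would follow here ...
}
```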
|
|
Add trailing newline to output of edit-sources.
See merge request apt-team/apt!22
|
|
|
|
The random_device fails if not enough entropy is available. We do
not need high-quality entropy here, though, so let's switch to a
seed based on the current time in nanoseconds, XORed with the PID.
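A sketch of that kind of seeding (generic C++, not the exact patch): nanoseconds since the clock's epoch XORed with the process id, fed to a std::mt19937:
```
#include <chrono>
#include <iostream>
#include <random>
#include <unistd.h>

int main() {
   using namespace std::chrono;
   // Cheap, non-cryptographic seed: current time in nanoseconds XOR the PID.
   auto const ns = duration_cast<nanoseconds>(
         steady_clock::now().time_since_epoch()).count();
   auto const seed = static_cast<std::mt19937::result_type>(ns) ^
                     static_cast<std::mt19937::result_type>(getpid());

   std::mt19937 gen(seed);
   std::uniform_int_distribution<int> dist(0, 9);
   std::cout << dist(gen) << '\n';   // some pseudo-random digit
}
```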
|
|
Makes the console output cleaner.
|
|
Handle JSON hooks that just close the file/exit and fix some other errors
See merge request apt-team/apt!21
|