Age | Commit message | Author |
|
If the sources file we want to edit doesn't exist yet, GetLock will
create it with mode 640, which might be okay for a generic lockfile, but
as this is a sources file more relaxed permissions are in order – and
actually required, as it otherwise won't be readable by unprivileged
users, causing warnings/errors in apt calls.
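A minimal sketch of the idea, assuming apt's GetLock helper from
apt-pkg/contrib/fileutl.h and POSIX chmod (the wrapper name is made up):

   #include <apt-pkg/fileutl.h>
   #include <sys/stat.h>
   #include <string>

   // Sketch: take the lock as before, but make sure a freshly created
   // sources file ends up world-readable like any other sources file.
   int LockSourcesFile(std::string const &path)
   {
      int const fd = GetLock(path, true);  // may create the file with mode 640
      if (fd != -1)
         chmod(path.c_str(), 0644);        // relax so unprivileged apt calls can read it
      return fd;
   }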
Reported-By: J. Theede (musca) on IRC
|
|
The test changes the sources.list, and the modification time of this
file is considered while figuring out if the cache is still good.
Usually this isn't an issue, but in this case the cache generation
produces warnings, which then appear twice.
Gbp-Dch: Ignore
|
|
The existing cleanup happened only for packages which had a status
change (installed -> uninstalled), which is the most frequent but not
the only case – you can e.g. set the auto bit explicitly with apt-mark.
This would leave stanzas in the state file declaring a package to be
manually installed – which is the default value for a package not listed
at all, so we can just as well drop such stanzas from the file.
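For illustration (package name hypothetical), such a stanza in the
extended states file only restates the default and can now be dropped:

   Package: libfoo1
   Architecture: amd64
   Auto-Installed: 0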
|
|
Otherwise calls like "apt -q install" end up calling "aptautotest_apt_q"
instead of "aptautotest_apt_install"
Gbp-Dch: Ignore
|
|
We have supported installing ./foo.deb (and ./foo.dsc for source) for a
while now, but it can be a bit clunky to work with those directly if you
e.g. build packages locally in a 'central' build area.
A .changes file also includes hashsums and can be signed, so this can
also be considered an enhancement in terms of security: a user "just"
has to verify the signature on the .changes file rather than checking
all deb files individually in these manual installation procedures.
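For example (path and file name made up), the idea is that one can point
apt at the changes file instead of listing every deb individually:

   apt install ./build-area/foo_1.2-3_amd64.changes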
|
|
If a user explicitly requests the download of arch:all, apt shouldn't
get in the way and perform its detection dance of whether arch:all
packages are (also) shipped in arch:any files or not.
This e.g. allows setting arch=all on a source with such a field (or one
which doesn't support all at all, but has the arch:all files, like
Debian itself ATM) to get only the arch:all packages from there instead
of having the setting behave like a no-op.
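For example, a one-line-style sources.list entry along the lines of
(URI picked for illustration)

   deb [arch=all] http://deb.debian.org/debian unstable main

previously could end up as a no-op; with this change it fetches exactly
the arch:all packages of that source.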
Reported-By: Helmut Grohne on IRC
|
|
Theoretically it should be enough to change the Dir setting and have apt
pick the dpkg/status file from that. It should also be consistently
affected by RootDir. Neither was really the case though, so a user had
to set it explicitly as well (or ignore it and have – or not have – the
expected side effects caused by it).
This commit tries to better guess the location of the dpkg/status file.
Simply setting dir::state::status to a naive "../dpkg/status" doesn't
work, as that setting would be interpreted as relative to the CWD and
not relative to the dir::state directory. The status file also isn't
really relative to the state files apt has in /var/lib/apt/, as evident
if we consider that apt/ could be a symlink to someplace else with
"../dpkg" not affected by it. So what we do here is an explicit replace
of a trailing apt/ with dpkg/ – similar to how we create directories if
the path ends in apt/.
As this is a change it has the potential to cause regressions, insofar
as the dpkg/status file of the "host" system is no longer used if you
set up a "chroot" system via the Dir setting – but that tends to be what
is intended, and previously people had to painfully figure out that they
needed to set this explicitly, so it now works more in line with how the
other Dir settings work (aka "as expected"). If using the host status
file really is intended, it is in fact easier to set that explicitly
than to set the new "magic" location explicitly.
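To illustrate the resulting derivation (chroot path made up), with

   Dir "/srv/chroot/unstable";

apt now ends up with

   Dir::State          -> /srv/chroot/unstable/var/lib/apt/
   Dir::State::status  -> /srv/chroot/unstable/var/lib/dpkg/status

instead of the host's /var/lib/dpkg/status.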
|
|
Most tests are either multiarch, do not care about the specific
architecture or do not interact with dpkg, so the only test really
affected by this is test-external-installation-planner-protocol. It is a
general issue though: while APT can be told to treat any architecture as
native, dpkg has its native architecture hardcoded, so if we run tests
we must make sure that dpkg knows about the architecture we will treat
as "native" in apt, as otherwise dpkg will refuse to install packages of
such an architecture.
This reverts f883d2c3675eae2700e4cd1532c1a236cae69a4e as it complicates
the test slightly for no practical gain after the generic fix.
|
|
Hardcoding amd64 broke the tests.
|
|
The setup didn't prepare the directories as expected by newer versions
of the external tests in an autopkgtest environment.
|
|
Escape "+" in kernel package names when generating APT::NeverAutoRemove
list so it is not treated as a regular expression meta-character.
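As an illustration with a made-up kernel version containing a '+', the
generated entry has to escape it so the '+' matches literally instead of
acting as a regex quantifier – roughly (the exact shape of the generated
entries is sketched from memory here):

   APT::NeverAutoRemove:: "^linux-image-4\.6\.0-1-amd64\+test$";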
[Changed by David Kalnischkies: let test actually test the change]
Closes: #830159
|
|
If we have files in partial/ from a previous invocation or similar,
those could be symlinks created by file:// sources. The code expects
only real files though, and happily changes owner, modification times
and permissions on the file the symlink points to – which tends to be a
file we have no business touching in this way.
Permissions of symlinks shouldn't be changed and changing the owner is
usually pointless too, but just to be sure we pick the easy way out: use
lchown and check for symlinks before chmod/utimes.
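A minimal sketch of that order of operations using plain POSIX calls
(function name and parameters are illustrative, not apt's actual code):

   #include <sys/stat.h>
   #include <sys/time.h>
   #include <unistd.h>

   // Fix up metadata of an item in partial/ without ever following a
   // symlink for chmod/utimes.
   static bool FixupPartialFile(char const *path, uid_t uid, gid_t gid,
                                mode_t mode, struct timeval const times[2])
   {
      struct stat st;
      if (lstat(path, &st) != 0)
         return false;
      if (lchown(path, uid, gid) != 0)  // safe for symlinks and files alike
         return false;
      if (S_ISLNK(st.st_mode))          // leave the target of file:// symlinks alone
         return true;
      return chmod(path, mode) == 0 && utimes(path, times) == 0;
   }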
Reported-By: Mattia Rizzolo on IRC
|
|
The test makes heavy use of disabling compression types which are
usually available one way or another, like xz, which is how the EIPP
logs are compressed by default. Instead of changing this test to adjust
the filename according to the compression we want to test, we just
disable EIPP logging for this test, as that is easier and has the same
practical effect.
Gbp-Dch: Ignore
|
|
It can be handy to set apt options for the testcases which shouldn't be
accidentally committed – like external planner testing or workarounds
for local setups.
Gbp-Dch: Ignore
|
|
Gbp-Dch: ignore
|
|
This caused a crash because the cache was a nullptr.
Closes: #829651
|
|
All apt versions support numeric as well as 3-character timezones just
fine and it's actually hard to write code which doesn't "accidentally"
accept them. So why change? Documenting the Date/Valid-Until fields in
the Release file is easy to do in terms of referencing the
datetime format used e.g. in Debian changelogs (policy §4.4). This
format specifies only the numeric timezones though, not the nowadays
obsolete 3-character ones, so in the interest of least surprise we
should use the same format, even though it carries a small risk of
regression in other clients (which encounter repositories created with
apt-ftparchive).
In case it is really regressing in practice, the hidden option
-o APT::FTPArchive::Release::NumericTimezone=0
can be used to go back to good old UTC as timezone.
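For illustration (timestamp made up), a Release file generated by
apt-ftparchive now carries

   Date: Thu, 07 Jul 2016 14:30:00 +0000

instead of the previous

   Date: Thu, 07 Jul 2016 14:30:00 UTC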
The EDSP and EIPP protocols use this 'new' format; the text interface
used to communicate with the acquire methods does not, for compatibility
reasons, even if none of our methods would be affected and I doubt any
other would (in these instances the timezone is 'GMT', as that is what
HTTP/1.1 requires). Note that this is only true for apt talking to
methods; (libapt-based) methods talking to apt will respond with the
'new' format. It is therefore strongly advised to support both in method
input as well.
|
|
apt-key needs gnupg for most of its operations, but depending on it
isn't very efficient as apt-key is hardly used by users – and scripts
shouldn't use it to begin with as it is just a silly wrapper. To draw
more attention to the fact that e.g. 'apt-key add' should not be used in
favor of "just" dropping a keyring file into the trusted.gpg.d
directory, this commit implements the display of warnings.
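For example, instead of piping a key into 'apt-key add', a (binary,
dearmored) keyring can simply be installed into the directory – file
names here are made up:

   install -m 0644 example-archive-keyring.gpg /etc/apt/trusted.gpg.d/example-archive-keyring.gpg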
|
|
As the volatile sources are parsed last they were sorted behind the
dpkg/status file and hence treated as a downgrade, which isn't really
what you want to happen, as from a user POV it's an upgrade.
|
|
If we have a (e.g. locally built) deb file installed and try to install
it again, apt complained about this being a downgrade, but it wasn't, as
it is the very same version… it was just confused into not merging the
versions together, which then looks like a downgrade.
The same-size assumption is usually good, but given that volatile files
are parsed last (even after the status file) the base assumption no
longer holds; it is easy to adapt though, without actually changing
anything in practice.
|
|
Traditionally all providers of something are protected, as apt can't
know which of them is actually providing the functionality for the user;
this ensures that we don't propose the removal of used stuff, but of
course it also keeps stuff around which could be removed.
That can cause the accumulation of multiple old providers until the
provided package is itself no longer needed (e.g. out-of-tree kernel
modules). We combat this by marking providers only from the newest
source package version, so that old providers built by older versions of
the same source package can be garbage collected.
|
|
Gbp-Dch: Ignore
|
|
Julian noticed on IRC that I fell victim to a lovely false friend by
referring to a 'planer' all the time, even though those are machines to
e.g. remove splinters from woodwork ("make stuff plane"). The word I
meant is written this way (= with a single n) in German, but in English
it has two, aka: 'planner'.
As that is unreleased code, all instances are switched without any
transitional provisions. That is also the reason why it's skipped in the
changelog.
Thanks: Julian Andres Klode
Gbp-Dch: Ignore
|
|
In 385d9f2f23057bc5808b5e013e77ba16d1c94da4 I implemented the storage of
scenario files with the goal of enabling it by default for EIPP, but
implemented it first as an option for EDSP to keep the two independent.
The reasons mentioned in the earlier commit (debugging and bug reports)
obviously apply here, especially as EIPP solutions aren't user-approved,
are nearly impossible to verify before execution starts, and at the time
of an error the scenario has already changed, so reproducing the issue
becomes hard(er).
|
|
Git-Dch: Ignore
|
|
The generation of the EIPP request was a bit too strict, not generating
what would actually need to be part of the scenario.
|
|
Testing the current implementation can benefit from being able to feed
it an EIPP request and have it produce a fully compliant response. It is
also a great test for EIPP in general.
|
|
We can trim generation time and size of the EIPP scenario considerably
if we avoid telling the planners about "uninteresting" packages.
This is one of the simpler but already very effective reductions: do not
tell planners about versions which are neither installed nor to be
installed – as they have no effect on the plan, there is no need to
mention them. EDSP solvers need to know about all versions for better
choices and error messages, but planners really don't.
Git-Dch: Ignore
|
|
The very first step in introducing the "external installation planner
protocol" (short: EIPP) as part of my GSoC2016 project.
The description reads: APT-based tools like apt-get, aptitude, synaptic,
… work with the user to figure out what their system should look like
after they are done installing/removing packages and their dependencies.
The actual installation/removal of packages is done by dpkg, with the
constraint that dependencies must be fulfilled at any point in time
(e.g. to run maintainer scripts).
Historically APT has taken a super micro-management approach to this
task which hasn't aged that well over the years, mostly ignoring changes
in dpkg and growing into an unmaintainable mess hardly anyone can debug
and everyone fears to touch – especially as more and more requirements
are tacked onto it, like handling cycles and triggers, dealing with
"important" packages first, package sources on removable media, and
touching minimal groups to be able to interrupt the process if needed
(e.g. unattended-upgrades), which not only sky-rockets complexity but
can also be mutually exclusive, as you e.g. can't have minimal groups
and minimal trigger executions at the same time.
|
|
Git-Dch: Ignore
|
|
Weak had no dedicated option before, and Insecure and Downgrade were
both global options, which, given the effect they all have on security,
is rather bad. Being able to set them for individual repositories only
isn't great either, but it is at least slightly better and also more
consistent with other settings for repositories.
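As an illustration (URI and suite made up; option names as I recall them
from the sources.list documentation), this allows entries along the
lines of

   deb [allow-weak=yes] http://archive.example.org/debian oldoldstable main

with allow-insecure and allow-downgrade-to-insecure as the counterparts
for the other two cases, instead of flipping a global switch.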
|
|
For "Hash Sum mismatch" that info doesn't make a whole lot of
difference, but for the new insufficient info message an indicator that
while this hashes are there and even match, they aren't enough from a
security standpoint.
|
|
Downloading and then saying "Hash Sum mismatch" isn't very friendly from
a user POV, so with this change we try to detect such cases early on and
report them, preferably before the download has even started.
Closes: 827758
|
|
Handling the extra check (and force requirement) for security downgrades
in our AllowInsecureRepositories checker helps in having this check
everywhere instead of just in the most common place, and requiring a
little extra force in such cases is always good.
|
|
APT can be forced to deal with repositories which have no security
features whatsoever, so just giving up on repositories which "just" fail
our current criteria of good security features is the wrong incentive.
Of course, repositories are better off fixing their setup to provide the
minimum of security features, but sometimes this isn't possible:
historic repositories, for example, which do not change (anymore).
This also fixes a problem with repositories which are marked as trusted
but provide only weak security features, which would fail the parsing of
the Release file.
Closes: 827364
|
|
Insecure repositories result in error messages by default, which causes
the acquire run to fail hard, but non-failing repositories are still
updated – just like for the slightly less hard failures which got this
behaviour in 35664152e47a1d4d712fd52e0f0a2dc8ed359d32.
|
|
There is a subtle difference between an empty setting and "DIRECT" in
the configuration, as the latter overrides the generic settings while
the former does not. Also, non-zero exit codes should really be reported
as an error rather than silently discarded.
|
|
Regression introduced in 8f858d560e3b7b475c623c4e242d1edce246025a.
Commands are probably better off always having output though, as falling
through to the generic proxy settings is likely not intended. But as
documenting and implementing this more consistently is itself kind of a
regression, it is split off into the next commit.
Closes: 827713
|
|
Merging by URI means that having sources lines with different URI
methods results in 'strange' warning and error messages, which aren't
very friendly from a user point of view, as not encoding the method in
the filename is effectively an implementation detail.
Merging by filename removes these messages and makes everything "work",
even if it isn't working the way it is configured, as the indexes aren't
acquired over the method given but over the first method for this
Release file (which arguably is an implementation detail stemming from
the filename encoding, too).
So neither direction is perfectly "right", but personally I prefer
"magic" over strange error messages (and doing a full-circle detection
of this, with its own messages which would need to be translated, feels
like way too much effort for dubious gain).
Closes: 826944
|
|
Seen first in #826783, but as this bug log also shows leaked
uncompressed files we don't close it just yet.
|
|
This affects only compressors configured on the fly (rather than the
built-in ones, as they use a library).
|
|
We end our operation by calling "dpkg --configure -a", so instead of
running a (big) configure run beforehand with all packages mentioned
explicitly, we simply skip them and let them be handled implicitly by
this call.
There isn't really an observable gain to be had here from a speed point
of view, but it helps avoid the (uncommon) problem of passing too long a
command line to dpkg, which we would split up (probably incorrectly).
|
|
Most (if not all) solvers should be able to run perfectly fine without
root privileges, as they get the entire state they are supposed to work
on via stdin and do not perform any action directly, but just pass
suggestions on via stdout.
The new default is hence to run them all as _apt, but each solver can
configure another user if it chooses to/must. The security benefits are
minimal at best, but it helps prevent silly mistakes (see
35f3ed061f10a25a3fb28bc988fddbb976344c4d) and that is always good.
Note that our 'apt' and 'dump' solvers already dropped privileges if
they had them.
|
|
For EDSP-like protocols it could be handy – for bug reports and co. – to
keep the scenario and all the settings used in it around for later
inspection. EDSP itself might not be the most interesting case, as the
user can still interrupt the process before the solution is applied and
users tend to have an opinion on the "rightness" of a solution, so it is
disabled by default.
|
|
Currently an EDSP solver gets sent basically all versions, which means
the absolute count is the same, but that might not be true forever (and
with the skipping of rc-only versions it kinda already isn't) – and even
if it were true, segfaulting on bad input seems wrong.
|
|
First seen on Hurd, but easily reproducible on all systems by removing
the 'execute' bit from the current working directory and watching some
tests (mostly the ones expecting no output) fail due to find printing:
"find: Failed to restore initial working directory: …"
Samuel Thibault says in the bugreport:
| To do its work, find first records the $PWD, then goes to
| /etc/apt/trusted.gpg.d/ to find the files, and then goes back to $PWD.
|
| On Linux, getting $PWD from the 700 directory happens to work by luck
| (POSIX says that getcwd can return [EACCES]: Search permission was denied
| for the current directory, or read or search permission was denied for a
| directory above the current directory in the file hierarchy). And going
| back to $PWD fails, and thus find returns 1, but at least it emitted its
| output.
|
| On Hurd, getting $PWD from the 700 directory fails, and find thus aborts
| immediately, without emitting any output, and thus no keyring is found.
|
| So, to summarize, the issue is that since apt-get update runs find as a
| non-root user, running it from a 700 directory breaks find.
Solved as suggested by changing to '/' before running find, with some
extra paranoid care taken to ensure the paths we give to find really are
absolute paths first (they really should be, but TMPDIR=. or a similar
Dir::Etc::trustedparts setting could exist somewhere in the wild).
The commit also takes the opportunity to make these lines slightly less
error-ignoring and to have the two find calls use (mostly) the same
parameters.
Thanks: Samuel Thibault for 'finding' the culprit!
Closes: 826043
|
|
In 8b79c94af7f7cf2e5e5342294bc6e5a908cacabf the change to the C++ way of
setting the locale causes us to be terminated if an ungenerated locale
is used as LC_ALL (or similar) – but we don't want to fail here, we just
want to carry on as before with setlocale, which we call in that case
just for good measure.
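A minimal sketch of that fallback (names illustrative):

   #include <clocale>
   #include <locale>
   #include <stdexcept>

   // Prefer the C++ locale, but don't terminate on an ungenerated
   // locale – fall back to plain setlocale instead.
   static void SetupLocale()
   {
      try {
         std::locale::global(std::locale(""));
      } catch (std::runtime_error const &) {
         std::setlocale(LC_ALL, "");  // carry on as before
      }
   }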
|
|
HTTP/1.1 hardcodes GMT (RFC 7231 §7.1.1.1), and what is good enough for
the internet must be good enough for us™, as we reuse the implementation
internally to parse (most) dates we encounter in various places, like
the Release files with their Date and Valid-Until header fields.
Implementing a fully timezone-aware parser just feels too hard for no
effective benefit, as it would take 5+ years (= until LTSes are out of
fashion) before a repository could use non-UTC dates and expect them to
work – not counting non-apt implementations, which may well only want to
encounter UTC here as well.
As a bonus, this eliminates the use of one instance of setlocale in
libapt.
Closes: 819697
|
|
Setting the C++ locale via std::locale::global(std::locale("")); –
which would otherwise default to the classic C locale (aka: unaffected
by setlocale) – affects the formatting of numeric types in IO streams,
which for output aimed at humans is perfectly sensible, but breaks our
many text interfaces which are used and parsed by us and others who do
not expect the numbers to be locale-formatted.
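A small self-contained demonstration of the breakage (assuming the
environment selects a generated locale with digit grouping, e.g.
de_DE.UTF-8):

   #include <iostream>
   #include <locale>

   int main()
   {
      std::locale::global(std::locale(""));
      std::cout.imbue(std::locale());  // streams only pick this up when imbued
      // Prints "1.048.576" instead of "1048576" – fine for humans,
      // fatal for anything parsing our text interfaces.
      std::cout << 1048576 << std::endl;
   }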
Closes: #825396
|
|
No real code change, just moving code around heavily to decouple the
EDSP-specific parts from those we can reuse for EDSP-like protocols.
Git-Dch: Ignore
|