The Date field in the Release file is useful to avoid allowing an
attacker to 'downgrade' a user to earlier Release files (and hence to
older states of the archive with open security bugs). It is also needed
to allow a user to define min/max values for the validation of a Release
file (with or without the Release file providing a Valid-Until field).
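For illustration, a minimal sketch of such limits in apt configuration
(assuming the Acquire::Max-ValidTime / Acquire::Min-ValidTime options;
the values are seconds and purely illustrative):
  // treat the Release file as expired 10 days after its Date, even
  // if it carries no (or a later) Valid-Until
  Acquire::Max-ValidTime "864000";
  // conversely, never expire it earlier than 1 day after its Date
  Acquire::Min-ValidTime "86400";
Both checks are computed relative to the Date field, which is why its
presence matters.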
APT wasn't formally requiring this field before though, and the
(arguably not binding and still incomplete) online documentation
declared it optional (until now), so we downgrade the error to a warning
for now to give repository creators a bit more time to adapt – the
bigger ones should have had a Date field for years already, so the
affected group should be small in any case.
It should be noted that earlier apt versions had this as an error
already, but only showed it if a Valid-Until field was present (or the
user tried to use the configuration items for min/max valid-until).
Closes: 809329
|
|
In 321213f0dcdcdaab04e01663e7a047b261400c9c Andreas Cadhalpun corrected
the incorrect overriding of earlier better-fitting results with later
(semi-)matches – but that broke the case in which packages are present in
multiple releases with the same version (and the user has both releases
configured).
Closes: 812497
|
|
In commit a221efc331693f8905da870141756c892911c433 I promoted the source
package name and version to the binary cache for faster access by e.g.
EDSP, but due to changing the interpretation length too soon we always
ignored the version part of the Source field, so that packages ended up
having the binary version as source version – which, while usually just
fine, is wrong for binary rebuilds.
Closes: 812492
|
|
Git-Dch: Ignore
|
|
build-dep was implemented by parsing the build dependencies of a package
and figuring out which packages to install/remove based on this. That
means that for the first level of dependencies build-dep was
implementing its very own resolver, with all the benefits (aka: bugs)
that come from not using the existing resolver for all levels.
Making this work involves generating a dummy binary package with fitting
Depends and Conflicts, and as we can't create them out of thin air the
cache generation needs to be involved, so we end up writing a Packages
file which we want to parse – after we have parsed the other Packages
files already. With .dsc/.deb files we could add them before we started
parsing anything.
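As a purely hypothetical sketch (names and relations invented), the
generated stanza is essentially an ordinary binary package whose
relations mirror the build relations of the source package:
  Package: builddeps:foo
  Version: 1.0-1
  Architecture: amd64
  Depends: debhelper (>= 9), libbar-dev (>= 2.0)
  Conflicts: libbaz-dev
Asking the existing resolver to install this dummy package then handles
all further levels of dependencies for us.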
With a bit of care we can avoid generating too much data we have to
throw away again (as many parts assume that e.g. the count of packages
doesn't change midair), so on the speed front there shouldn't be much of
a difference, but the output can be slightly confusing: if we have a
completely valid cache on disk, "Reading package lists... Done" is
printed twice – but apt is pretty quick about it in that case.
Closes: #137560, #444930, #489911, #583914, #728317, #812173
|
|
To resolve dependencies like "pkg:arch" we create a package with the
name "pkg:arch" and the architecture "any". We create these packages
only if a dependency needs it, as these kinds of dependencies aren't that
common. This commit ensures that in the event this architecture-specific
dependency is the only relation this package has, we still create the
underlying package to have it available for provides resolution.
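A hypothetical example of such a relation, here as the only relation the
declaring package has:
  Package: foo
  Architecture: amd64
  Depends: bar:i386
Without the fix the 'bar:i386' (architecture "any") package would not be
created in this situation, so the dependency could not be resolved.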
|
|
Introduced in 9d2a8a7388cf3b0bbbe92f6b0b30a533e1167f40, apt tries to
merge actions like downloading the same (as judged by hashes) file
into doing it once. The implementation was very simple in that it does
no planning at all. Turns out that it works 90% of the time just fine,
but it has issues in more complicated situations in which items can be
in different stages downloading different files, emitting potentially
the "wrong" hash – e.g. while pdiffs are worked on we might end up
copying the patch instead of the result file, giving us very strange
errors in return. Reverting the change until we can implement a better
planning solution seems to be the best course of action, even if it's sad.
Closes: 810046
|
|
Packages of architectures which are neither the native nor a foreign
architecture (dubbed barbarian for now) that are marked M-A:foreign
still provide in their own architecture, even if not for others. Also,
other M-A:foreign (and allowed) packages provide in these barbarian
architectures.
|
|
Closes: #734922
|
|
This prevents a test suite failure on systems with weird umasks.
Also set umask 000 at the beginning so we can actually check for
that anywhere.
Gbp-Dch: ignore
|
|
Just enabling it for everyone breaks with HTTP/1.0 servers and
proxies sometimes.
Closes: #810796
|
|
The code already deals with compressed leftovers, but forgot the
uncompressed files. The opportunity is taken to reorder this code, add
debug messages about the actions taken, and produce such a leftover file
in the associated testcase.
|
|
With the addition of the $HASH-Download field in the .diff/Index we got
the size of the compressed patches for 'free', so if that information is
available we can use it for a more fitting calculation of the size
requirements of the patches vs. the complete file.
Note that this predicts too small a size in the transition case in which
the information isn't available for all patches, but figuring this out
would be a lot of code for practically nothing, as only one update can
ever be in such a transition phase.
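As an illustrative calculation (numbers invented): if the complete
Contents file is 30 MB compressed and the pending patches carry
$HASH-Download sizes of 100 KB, 200 KB and 150 KB, the decision is now
based on 450 KB of patch downloads vs. 30 MB for the complete file.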
|
|
Some (older) versions of bash seem to be allergic to a method named
"aptautotest_grep_^apt" (note the caret). It is unlikely that we are
going to write autotests for such commands, so we could just skip those,
but let's instead just use "normal" characters in the names and strip
the rest, as we already do with the (arguably more common) '-'.
|
|
This way it works more similarly to the compressor binaries, which we
can in this way relieve of their job in the test framework, avoiding the
need to add e.g. liblz4-tool to the test dependencies.
|
|
Downloading and storing are two different operations where different
compression types can be preferred. For downloading we provide the
choice via Acquire::CompressionTypes::Order, as there is a choice to
be made between download size and speed – and it is limited by what is
available in the repository.
Storage on the other hand has all compressions currently supported by
apt available, and to reduce the runtime of tools accessing these files
the compression type should be a low-cost format in terms of
decompression.
apt traditionally stores its indexes uncompressed on disk, but has
options to keep them compressed. Now that apt downloads additional files
we also deal with files which simply can't be stored uncompressed as
they are just too big (like Contents for apt-file). Traditionally they
are downloaded in a low-cost format (gz) as repositories do not provide
other formats, but there might be even lower-cost formats, and for
download we could introduce higher-cost formats in the repositories.
Downloading an entire index potentially requires recompression to
another format, so an update potentially takes longer – but big files
are usually updated via pdiffs, which have to de- and re-compress on the
fly anyhow, so there is no extra time needed, and in general it seems
beneficial to invest the time during update to save time later on file
access.
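A minimal apt.conf sketch of the two sides (values illustrative;
Acquire::GzipIndexes is assumed to be the pre-existing option for
keeping indexes compressed on disk):
  // what to prefer for download, limited by what the repository offers
  Acquire::CompressionTypes::Order { "xz"; "gz"; };
  // whether to keep indexes compressed on disk at all
  Acquire::GzipIndexes "true";
The point being that the download preference and the on-disk storage
format are separate decisions.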
|
|
Less hardcoding should help while introducing new compressors.
Git-Dch: Ignore
|
|
There is no reason to enforce that the file we start the bootstrap with
is compressed with a compressor which is available online. This allows
us to change the on-disk format, as well as to deal with repositories
adding/removing support for a specific compressor.
|
|
Adding a new compressor method meant adding a new method as well – even
if that boiled down to just linking to our generalized decompressor with
a new name. That is unneeded busywork if we can instead just call the
generalized decompressor and let it figure out which compressor to use
based on the filenames rather than by program name.
For compatibility we still ship 'gzip', 'bzip2' and co, but they are
just links to our "new" 'store' method.
|
|
Gbp-Dch: ignore
|
|
This option controls whether downloaded packages should be kept after
a successful install or whether they should be deleted. The default
for "apt-get" is that they are kept (just like before).
However, the default for "apt" is that they get deleted.
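In apt.conf terms this is roughly (assuming the option spelling from
memory):
  // keep downloaded .deb files in the cache after a successful install
  APT::Keep-Downloaded-Packages "true";
with the "apt" binary flipping the default to "false".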
Closes: #160743
|
|
apt_preferences and deb822-style sources used the specialized class
pkgUserTagSection to deal with comments before/after a given stanza, but
it couldn't deal with comments in the stanza at all.
codesearch suggests that nobody else uses it, and a vastly superior way
of working with potentially commented files is implemented now, so we
can officially discourage the use of the old incomplete hack class.
|
|
Now (55153bf94ff28a23318e79aa48242244c4d82b3c) that pkgTagFile can be
told to deal with all sorts of comments we can use this mode to parse
dsc files (as a by-catch) and debian/control files properly, even in the
presence of multiline fields spliced with comments, like Build-Depends.
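A hypothetical debian/control fragment of the kind that is now handled
(package names invented):
  Source: foo
  Build-Depends: debhelper (>= 9),
  # disabled while the libbar transition is ongoing
  #               libbar-dev,
                  libbaz-dev (>= 2.0)
The comment line splices the multiline Build-Depends field, which the
comment-aware pkgTagFile mode can now parse.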
Closes: 806775
|
|
This test relies on the ordering of the hash function.
|
|
Debian has a Packages file for arch:all already, but the arch:any files
contain arch:all packages as well, so downloading it would be a total
waste of resources. Getting this solved is on the list of things to do,
but it is also the hardest part – for index targets like Contents the
situation is much easier and fewer server/client implementations are
involved, so we might not want to stall them.
A repository can now declare via:
No-Support-for-Architecture-all: Packages
that even if an arch:all Packages file exists, it shouldn't be
downloaded, so that support for Contents files can be added now.
See also 1dd20368486820efb6ef4476ad739e967174bec4 for the implementation
of downloading arch:all index targets, which this is limiting.
The field uses the name of the target from the apt configuration for
simplicity and is negative by design as this field is intended to be
supported/needed only for a "short" time (one or two Debian releases).
While this commit theoretically supports any target, it is expected to
only see "Packages" as a value in reality.
|
|
We try to acquire the locks, but we didn't stop if we failed to get them…
Closes: 808561
|
|
The output changes slightly between different versions, which we already
dealt with in the main testcase for apt-key, but there are two more
which do not test both versions explicitly and so still had gpg1 output
to check against as this is the default at the moment.
Git-Dch: Ignore
|
|
apt-key internally creates a script (since ~1.1) which it will call to
avoid dealing with an array of different options in the code itself, but
while writing this script it wraps the values in "", which will cause
the shell to evaluate their content upon execution.
To make 'use' of this, either set an absolute gpg command or TMPDIR to
something as interesting as:
"/tmp/This is fü\$\$ing cràzy, \$(man man | head -n1 | cut -d' ' -f1)\$!"
Whether such paths can be encountered in reality is a different question…
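A minimal sketch of the pitfall (paths and variable names invented, not
the actual apt-key code):
  TMPDIR='/tmp/$(uname)'                       # an unusual but legal value
  echo "GNUPGHOME=\"$TMPDIR\"" > ./wrapper.sh  # value written wrapped in ""
  sh ./wrapper.sh                              # $(uname) is evaluated here
Writing the value wrapped in '' instead would pass it through literally.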
|
|
This doesn't allow all tests to run cleanly, but it at least allows
writing tests which could run successfully in such environments.
Git-Dch: Ignore
|
|
This filters out errors due to timing issues. It exits early if
enough pulses have occurred.
|
|
This helps with writing test cases. Also adapt the test case that
expected 64-bit.
Nothing changes performance-wise; the distribution of the hash
values remains intact.
|
|
This makes the test suite work on platforms where long is 32 bit.
Gbp-Dch: ignore
|
|
Use asprintf() so we have easy error detection and do not depend
on PATH_MAX.
Do not add another separator to the generated path, in both cases
the path inside the chroot is guaranteed to have a leading /
already.
Also pass -Wall to gcc.
|
|
This caused test-bug-717891-abolute-uris-for-proxies to fail
Gbp-Dch: ignore
|
|
This breaks a lot of test cases
Gbp-Dch: ignore
|
|
The allocated buffer was one byte too small. Allocate a buffer
of PATH_MAX instead and use snprintf(), as suggested by Martin
Pitt.
|
|
This reduces the chance that the test fails.
Gbp-Dch: ignore
|
|
Instead of checking for [10%, 100%), check for (0%, 100%),
that is everything < 100% and >0%.
Gbp-Dch: ignore
|
|
This should make the test work on non-amd64 systems
Gbp-Dch: ignore
|
|
Trying to clean up directories which do not exist seems rather silly if
you think about it, so let apt think about it and stop it.
It depends a bit on the caller whether this fixes anything for them, as
they might try to acquire a lock or do other clever things as apt does.
Closes: 807477
|
|
Regression of 1e064088bf7b3e29cd36d30760fb3e4143a1a49a (1.1~exp4), which
moved code around and renamed methods heavily, ending up calling the
wrong method, which matched package names only instead of calling the
full array. Most commands work with versions, so this managed to fly
under the radar for quite a while.
Closes: 807870
|
|
If we can't work with the hashes we parsed from the Release file, we
now display an error message if the Release file includes only weak
hashes, instead of downloading the indexes and failing to verify them
with "Hash Sum mismatch" even though the hashes didn't mismatch (they
were just weak).
If for some (unlikely) reason we have got weak hashes only for
individual targets we will show a warning to this effect (again, before
downloading and failing on the index itself).
Closes: 806459
|
|
dpkg does that when reading package files, so we should do
the same. This only deals with parsing names from binary
package paragraphs, it does not look at source package names
and/or the list of binaries in a dsc file.
Closes: #807012
|
|
Gbp-Dch: ignore
|
|
After e75e5879 the reason for an implicit dependency on debianutils
(which is essential on Debian, but likely not on other systems) was
just two uses of run-parts, which can be replaced with the far more
portable find-piped-into-sort duo.
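A sketch of the replacement pattern (directory name invented):
  # roughly equivalent to: run-parts --list /path/to/dir.d
  find /path/to/dir.d -maxdepth 1 -type f | sort
run-parts additionally filters filenames, so the equivalence is only
rough, but that is the general idea.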
|
|
which is a Debian-specific tool packaged in debianutils (essential),
while command is a shell builtin defined by POSIX.
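For illustration, the portable idiom (binary name only as an example):
  if command -v gpg >/dev/null 2>&1; then
      echo 'gpg found'
  fi
command -v is specified by POSIX, while which has to be installed
separately on non-Debian systems.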
Closes: 807144
Thanks: Mingye Wang for the suggestion.
|
|
This should make it more obvious that CHANGEPATH is a placeholder which
apt will replace with a package specific path rather than a string
constant.
Mail-Reference: <87d1upgvaf.fsf@deep-thought.43-1.org>
Mail-Archive: https://lists.debian.org/debian-dak/2015/12/msg00005.html
|
|
'Regression' of 7d19ee92f2368a40e739cb27d22d6d28f37ebf45, just that it
now works more as expected than previously. Of course, build-essentials
are implicitly also build dependencies, so by definition all packages
have build dependencies, but that isn't what this message wants to say
and it isn't what the user expects.
Git-Dch: Ignore
|
|
Otherwise a user is subject to unexpected content injection depending on
which directory she happens to start apt in. This also cleans up the
code, requiring fewer implementation details in build-dep, which is
always good.
Technically this is an ABI break as we override virtual methods, but
that they weren't overridden was a mistake resulting in pure classes
which shouldn't be pure, so they were unusable – and as they are new in
1.1 nobody is using them yet (and hopefully ever, as they are borderline
implementation details).
Closes: 806693
|
|
Regression of 14341a7ee1ca3dbcdcdbe10ad19b947ce23d972d.
Reported-By: Julian Andres Klode <jak@debian.org>
|