Age | Commit message | Author |
|
The epoch stripping in this code has been done since day one, but in other
places where we show a version, epochs are not stripped. If epochs are present
in packages they tend to be important information which we can't just
drop – and especially can't drop "sometimes", as that confuses users and
tools alike. So even if removing code that has been in use for (close to)
18 years feels wrong, it is probably the right choice for consistency.
Closes: 818162
|
|
This makes it easier to understand what really is an error
and what is not.
|
|
For the non-pdiff case, we can have accurate progress
reporting because after fetching the {,In}Release files we know
how many IndexFiles will be fetched and what size they have.
Therefore init the filesize early (in pkgAcqIndex::Init) and
ensure that Acquire::Pulse() looks at already downloaded
bits when calculating the progress.
Also improve the debug output of Debug::acquire::progress.
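A minimal sketch of that calculation (hypothetical helper, not the actual
pkgAcquireStatus interface): the percentage is taken from the bytes already
fetched plus the partial bytes of items still in flight, relative to the
total size known after the {,In}Release files.

   // Hedged sketch: progress over a total that is known up front.
   static double PulsePercent(unsigned long long FetchedBytes,
                              unsigned long long PartialBytes,
                              unsigned long long TotalBytes)
   {
      if (TotalBytes == 0)
         return 0.0;
      return 100.0 * static_cast<double>(FetchedBytes + PartialBytes)
                   / static_cast<double>(TotalBytes);
   }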
|
|
Git-Dch: Ignore
|
|
The problem resolver will set the candidate version for pkg P back
to the current version if it encounters an impossible-to-satisfy
critical dependency on P. However, it did not set the State of
the package back as well, which led to a situation where P is
neither in Keep, Install, Upgrade nor Delete state.
Note that this cannot be tested via the traditional sh-based
framework. I added a python-apt based test for this.
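The idea of the fix as a hedged C++ sketch (hypothetical helper, not the
actual pkgProblemResolver code): when falling back to the installed version,
reset the package state as well so the package ends up in a well-defined
Keep state.

   #include <apt-pkg/depcache.h>

   // Hedged sketch: revert candidate *and* state so the package is not
   // left outside of the Keep/Install/Upgrade/Delete states.
   static void RevertToInstalled(pkgDepCache &Cache, pkgCache::PkgIterator const &Pkg)
   {
      if (Pkg->CurrentVer != 0)
         Cache.SetCandidateVersion(Pkg.CurrentVer());
      Cache.MarkKeep(Pkg, false, false);
   }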
LP: #1550741
[jak@debian.org: Make the test not fail if apt_pkg cannot be
imported]
|
|
This was wrong and caused some issues because apt-key invoked
the host's apt-config with our library.
Gbp-Dch: ignore
|
|
This makes the test suite safe if we ever need to reject SHA1
signatures in an update.
|
|
The structure we parse the data into has a dedicated size field, but it
tends to be easier to handle it as a (very weak) checksum.
|
|
The URI describing an item can change via mirrors/redirectors, which
causes the .diff/Index files to get the wrong names in storage.
Git-Dch: Ignore
|
|
All other interactions with std::cout are flushed directly; just in the
stop case we hadn't done it – no problem, except if there is still output
coming after apt is done, as in the case of a post-invoke script
producing output.
Closes: 793672
|
|
Now that we ignore SHA1-only files it makes sense to also require
hashes for the compressed patches, as this was introduced in dak in
the same patchset as support for non-SHA1 hashes in the file itself,
and adding support in other archive creators (if they support pdiffs
at all) will likely come in the same batch.
The reason for the change itself is simple: if you are 'scared' enough
about the security of SHA1, you shouldn't uncompress a file you haven't
verified at all – after all, it could be exploiting a bug or be a zip bomb.
|
|
Given that we refuse to use SHA1-only .diff/Indexes, there is no point
in shipping and running code which pretends to check support for it –
which, given that all these tests are run 3 times, eats a noticeable
amount of time.
Git-Dch: Ignore
|
|
Ensure that .diff/Index files that only contain SHA1 values and no
SHA2 values are not used.
|
|
SHA1 is not reasonably secure anymore, so we should not consider it
usable anymore. The test suite is adjusted to account for this.
|
|
Using amd64 broke the test case on non-amd64 architectures. Query
the native architecture from dpkg and use that instead.
The definition of NATIVE is copied from the test
test-architecture-specification-parsing.
|
|
This effectively merges branch 'typofixes-vlajos-20150807' of github.com:vlajos/apt
with the following commit:
commit 13cacb3e2e2352ba701e769fc889e3344fabbf7e
Author: Veres Lajos <vlajos@gmail.com>
Date: Sun Aug 9 00:12:53 2015 +0100
typofix - https://github.com/vlajos/misspell_fixer
It has been rebased for a better commit message.
|
|
If a single pdiff fails, we have to fail the entire patching endeavour
and fall back to getting the complete file instead. That is easy in
serverside merged pdiffs as we get them one by one. For clientside
merging we get them all at once though, which means that a failure in
one has to stop the entire pipeline, which works as expected (as proven
by the bug reporters, who don't even notice it happening). The problem is
just that the first failing pdiff will do the cleanup, so another pdiff
which happens to be successfully acquired after we processed the failure
doesn't find the file it is supposed to use as a base anymore, so
the patch is renamed to what should be the unique extension and moved
into the current working directory. Processing is then stopped as the
patch realizes that it isn't the last one which completed downloading.
On the plus side this means this is neither us using a bad temporary
location nor a security problem. It "just" unconditionally overwrites
files in your current working directory (if you happen to have them
named like a pdiff patch – a bit unlikely perhaps) and so drops files
there which are never used again.
I guess this was introduced for real in 4e3c5633b1e74b4f58b95f339cfbbf4cbf21ab3e,
as I made the need for the existence of the base file rather
explicit, but the potential has lingered in the code for far longer.
Closes: #816837
|
|
Fixed in f7bd44bae0d7cb7f9838490b5eece075da83899e already, but that
commit is missing the Closes tag, and while we are at it we can add a
simple regression test and micro-optimize it a bit.
Thanks: James McCoy for the suggestion.
Closes: 816691
|
|
In a249b3e6fd798935a02b769149c9791a6fa6ef16, along with the manual
first resolver step, I also dropped the support for installing build-deps
as automatically installed, in such a way that it behaved as if this
option was enabled by default.
Restoring support for it means that we go back to marking
build-dependencies as manually installed by default and provide this
option to keep them as automatically installed.
|
|
This way we hopefully notice (new) warnings in this little helper.
Git-Dch: Ignore
|
|
Changelogs are relatively small and we have no hashes for them, but we
had support for partial files for them before, so let's stick to it.
This also deletes the (partial) file before moving the downloaded file
into its place – rename(2) should be doing this by itself, but testing
on semaphoreci suggests that this isn't always the case (the error is
"Stale file handle") and we don't need an atomic replace here, so be
explicit.
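The move itself as a hedged sketch (hypothetical helper, not the actual
pkgAcqChangelog code):

   #include <cstdio>

   // Hedged sketch: drop any stale target first, then move the freshly
   // downloaded file into place.  rename(2) should replace the target on
   // its own, but being explicit sidesteps the spurious "Stale file
   // handle" failures seen on some setups.
   static bool MoveIntoPlace(char const *From, char const *To)
   {
      std::remove(To);                 // ignore errors: target may not exist
      return std::rename(From, To) == 0;
   }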
Git-Dch: Ignore
|
|
If the architecture list is empty somehow, fail normally.
LP: #1549819
|
|
The EDSP output generated by apt didn't include the versioned provides
information, so every provides looked like an unversioned one in the
eyes of an external resolver.
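Illustrative example (hypothetical package and version, in the Deb822-style
syntax EDSP stanzas use):

   Before: Provides: libfoo
   After:  Provides: libfoo (= 1.2-1)

With the version included, an external resolver can decide whether a
versioned dependency like 'libfoo (>= 1.0)' is actually satisfied.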
|
|
pkgAcqChangelog has the default behaviour of downloading a changelog to
a temporary directory (inside /tmp, not /tmp directly), which is cleaned
up on shutdown, but this can be overridden to store the changelog more
permanently – which carries a permission problem.
For changelogs we can 'easily' solve this by always downloading to a
temporary directory and only moving the file out of there when done.
|
|
If pkgAcqChangelog is told to acquire the changelog for a version, it
will first check if this version is installed on disk and, if so, will
use the local changelog in /usr/share/doc (possibly/likely gz
compressed) instead of downloading the file from the web.
An option is provided to disable this; it is enabled by default for
the Ubuntu vendor as they truncate the local changelogs – and for apt's
--print-uris action.
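A hedged sketch of the local lookup (hypothetical helper; non-native
packages usually ship changelog.Debian.gz below /usr/share/doc):

   #include <string>
   #include <unistd.h>

   // Hedged sketch: return the installed changelog path if readable,
   // otherwise an empty string to signal "download it instead".
   static std::string LocalChangelogPath(std::string const &Package)
   {
      std::string const Base = "/usr/share/doc/" + Package + "/changelog.Debian";
      if (access((Base + ".gz").c_str(), R_OK) == 0)
         return Base + ".gz";
      if (access(Base.c_str(), R_OK) == 0)
         return Base;
      return "";
   }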
|
|
Otherwise the test run as root fails seeing the
W: Can't drop privileges for downloading as file 'foo_1.tar.gz' couldn't be
accessed by user '_apt'. - pkgAcquire::Run (13: Permission denied)
warning in a command which isn't supposed to warn.
One trivial test, two fixups and still counting…
Git-Dch: Ignore
|
|
Travis still uses a dpkg version which defaults to gz, and as it isn't
all too important which compression is picked as long as one is, just
accept any.
Git-Dch: Ignore
|
|
Regression introduced in a249b3e6fd798935a02b769149c9791a6fa6ef16, which
in the case of an invalid cache would build the first part unlocked and
later pick up the (still unlocked) cache for further processing, so the
system never got locked and apt would end up complaining about being
unable to release the lock at shutdown.
The far more common case of having a valid cache worked as expected and
hence covered up the problem – especially as the tests which would have
noticed it are simulations only, which do not lock.
Closes: 814139
Reported-By: Balint Reczey <balint@balintreczey.hu>
Reported-By: Helmut Grohne <helmut@subdivi.de> on IRC
|
|
We don't need the dependencies for obvious reasons and we don't need the
candidate version either, so building a pkgDepCache is wasted effort,
which we can stop doing now that build-dep cleared the path.
|
|
The latter just calls the former, but the latter needs the full-blown
dependency cache to be initialized, which is a very costly operation and
is no longer done that early in the run, as we would need to throw it
away and rebuild it again after we got all the information about source
packages.
As we end up with a nullptr for the pkgDepCache, we use a slightly
longer calling convention to make sure that we use the pkgCache
directly, avoiding nullptr-induced segfaults and costly operations.
Git-Dch: Ignore
Reported-By: Balint Reczey <balint@balintreczey.hu>
|
|
Git-Dch: Ignore
|
|
The Date field in the Release file is useful to avoid allowing an
attacker to 'downgrade' a user to earlier Release files (and hence to
older states of the archive with open security bugs). It is also needed
to allow a user to define min/max values for the validation of a Release
file (with or without the Release file providing a Valid-Until field).
APT wasn't formally requiring this field before though, and the (arguably
non-binding and still incomplete) online documentation declared it
optional (until now), so we downgrade the error to a warning for now to
give repository creators a bit more time to adapt – the bigger ones
should have had a Date field for years already, so the affected group
should be small in any case.
It should be noted that earlier apt versions had this as an error
already, but only showed it if a Valid-Until field was present (or the
user tried to use the configuration items for min/max valid-until).
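Illustrative values only, but in the timestamp format Release files use:

   Date: Sat, 30 Jan 2016 10:00:00 UTC
   Valid-Until: Sat, 06 Feb 2016 10:00:00 UTC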
Closes: 809329
|
|
In 321213f0dcdcdaab04e01663e7a047b261400c9c Andreas Cadhalpun corrected
the incorrect overriding of earlier better-fitting results with later
(semi-)matches – but that broke the case in which packages are in multiple
releases with the same version (and the user has both releases configured).
Closes: 812497
|
|
In commit a221efc331693f8905da870141756c892911c433 I promoted the source
package name and version to the binary cache for faster access by e.g.
EDSP, but due to changing the interpretation length too soon we always
ignored the version part of the Source field, so that packages ended up
having the binary version as source version – which, while usually just
fine, is wrong for binary rebuilds.
Closes: 812492
|
|
Git-Dch: Ignore
|
|
build-dep was implemented by parsing the build-dependencies of a package
and figuring out which packages to install/remove based on this. That
means that for the first level of dependencies build-dep was
implementing its very own resolver, with all the benefits (aka: bugs)
this gives us for not using the existing resolver for all levels.
Making this work involves generating a dummy binary package with fitting
Depends and Conflicts, and as we can't create them out of thin air the
cache generation needs to be involved, so we end up writing a Packages
file which we want to parse – after we have parsed the other Packages
files already. With .dsc/.deb files we could add them before we started
parsing anything.
With a bit of care we can avoid generating too much data we have to
throw away again (as many parts assume that e.g. the count of packages
doesn't change midair), so that on the speed front there shouldn't be
much of a difference, but the output can be slightly confusing as, if we
have a completely valid cache on disk, the "Reading package lists... Done"
is printed two times – but apt is pretty quick about it in that case.
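A purely illustrative stanza of such a synthetic package (name, version and
fields here are made up, not what apt actually writes):

   Package: builddeps:hello
   Version: 2.10-1
   Architecture: amd64
   Depends: debhelper (>= 9), gettext
   Conflicts: autoconf2.13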
Closes: #137560, #444930, #489911, #583914, #728317, #812173
|
|
To resolve dependencies like "pkg:arch" we create a package with the
name "pkg:arch" and the architecture "any". We create these packages
only if a dependency needs them, as these kinds of dependencies aren't
that common. This commit ensures that even if such an architecture-specific
dependency is the only relation this package has, we still create the
underlying package to have it available for provides resolution.
|
|
Introduced in 9d2a8a7388cf3b0bbbe92f6b0b30a533e1167f40, apt tries to
merge actions like downloading the same (as judged by hashes) file
into doing it once. The implementation was very simple in that it does
no planning at all. It turns out that it works 90% of the time just fine,
but has issues in more complicated situations in which items can be in
different stages downloading different files, emitting potentially the
"wrong" hash – like while pdiffs are worked on we might end up copying
the patch instead of the result file, giving us very strange errors in
return. Reverting the change until we can implement a better planning
solution seems to be the best course of action, even if it's sad.
Closes: 810046
|
|
Packages from architectures which are neither the native nor a
foreign architecture (dubbed barbarian for now) that are marked
M-A:foreign still provide in their own architecture, even if not for
others. Also, other M-A:foreign (and allowed) packages provide in these
barbarian architectures.
|
|
Closes: #734922
|
|
This prevents a test suite failure on systems with weird umasks.
Also set umask 000 at the beginning so we can actually check for
that anywhere.
Gbp-Dch: ignore
|
|
Just enabling it for everyone sometimes breaks with HTTP/1.0 servers
and proxies.
Closes: #810796
|
|
The code already deals with compressed leftovers, but forgot the
uncompressed files. The opportunity is taken to reorder this code and
add debug messages about the actions taken, as well as to produce such a
leftover file in the associated testcase.
|
|
With the addition of the $HASH-Download field in the .diff/Index we got
the size of the compressed patches for 'free', so if that information is
available we can use it for a more fitting calculation of the size
requirements of the patches vs. the complete file.
Note that this predicts too small a size in the transition case in which
the information isn't available for all patches, but figuring this out
would be a lot of code for practically nothing, as only one update can
ever be in such a transition phase.
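A hedged sketch of the comparison (hypothetical types and names): prefer
patching only if the compressed patches we would download – sizes from the
$HASH-Download fields where available, otherwise the uncompressed patch
size – sum up to less than the complete file.

   #include <vector>

   struct PatchInfo
   {
      unsigned long long DownloadSize;   // size of the compressed patch, 0 if unknown
      unsigned long long PatchSize;      // size of the uncompressed patch
   };

   static bool PatchingIsCheaper(std::vector<PatchInfo> const &Patches,
                                 unsigned long long CompleteFileSize)
   {
      unsigned long long Sum = 0;
      for (auto const &P : Patches)
         Sum += (P.DownloadSize != 0) ? P.DownloadSize : P.PatchSize;
      return Sum < CompleteFileSize;
   }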
|
|
Some (older) versions of bash seem to be allergic to a method named
"aptautotest_grep_^apt" (note the caret). It is unlikely that we are going
to write autotests for such commands, so we could just skip those, but
let's instead just use "normal" characters in the names and strip the rest,
as we already did with the (arguably more common) '-'.
|
|
This way it works more similarly to the compressor binaries, which we
can in this way relieve of their job in the test framework, avoiding the
need to add e.g. liblz4-tool to the test dependencies.
|
|
Downloading and storing are two different operations where different
compression types can be preferred. For downloading we provide the
choice via Acquire::CompressionTypes::Order, as there is a choice to
be made between download size and speed – and it is limited by what's
available in the repository.
Storage on the other hand has all compressions currently supported by
apt available, and to reduce the runtime of tools accessing these files
the compression type should be a low-cost format in terms of decompression.
apt traditionally stores its indexes uncompressed on disk, but has
options to keep them compressed. Now that apt downloads additional files
we also deal with files which simply can't be stored uncompressed as
they are just too big (like Contents for apt-file). Traditionally they
are downloaded in a low-cost format (gz) as repositories do not provide
other formats, but there might be even lower-cost formats, and for
download we could introduce higher-cost ones in the repositories.
Downloading an entire index potentially requires recompression to
another format, so an update potentially takes longer – but big files
are usually updated via pdiffs, which have to de- and re-compress on the
fly anyhow, so there is no extra time needed, and in general it seems
beneficial to invest the time at update to save time later on file
access.
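A small apt.conf sketch with illustrative values – Acquire::CompressionTypes::Order
is the download-side knob mentioned above; keeping indexes compressed on
disk can be requested e.g. via the long-standing Acquire::GzipIndexes switch:

   // Prefer smaller downloads first ...
   Acquire::CompressionTypes::Order { "xz"; "bz2"; "gz"; };
   // ... while keeping fetched indexes compressed on disk.
   Acquire::GzipIndexes "true";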
|
|
Less hardcoding should help while introducing new compressors.
Git-Dch: Ignore
|
|
There is no reason to enforce that the file we start the bootstrap with
is compressed with a compressor which is available online. This allows
us to change the on-disk format and also deals with repositories
adding/removing support for a specific compressor.
|