Age | Commit message | Author |
|
If we have a (e.g. locally built) deb file installed and try to install
it again, apt complained about this being a downgrade, but it wasn't, as
it is the very same version… apt was just confused into not merging the
versions together, which then looks like a downgrade.
The same-size assumption is usually good, but given that volatile files
are parsed last (even after the status file) the base assumption no
longer holds; it is easy to adapt without actually changing anything in
practice.
(cherry picked from commit e7edb2fef8370d54a4b8e5a01266e6eda81ef84e)
|
|
Traditionally all providers of something are protected, as apt can't
know which of them is actually providing the functionality for the user;
this ensures that we don't propose the removal of packages still in use,
but it of course also keeps packages around which could be removed.
That can cause the collection of multiple old providers until the
provided package is itself no longer needed (e.g. out-of-tree kernel
modules). We combat this by marking providers only from the newest
source package version so that old providers built by older versions of
the same source package can be garbage collected.
(cherry picked from commit a0ed43f7323b9d7976ed0ba8d437a42e24af9eaf)
|
|
Regression introduced in 8f858d560e3b7b475c623c4e242d1edce246025a.
Commands are probably better off always having output though, as the
fall-through to the generic proxy settings is likely not intended. As
documenting and implementing this more consistently is kind of a
regression though, it is split off into the next commit.
Closes: 827713
(cherry picked from commit cad1877559f3e1703c3fea4d081978e1b4bb4a0e)
|
|
Seen first in #826783, but as that buglog also shows leaked uncompressed
files we don't close it just yet.
(cherry picked from commit 6f35be91c9e86e463bca7df6eadf05412c7b732c)
|
|
This affects only compressors configured on the fly (rather than the
built-in ones, as those use a library).
(cherry picked from commit bdc42211700ef0f6f40e4ef3f362e52d684d70fb)
|
|
Setting the C++ locale via std::locale::global(std::locale("")); (which
would otherwise default to the classic C locale, aka: unaffected by
setlocale) affects the formatting of numeric types in IO streams. For
output meant for humans that is perfectly sensible, but it breaks our
many text interfaces used and parsed by us and others who do not expect
the numbers to be locale-formatted.
Closes: #825396
(cherry picked from commit b58e2c7c56b1416a343e81f9f80cb1f02c128e25)
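A minimal sketch (not APT code; stream and numbers are illustrative) of
the effect: once the user's locale is imbued on a stream, integer output
may gain grouping separators, while the classic "C" locale keeps plain digits.

    #include <iostream>
    #include <locale>

    int main() {
        std::locale::global(std::locale(""));      // user's environment locale
        std::cout.imbue(std::locale());            // stream now formats numbers per that locale
        std::cout << 1234567 << '\n';              // e.g. "1,234,567" under en_US.UTF-8
        std::cout.imbue(std::locale::classic());   // plain "C" locale
        std::cout << 1234567 << '\n';              // "1234567" – what our parsers expect
    }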
|
|
The report mentions "apt list --upgradable", but there are others which
have inconsistent behavior ranging from segfaulting to doing something
with the partial (and hence incomplete) data. We had a recent report
about sources.list (#818628), this one mentions preferences, and the
obvious next step is conf files… so the testcase is adapted to check for
all three, in both file and directory versions, and to run a bunch of
commands each time which should all have more or less the same behavior
in such a case (aka error out).
Closes: 824503
(cherry picked from commit fdf9eef4d96a18d0167708499c993e1174251e88)
|
|
Using Pkg.CandVersion() here is wrong as its implementation will return
a candidate based just on the default policy settings ignoring user
preferences and otherwise set candidates (aka: it sidesteps the
pkgDepCache).
This causes M-A:same libraries to be detected as screwed even though
they aren't, so that they end up being kept back.
Reported-By: Felipe Sateler on IRC
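A rough sketch of the distinction, assuming APT's pkgDepCache interface
(Cache being a pkgDepCache and Pkg a pkgCache::PkgIterator):

    // Policy-only candidate – ignores user preferences and explicitly set candidates:
    const char *PolicyCand = Pkg.CandVersion();

    // Candidate as the DepCache sees it – respects pins and SetCandidateVersion():
    pkgCache::VerIterator DepCacheCand = Cache[Pkg].CandidateVerIter(Cache);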
|
|
Always those silly mistakes. Do what I mean, not what I said…
Reported-By: Travis
Git-Dch: Ignore
(cherry picked from commit 737ce3135d332e3b6165ac1fac5c68e21ba1bdba)
|
|
Failures can happen and APT regardless will do a partial cache
update anyway. Because APT ensures that the list directory is
in a sane state, it makes sense to also call success hooks if
success was only partial - otherwise it loses sync with APT.
Most importantly, this causes the appstream cache to be empty,
see launchpad bug #1562733.
This is somewhat overly optimistic though: as soon as any repository
has nonexistent optional files, the missing optional files are also
treated as success, which means a single broken repository without an
InRelease file still runs the Success hooks, even though it really should
not.
(cherry picked from commit 35664152e47a1d4d712fd52e0f0a2dc8ed359d32)
|
|
Versions which are only available in dpkg/status aren't installable and
apt doesn't pick them as candidate for this reason – for the same reason
such packages shouldn't be sent to an external solver via EDSP. The
packages are pinned to -1, but if the solver has strict pinning disabled
it could end up picking this version anyhow – which is a request apt can
not satisfy.
Reported-By: Maximiliano Curia <maxy@debian.org> on IRC
(cherry picked from commit 33190fe3d3c200dcd417cd336f9db11f5f4408d5)
|
|
Broken in a4b8112b19763cbd2c12b81d55bc7d43a591d610.
If an item has a description which includes no space and is redirected
to another mirror, the code which wants to rewrite the description
expects a space in there, but can't find one, and the unguarded substr
call on the string fails with an exception being thrown…
Guard it properly and everything is fine.
(cherry picked from commit 84ac6edfabe1c92d67e8d441e04216ad33c89165)
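A minimal, generic sketch of the kind of guard needed before substr
(function and argument names are hypothetical, not the actual item code):

    #include <string>

    std::string RewriteDescription(std::string const &Desc, std::string const &NewHost) {
        std::string::size_type const Space = Desc.find(' ');
        if (Space == std::string::npos)
            return NewHost;                  // no space: avoid substr throwing
        // substr(pos) with pos > size() throws std::out_of_range, hence the guard above.
        return NewHost + Desc.substr(Space);
    }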
|
|
Daniel Kahn Gillmor highlights in the bugreport that security isn't
improved by having the user import additional keys – especially as
importing keys securely is hard.
The bugreport was initially about dropping the warning to a notice, but
given the previously mentioned observation and the fact that we weren't
printing a warning (or a notice) for expired or revoked keys providing a
signature, we drop it completely; the code to display a message if this
was the only key is in another path – and is considered critical.
Closes: 618445
(Backported from commit fb7b11ebb852fa255053ecab605bc9cfe9de0603)
|
|
Redesign of multivalue options in 463c8d801595ce5ac94d7c032264820be7434232
caused the parser to look for <multivalue>{Add,Remove} (no hyphen)
instead of the expected <multivalue>-{Add,Remove}.
(cherry picked from commit f5585106d61b381c9dcf8f1dd48c742dc68f6c81)
|
|
Tested via (newly) empty index files, but it also affects files dropped
from the repository or an otherwise changed repository config.
|
|
There is just no point in taking the time to acquire empty files –
especially as they will usually be tiny non-empty compressed files anyway.
|
|
With the previous fix for file applied we can again hit repositories
which contain uncompressed empty files, which hasn't been accounted for
anymore since the introduction of the central store: method, as we forbid
empty compressed files.
|
|
A silly off-by-one error broke the stripping of the extension to check
for the uncompressed filename in an attempt to support all compressions
in commit a09f6eb8fc67cd2d836019f448f18580396185e5.
Fixing this also highlights mistakes in the handling of the Alt-Filename
in libapt which would cause apt to remove the file from the repository
(if root has the needed rights – aka the disk isn't read-only or similar).
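A generic illustration of the pitfall (hypothetical helper, not the
libapt code): stripping a compression extension must also remove the dot,
no more and no less.

    #include <string>

    // Strip e.g. "gz" (plus the dot) to get the uncompressed filename.
    std::string StripExtension(std::string const &File, std::string const &Ext) {
        std::string const Suffix = "." + Ext;
        if (File.size() <= Suffix.size() ||
            File.compare(File.size() - Suffix.size(), Suffix.size(), Suffix) != 0)
            return File;                                   // not compressed with this extension
        // Being off by one here either keeps the dot or eats a character of the name.
        return File.substr(0, File.size() - Suffix.size());
    }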
|
|
With the previous commit we track the state of transactions, so we can
now use our knowledge to avoid processing data for a transaction which
was already closed (via an abort in this case).
This is needed as multiple independent processes are interacting here,
so there isn't a simple immediate full-engine stop, and it would also be
bad to teach each and every item how to check whether its manager has a
failed subordinate and what to do in that case.
The pdiff case (potentially) deals with many items during its lifetime:
e.g. a hashsum mismatch in another file can abort the transaction the
file we try to patch via pdiff belongs to. This causes some of the items
(which are already done) to be aborted with it, but items still in the
process of acquisition continue processing and will later try to use all
the items together, failing in strange ways as cleanup already happened.
The chosen solution is to dry up the communication channels instead by
ignoring new requests for data acquisition, canceling requests which are
not assigned to a queue and not calling Done/Failed on items anymore.
This means that e.g. already started or pending (e.g. pipelined)
downloads aren't stopped and continue as normal for now, but they remain
in partial/ and aren't processed further so the next update command will
pick them up and put them to good use while the current process fails
updating (for this transaction group) in an orderly fashion.
Closes: 817240
Thanks: Barr Detwix & Vincent Lefevre for log files
|
|
Introduces APT::Hashes::<NAME> with entries Untrusted and Weak
which can be set to true to cause the hash to be treated as
untrusted and/or weak.
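An illustration of the syntax described above (apt.conf style, SHA1
chosen purely as an example):

    APT::Hashes::SHA1::Untrusted "true";
    APT::Hashes::SHA1::Weak "true";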
|
|
Use msgtest and testsuccess with a function instead of failing
with a simple exit 1. This looks nicer.
Gbp-Dch: ignore
|
|
This gets rid of byte-range requests and 416 responses.
Gbp-Dch: ignore
|
|
This should make the test less flaky, as with a small file,
we might have already received all the data before trying
to apply rate limits which is a constant source of failure
on the i386 Ubuntu autopkgtest.
|
|
If the package is marked for removal, keep it marked for
removal and do not mark it for keep. If we mark it for keep,
we somehow later get to a different stage where it is marked
for unpack instead of removal.
In the example in the bug report, we would get a:
SmartUnPack maas-region-controller-min:amd64 (replace version 2.0.0~alpha3+bzr4810-0ubuntu1 with Segmentation fault
maas-region-controller-min:amd64 was marked for removal, but
we changed it to keep and somehow it thinks that this is to
be replaced now instead of removed (probably because the
InstallVer != CandidateVer [with InstallVer = 0]).
This fixes a regression introduced in release 1.2.7, commit:
0390edd5452b081f8efcf412f96d535a1d959457
Reported-by: LaMont Jones on IRC
LP: #1562402
|
|
|
|
Our own gpgv method can declare a digest algorithm as untrusted and
handles these as worthless signatures. If gpgv comes with built-in
untrusted digests (which are called weak in official terminology), as it
e.g. does for MD5 in recent versions, we should handle them in the same
way.
To check this we use the most uncommon still fully trusted hash as a
configurable one via a hidden config option to toggle through all of
the three states a hash can be in.
|
|
Using erase(pos) is invalid in our case here as pos must be a valid and
dereferenceable iterator, which isn't the case for an end-iterator (like
if we had no good signature).
The problem runs deeper still though, as VALIDSIG is a keyid while
GOODSIG is just a longid, so comparing them will always fail.
Closes: 818910
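A generic sketch of the needed guard (container and names are
illustrative, not the gpgv method's actual data structures):

    #include <algorithm>
    #include <string>
    #include <vector>

    void RemoveSig(std::vector<std::string> &GoodSigs, std::string const &KeyID) {
        auto const Pos = std::find(GoodSigs.begin(), GoodSigs.end(), KeyID);
        if (Pos != GoodSigs.end())   // erase(end()) is undefined behaviour, so guard it
            GoodSigs.erase(Pos);
    }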
|
|
On launchpad #1558484 a user reports that parsing of @ in the
authentication tokens of sources.list isn't working in an older (precise)
version. It isn't the recommended way of specifying passwords and co
(auth.conf is), but we can at least test for regressions (and in this
case test at all… who was that "clever" boy disabling a test with
exit……… oh, nevermind).
Git-Dch: Ignore
|
|
Otherwise, things will just start failing later down the stack,
because (a) the lazy getters do not check if building was successful
and (b) any further getter call would return the invalid object
anyway.
Also initialize VS in pkgCache to nullptr by default.
Closes: #818628
|
|
There is no point in resolving all addresses to their names; this
just seriously slows the setup phase down. So just pass -n to not
resolve names anymore.
Gbp-Dch: ignore
|
|
This should make the test less flaky and hopefully fix the failure
on Ubuntu's armhf CI nodes.
Gbp-Dch: ignore
|
|
The test is a bit flaky. In order to make it less flaky, reduce the
speed in each run. To compensate for issues, start with a higher speed
level. Also increase the number of runs to 10.
Furthermore, http gets the same multiple-run loop, and the log files are
changed to indicate the protocol being tested, as it's not obvious which
one fails if it fails in quiet mode.
Gbp-Dch: ignore
|
|
The epoch stripping in this code has been done since day one, but in
other places where we show a version epochs are not stripped. If epochs
are present in packages they tend to be an important piece of information
which we can't just drop, and especially can't drop "sometimes", as that
confuses users and tools alike – so even if removing code in use for
(close to) 18 years feels wrong, it is probably the right choice for
consistency.
Closes: 818162
|
|
This makes it easier to understand what really is an error
and what is not.
|
|
For the non-pdiff case, we can have accurate progress reporting because
after fetching the {,In}Release files we know how many IndexFiles will be
fetched and what size they have.
Therefore init the filesize early (in pkgAcqIndex::Init) and ensure that
Acquire::Pulse() looks at already downloaded bits when calculating the
progress.
Also improve the debug output of Debug::acquire::progress.
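A toy sketch of the idea (figures and names are hypothetical, not APT's
actual Pulse code): bytes already present in partial files count towards
the overall percentage.

    // TotalBytes is known up front from the sizes listed in the Release file.
    unsigned long long const TotalBytes   = 50 * 1024 * 1024;
    unsigned long long const FetchedBytes = 10 * 1024 * 1024;  // downloaded in this run
    unsigned long long const PartialBytes =  5 * 1024 * 1024;  // already on disk in partial/
    double const Percent = 100.0 * (FetchedBytes + PartialBytes) / TotalBytes;  // 30, not 20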
|
|
Git-Dch: Ignore
|
|
The problemresolver will set the candidate version for pkg P back
to the current version if it encounters an impossible-to-satisfy
critical dependency on P. However it did not set the State of
the package back as well, which led to a situation where P is
in neither Keep, Install, Upgrade nor Delete state.
Note that this can not be tested via the traditional sh based
framework. I added a python-apt based test for this.
LP: #1550741
[jak@debian.org: Make the test not fail if apt_pkg cannot be
imported]
|
|
This was wrong and caused some issues because apt-key invoked
host apt-config with our library.
Gbp-Dch: ignore
|
|
This makes the test suite safe if we ever need to reject SHA1
signatures in an update.
|
|
The structure we parse the data into has a dedicated size field, but it
tends to be easier to handle it as a (very weak) checksum.
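A minimal illustration of using the size as a (very weak) checksum next
to the real hashes (names are illustrative, not the actual structure):

    #include <sys/stat.h>

    // Returns true if the file at Path has the size the index promised.
    static bool SizeMatches(char const *Path, unsigned long long const ExpectedSize) {
        struct stat Buf;
        if (stat(Path, &Buf) != 0)
            return false;
        return static_cast<unsigned long long>(Buf.st_size) == ExpectedSize;
    }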
|
|
The URI describing an item can change via mirrors/redirectors, which
causes the .diff/Index files to get the wrong names in storage.
Git-Dch: Ignore
|
|
All other interactions with std::cout are flushed directly; just in the
stop case we hadn't done it – no problem except if there is still output
coming after apt is done, like in the case of a post-invoke script
producing output.
Closes: 793672
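A minimal sketch of the pattern (generic, not the actual progress code):
flush explicitly so that output produced later by child processes cannot
appear in front of ours.

    #include <iostream>

    void StopProgress(char const *Msg) {
        // Without the explicit flush the message can sit in the buffer and show
        // up after output from e.g. a post-invoke script spawned afterwards.
        std::cout << '\r' << Msg << std::flush;
    }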
|
|
Now that we ignore SHA1-only files it makes sense to also require the
provision of hashes for the compressed patches, as this was introduced in
the same patchset as support for non-SHA1 hashes in the file itself in
dak, and adding support in other archive creators (if they support pdiffs
at all) will likely be in the same batch.
The reason for the change itself is simple: If you are 'scared' enough
about the security of SHA1, you shouldn't uncompress a file you haven't
verified at all – after all, it could be exploiting a bug or a zip bomb.
|
|
Given that we refuse to use SHA1-only .diff/Indexes there is no point in
shipping and running code which pretends to check support for it, which –
given that all these tests are run 3 times – eats a noticeable amount of
time.
Git-Dch: Ignore
|
|
Ensure that .diff/Index files that only contain SHA1 values and no
SHA2 values are not used.
|
|
SHA1 is not reasonably secure anymore, so we should not consider it
usable anymore. The test suite is adjusted to account for this.
|
|
Using amd64 broke the test case on non-amd64 architectures. Query
the native architecture from dpkg and use that instead.
The definition of NATIVE is copied from the test
test-architecture-specification-parsing.
|
|
This effectively merges branch 'typofixes-vlajos-20150807' of github.com:vlajos/apt
with the following commit:
commit 13cacb3e2e2352ba701e769fc889e3344fabbf7e
Author: Veres Lajos <vlajos@gmail.com>
Date: Sun Aug 9 00:12:53 2015 +0100
typofix - https://github.com/vlajos/misspell_fixer
It has been rebased for a better commit message.
|
|
If a single pdiff fails, we have to fail the entire patching endeavour
and fall back to getting the complete file instead. That is easy in
serverside merged pdiffs as we get them one by one. For clientside we
get them all at once though, which means that a failure in one has to
stop the entire pipeline, which works as expected (as proven by the
bugreporters as they don't even notice it happening). The problem is
just that the first failing pdiff will do the cleanup, so another pdiff
which happens to be successfully acquired after we processed the failure
doesn't find the file it is supposed to use as a basename anymore, so
the patch is renamed to what should be the unique extension and moved
into the current working directory. Processing is then stopped as the
patch realizes that it isn't the last one which completed downloading.
On the plus side this means this is neither us using a bad temporary
location nor a security problem. It "just" unconditionally overwrites
files in your current working directory (if you happen to have them
named like a pdiff patch – a bit unlikely perhaps) and so drops files
there which are never used again.
I guess this was introduced in 4e3c5633b1e74b4f58b95f339cfbbf4cbf21ab3e
for real, as I made the need for the existence of the base file rather
explicit, but the potential has lingered in the code for far longer.
Closes: #816837
|
|
Fixed in f7bd44bae0d7cb7f9838490b5eece075da83899e already, but the
commit misses the Closes tag and while we are at it we can add a simple
regression test and micro-optimize it a bit.
Thanks: James McCoy for the suggestion.
Closes: 816691
|