Age | Commit message | Author |
|
Daniel Kahn Gillmor highlights in the bug report that security isn't
improved by having the user import additional keys – especially as
importing keys securely is hard.
The bug report initially asked for the warning to be dropped to a
notice, but given the observation above and the fact that we
weren't printing a warning (or a notice) for expired or revoked keys
providing a signature, we drop it completely; the code to display a
message if this was the only key lives in another path – and is
considered critical.
Closes: 618445
|
|
This basically introduces ~33 flags in the output, but a package can
have only ~11 of them displayed at the same time. There is quite a bit
of duplication as well (an uninstalled package is by definition a
new install if it's getting installed), but as this is debug output we
are better off showing them all in case one of them isn't set the way
it is supposed to be.
Git-Dch: Ignore
|
|
Redesign of multivalue options in 463c8d801595ce5ac94d7c032264820be7434232
caused the parser to look for <multivalue>{Add,Remove} (no hyphen)
instead of the expected <multivalue>-{Add,Remove}.
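As a rough illustration of the intended suffix handling (a self-contained
sketch in the spirit of the fix; the names are made up, this is not the
actual parser code):

    #include <string>

    static bool Endswith(std::string const &s, std::string const &end)
    {
       return s.size() >= end.size() &&
          s.compare(s.size() - end.size(), end.size(), end) == 0;
    }

    // maps e.g. "APT::Some-List-Remove" back to its base option
    // "APT::Some-List"; looking for "Remove" without the hyphen
    // would never match
    static std::string StripListSuffix(std::string const &Name)
    {
       for (char const * const suffix : {"-Add", "-Remove"})
       {
          std::string const s(suffix);
          if (Endswith(Name, s))
             return Name.substr(0, Name.size() - s.size());
       }
       return Name;
    }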
|
|
The old prettyprinters only have access to the struct they pretty print,
which usually isn't enough as for a package we also want to know a bit
of state information, like which version is the candidate.
We therefore need to pull the DepCache into context and hence use a
temporary struct which is printed instead of the iterator itself.
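A minimal sketch of the pattern (illustrative only, not the actual
libapt code):

    #include <apt-pkg/depcache.h>
    #include <ostream>

    struct PrettyPkg                        // temporary struct to be printed
    {
       pkgDepCache * const DepCache;        // carries the state information
       pkgCache::PkgIterator const Pkg;     // the iterator itself
       PrettyPkg(pkgDepCache * const cache, pkgCache::PkgIterator const &pkg)
          : DepCache(cache), Pkg(pkg) {}
    };

    std::ostream &operator<<(std::ostream &os, PrettyPkg const &p)
    {
       os << p.Pkg.FullName(false);
       auto const cand = (*p.DepCache)[p.Pkg].CandidateVerIter(*p.DepCache);
       if (cand.end() == false)             // now we can show the candidate
          os << " (candidate: " << cand.VerStr() << ")";
       return os;
    }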
|
|
This method does not return the 'current' candidate of the DepCache,
which is what one would most expect; instead it returns the version
which would be the candidate in a default-only policy setting – aka
ignoring apt_preferences settings and co.
|
|
Using Pkg.CandVersion() here is wrong as its implementation returns
a candidate based just on the default policy settings, ignoring user
preferences and otherwise set candidates (aka: it sidesteps the
pkgDepCache).
This causes M-A:same libraries to be detected as screwed even though
they aren't, so that they end up being kept back.
Reported-By: Felipe Sateler on IRC
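The distinction, roughly (a sketch assuming a pkgDepCache named Cache;
not the fixed code itself):

    void example(pkgDepCache &Cache, pkgCache::PkgIterator const &Pkg)
    {
       // based only on default policy – ignores user preferences and
       // explicitly set candidates:
       char const * const policyCand = Pkg.CandVersion();
       // the candidate the rest of apt actually acts on:
       pkgCache::VerIterator const realCand = Cache[Pkg].CandidateVerIter(Cache);
       (void)policyCand; (void)realCand;
    }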
|
|
If the file is in a failed state there is no point in trying to flush
out the buffers as the file is to be discarded anyhow & all this
flushing is likely to produce is additional error messages.
Git-Dch: Ignore
|
|
Gbp-Dch: ignore
|
|
Bye bye, old friend. You're in one Ubuntu LTS release for compat
testing; now we do not need you anymore.
|
|
Broken in the previous commit (69cea1ef2cfda3c4da79fd756a8edaf2be26998e).
Adding a test and a comment to avoid future embarrassment.
Git-Dch: Ignore
Reported-By: Julian Andres Klode on IRC
|
|
It would previously return a pin of 0, which is an invalid value, but
the intent is that versions which are only in the dpkg/status file can't
be selected for installation (= can't be a candidate), which is what a
negative pin assures.
This helps with the communication to EDSP solvers as they neither know
about the rc-state (yet) nor that they shouldn't choose this version.
Ideally they wouldn't be told about such versions at all as there is
nothing to be solved here, but we will get there eventually.
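A sketch of the intended semantics (illustrative, assuming a helper like
pkgPolicy::GetPriority; not the actual policy code):

    bool CanBeCandidate(pkgPolicy &Plcy, pkgCache::VerIterator const &Ver)
    {
       // a negative pin marks versions which can never be selected for
       // installation, e.g. an rc-state entry present only in dpkg/status;
       // a pin of 0 is invalid and expresses no such guarantee
       return Plcy.GetPriority(Ver) >= 0;
    }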
|
|
|
|
Redirection services like httpredir.debian.org tend to use a set of
mirrors from which they pick a mirror at "random" for each requested
file, which is usually beneficial for the download of debs, but for the
index files this can quickly cause problems (aka hashsum mismatches) if
the two (or more) mirrors involved are only slightly out-of-sync.
This commit "resolves" the issue by using the mirror we ended up using
to get the (signed) Release file directly to get the index files
belonging to this Release file, instead of asking the redirection
service, which eliminates the risk of hitting out-of-sync mirrors.
As an obvious downside the redirection service can't serve partial
mirrors anymore for indexes, and the download of indexes listed in the
same Release file can't be done in parallel (from different mirrors).
This does not affect the download of non-index files like deb files as
out-of-sync mirrors aren't a huge problem there, so the parallel
download outweighs a potential 404 error (also because a 404 causes no
erroneous downloads, while hashsum mismatches download the entire file
before finding out that it was pointless).
The rationale for this is that indexes are relative to the Release file.
If we were talking about an HTML page including images, such behaviour
would be obvious and intended – not doing it means in the best case
a bunch of "useless" requests which will all be answered with a
redirect.
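The idea in a nutshell (a self-contained sketch, not the acquire code):

    #include <string>

    // after redirects we know the mirror the Release file really came from,
    // e.g. "http://mirror.example.org/debian/dists/unstable/InRelease"
    std::string IndexURIFromRelease(std::string const &ReleaseURI,
                                    std::string const &IndexPath)
    {
       // strip the filename, keeping ".../dists/unstable/"
       std::string const base = ReleaseURI.substr(0, ReleaseURI.rfind('/') + 1);
       // fetch e.g. "main/binary-amd64/Packages.xz" from the same mirror
       return base + IndexPath;
    }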
|
|
They are the small brothers of the hashsum mismatch, so they deserve a
similar treatment even though, for architectural reasons, we have not
as much to display as for hashsum mismatches for now.
|
|
Users tend to report these errors with just this error message… not very
actionable, and it is hard to figure out if this is a temporary or
'permanent' mirror-sync issue or even the occasional apt bug.
Showing the involved hashsums and modification times should help in
triaging these kinds of bugs – and eventually we will have fewer of them
via by-hash.
The subheaders aren't marked for translation for now as they are
technical gibberish and probably easier to deal with if not translated.
Our iconic "Hash Sum mismatch" remains translated, after all.
These additions were proposed in #817240 by Peter Palfrader.
|
|
|
|
Calling the (non-existent) reporter multiple times with different codes
for the same error (e.g. hashsum) is a bit strange.
It also doesn't need to be a public API. Ideally all of this would look
and behave slightly differently, but we will worry about that at the
time this is actually (planned to be) used somewhere…
Git-Dch: Ignore
|
|
Queues feeding workers like rred are created in a random pattern to get
a few of them to run in parallel – but if we already have an idling queue
we don't need to assign the job to a (potentially new) random queue: that
saves us the (arguably small) overhead of starting up a new queue,
avoids adding jobs to an already busy queue while others idle, and as
a bonus reduces the size of debug logs a bit.
We also keep starting new queues now until we reach our limit before
we assign work at random to them, which should give us a more effective
utilisation overall compared to potentially adding work to busy queues
while we haven't reached our queue limit yet.
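A sketch of the assignment policy described above (illustrative data
structures, not the real pkgAcquire code):

    #include <cstdlib>
    #include <vector>

    struct Queue { bool Idle; };

    Queue *PickQueue(std::vector<Queue> &Queues, size_t const QueueLimit)
    {
       for (auto &Q : Queues)          // 1. reuse an idling queue if we have one
          if (Q.Idle)
             return &Q;
       if (Queues.size() < QueueLimit) // 2. otherwise start a new queue …
       {
          Queues.push_back(Queue{true});
          return &Queues.back();
       }                               // 3. … and only at the limit pick at random
       return &Queues[std::rand() % Queues.size()];
    }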
|
|
Tested via (newly) empty index files, but this also affects files dropped
from the repository or an otherwise changed repository config.
|
|
There is just no point in taking the time to acquire empty files –
especially as they will usually be tiny, non-empty compressed files.
|
|
The stripping of the extension to check for the uncompressed filename
was broken by a silly off-by-one error in an attempt to support all
compressions in commit a09f6eb8fc67cd2d836019f448f18580396185e5.
Fixing this also highlights mistakes in the handling of the Alt-Filename
in libapt which would cause apt to remove the file from the repository
(if root has the needed rights – aka the disk isn't read-only or similar).
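The gist of such an off-by-one (a sketch, assuming file really ends in
"." + ext): stripping ".xz" from "Packages.xz" must drop the dot as
well, i.e. ext.length() + 1 characters:

    #include <string>

    std::string StripExtension(std::string const &file, std::string const &ext)
    {
       // file.length() - ext.length() would leave the trailing dot behind,
       // so the check for the uncompressed filename would never match
       return file.substr(0, file.length() - (ext.length() + 1));
    }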
|
|
Regression introduced in commit 590f1923121815b36ef889033c1c416a23cbe9a2
(2011!) causing apt to not check if Pre-Depends are satisfied before
calling --configure. This managed to hide so perfectly well for years as
Pre-Depends aren't that common, apt prefers upgrading these packages
first, and the check for satisfaction already happens in SmartUnpack, so
there is only a small window of opportunity to break a pre-dependency
relation (usually with an unpack).
Verified by log checking with the two status files provided in the
buglog. I would have liked to write a test, but I wasn't able to reach
the needed complexity to get apt to fail – but the change is small and
reasonable, so what could possibly go wrong™, right?
LP: #1569099
|
|
It is handy to be able to point apt at reading a compressed dpkg/status
file in debugging cases, which worked pre-1.1 but broke somewhere down
the line in the massive refactoring. This restores the behavior in a
central place for all real-file index files instead of just for the
status file.
(This has no effect on index files acquired from an archive – those are
handled by different classes and support compressed files just fine.)
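The central mechanism is roughly this (a sketch; FileFd::Extension
selects the decompressor from the filename extension):

    #include <apt-pkg/fileutl.h>

    bool OpenMaybeCompressed(FileFd &Fd, std::string const &File)
    {
       // works for "status" as well as for "status.gz" and friends
       return Fd.Open(File, FileFd::ReadOnly, FileFd::Extension);
    }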
|
|
There is a good chance that the attempt will fail, but if a user
mentions certain packages explicitly on the commandline there is a
chance that the system is broken in a way which is resolved by
upgrading more packages than just the ones mentioned.
This limitation did not affect external resolvers.
|
|
With the previous commit we track the state of transactions, so we can
now use our knowledge to avoid processing data for a transaction which
was already closed (via an abort in this case).
This is needed as multiple independent processes are interacting here,
so there isn't a simple immediate full-engine stop, and it would also be
bad to teach each and every item how to check if its manager has failed
and what to do in that case.
The pdiff case (potentially) deals with many items during its lifetime:
e.g. a hashsum mismatch in another file can abort the transaction to
which the file we try to patch via pdiff belongs. This causes some of
the items (which are already done) to be aborted with it, but items
still in the process of acquisition continue processing and will later
try to use all the items together, failing in strange ways as the
cleanup has already happened.
The chosen solution is to dry up the communication channels instead:
ignore new requests for data acquisition, cancel requests which are not
assigned to a queue, and no longer call Done/Failed on items.
This means that e.g. already started or pending (e.g. pipelined)
downloads aren't stopped and continue as normal for now, but they remain
in partial/ and aren't processed further, so the next update command
will pick them up and put them to good use while the current process
fails updating (for this transaction group) in an orderly fashion.
Closes: 817240
Thanks: Barr Detwix & Vincent Lefevre for log files
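A rough sketch of 'drying up the channels' (state and function names are
made up for illustration):

    enum class TransactionState { Started, Committed, Aborted };
    struct Transaction { TransactionState State = TransactionState::Started; };

    void DeliverResult(Transaction const &T /*, item, data … */)
    {
       if (T.State == TransactionState::Aborted)
          return; // closed transaction: drop the result, keep files in partial/
       // … otherwise the normal Done()/Failed() processing for the item …
    }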
|
|
We want to keep track of the state of a transaction overall to base
future decisions on it, but as a prerequisite we have to make sure
that a transaction isn't committed twice (which happened if the download
of InRelease failed and Release took over).
It also happened to create empty commits after a transaction was already
aborted in cases in which the Release files were rejected.
This doesn't affect security at the moment, but to ensure it doesn't
happen again and can never turn bad, a bunch of fatal error messages are
added to make regressions on this front visible.
|
|
Git-Dch: Ignore
Reported-By: gcc -fsanitize=address
|
|
Hardly noticeable, but given that we have the option to easily enable
it, let's enable it, as every newline in the message is written
individually by the code.
|
|
Some methods had it missing, some used the keyword directly, which isn't
a problem as it is a cc file, but for consistency let's stick to our
macro for now.
Git-Dch: Ignore
|
|
Introduces APT::Hashes::<NAME> with entries Untrusted and Weak
which can be set to true to cause the hash to be treated as
untrusted and/or weak.
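Reading such an option could look like this (a sketch; SHA1 is just an
example name for the <NAME> placeholder):

    #include <apt-pkg/configuration.h>

    bool HashIsWeak(std::string const &name) // e.g. name = "SHA1"
    {
       return _config->FindB(("APT::Hashes::" + name + "::Weak").c_str(), false);
    }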
|
|
If the package is marked for removal, keep it marked for
removal and do not mark it for keep. If we mark it for keep,
we somehow later get to a different stage where it is marked
for unpack instead of removal.
In the example in the bug report, we would get a:
SmartUnPack maas-region-controller-min:amd64 (replace version 2.0.0~alpha3+bzr4810-0ubuntu1 with Segmentation fault
maas-region-controller-min:amd64 was marked for removal, but
we changed it to keep and somehow it thinks that this is to
be replaced now instead of removed (probably because the
InstallVer != CandidateVer [with InstallVer = 0]).
This fixes a regression introduced in release 1.2.7, commit:
0390edd5452b081f8efcf412f96d535a1d959457
Reported-by: LaMont Jones on IRC
LP: #1562402
|
|
|
|
This prevents templates using StringView from being exported, such as
std::vector<StringView*>::emplace_back().
Gbp-Dch: ignore
|
|
There is really no need to have the same code three times.
Git-Dch: Ignore
|
|
Otherwise, things will just start failing later down the stack,
because (a) the lazy getters do not check if building was successful
and (b) any further getter call would return the invalid object
anyway.
Also initialize VS in pkgCache to nullptr by default.
Closes: #818628
|
|
The epoch stripping in this code has been done since day one, but in
other places where we show a version, epochs are not stripped. If epochs
are present in packages they tend to be an important piece of
information which we can't just drop, and especially can't drop
"sometimes", as that confuses users and tools alike – so even if
removing code in use for (close to) 18 years feels wrong, it is probably
the right choice for consistency.
Closes: 818162
|
|
This makes the new GPG related warnings much nicer to read,
for example, the second one here replaces the first one:
W: gpgv:/var/lib/apt/lists/example.com_dists_stable_InRelease: Weak ...
W: http://example.com/dists/stable/InRelease: Weak ...
|
|
This makes it easier to understand what really is an error
and what is not.
|
|
For the non-pdiff case, we can have accurate progress
reporting because after fetching the {,In}Release files we know
how many IndexFiles will be fetched and what size they have.
Therefore init the filesize early (in pkgAcqIndex::Init) and
ensure that Acquire::Pulse() looks at already downloaded
bits when calculating the progress.
Also improve the debug output of Debug::acquire::progress.
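Roughly how the percentage then falls out in Pulse() (a sketch; the
parameters mirror the pkgAcquireStatus counters):

    // item counts are added on both sides so already-fetched files and
    // size-less items still move the bar
    double Progress(unsigned long long const CurrentBytes,
                    unsigned long long const TotalBytes,
                    unsigned long const CurrentItems,
                    unsigned long const TotalItems)
    {
       return 100.0 * (CurrentBytes + CurrentItems) / (TotalBytes + TotalItems);
    }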
|
|
The problemresolver will set the candidate version for pkg P back
to the current version if it encounters an impossible-to-satisfy
critical dependency on P. However it did not set the State of
the package back as well, which led to a situation where P is
in neither Keep, Install, Upgrade nor Delete state.
Note that this can not be tested via the traditional sh-based
framework. I added a python-apt based test for this.
LP: #1550741
[jak@debian.org: Make the test not fail if apt_pkg cannot be
imported]
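The shape of the fix, roughly (an illustrative sketch, not the resolver
code itself):

    #include <apt-pkg/depcache.h>

    void RevertToCurrent(pkgDepCache &Cache, pkgCache::PkgIterator const &Pkg)
    {
       Cache.SetCandidateVersion(Pkg.CurrentVer()); // back to the current version
       Cache.MarkKeep(Pkg, false, false);           // … and into a defined state
    }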
|
|
This can be used by workers to send warnings to the main
program. The messages will be passed to _error->Warning()
by APT with the URI prepended.
We are not going to make that really public now, as the
interface might change a bit.
|
|
The structure we parse the data into has a dedicated size field, but it
tends to be easier to handle it as a (very weak) checksum.
|
|
The URI describing an item can change via mirrors/redirectors, which
causes the .diff/Index files to get the wrong names in storage.
Git-Dch: Ignore
|
|
The (unlikely) waitpid failure case should fall through the code just
like the other failures (and successes) instead of taking a shortcut
avoiding all the cleanup (progress) and finishing touches (log, state).
This also delays the cleanup of the progress until apt is really done
with everything and "just" has the post-invokes left to do, so the
period between 'apt looks finished as it stopped the progress' and 'apt
is really finished as I have the shell prompt back' is shorter, even if
there is no progress reported anymore and the bar lingers at 100%…
Ideally even the post-invokes would be covered by progress, but they
can have their own output and dealing with that could be hard.
Git-Dch: Ignore
|
|
All other interactions with std::cout are flushed directly; just in the
stop case we hadn't done it – no problem, except if there is still
output coming after apt is done, like in the case of a post-invoke
script producing output.
Closes: 793672
|
|
Now that we ignore SHA1-only files it makes sense to also require the
provision of hashes for the compressed patches, as this was introduced
in dak in the same patchset as support for non-SHA1 hashes in the file
itself, and adding support in other archive creators (if they support
pdiffs at all) will likely be in the same batch.
The reason for the change itself is simple: if you are 'scared' enough
about the security of SHA1, you shouldn't uncompress a file you haven't
verified at all – after all, it could be exploiting a bug or be a zip
bomb.
|
|
SHA1 is not reasonably secure anymore, so we should not consider it
usable anymore. The test suite is adjusted to account for this.
|
|
Dynamically allocate KillList in order to avoid an overflow when
more than 100 elements would be written to it.
This happened while playing around with the status file from
Bug#701069 on a modern system.
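The shape of the fix (a sketch): a growing container instead of a
fixed-size array which silently overflows past 100 entries:

    #include <vector>

    std::vector<const char *> KillList;
    // …
    // KillList.push_back(Pkg.Name()); // grows as needed, no hard limit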
|
|
This effectively merges branch 'typofixes-vlajos-20150807' of github.com:vlajos/apt
with the following commit:
commit 13cacb3e2e2352ba701e769fc889e3344fabbf7e
Author: Veres Lajos <vlajos@gmail.com>
Date: Sun Aug 9 00:12:53 2015 +0100
typofix - https://github.com/vlajos/misspell_fixer
It has been rebased for a better commit message.
|
|
Mysteriously segfaults only on i386 for me, but at least one reporter
had the same behavior and it makes sense that this is the problem, as
the parsing of Source: was fixed in 1.2.2 – before that, the
not-remapped group was not used.
We don't use our usual Dynamic<> trick here as we don't have it in the
parser. It's a bit of a layer violation to do this parsing here, but
it's how it always was…
Until next time with this lovely kind of problem.
Closes: 812251
Thanks: Francesco Poli and Marc Haber for testdata.
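For reference, the 'Dynamic<>' trick mentioned above looks roughly like
this where it is available (a sketch of the pattern, not this parser's
code):

    #include <apt-pkg/pkgcachegen.h>

    void example(pkgCache &Cache, std::string const &Name)
    {
       pkgCache::GrpIterator Grp = Cache.FindGrp(Name);
       // re-seats Grp if adding data triggers a cache remap:
       pkgCacheGenerator::Dynamic<pkgCache::GrpIterator> DynGrp(Grp);
       // … code which may remap the cache can now use Grp safely …
    }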
|