|
The file we read will always be a Release file as the clearsigned signature is
stripped earlier in this method, so this check just wastes CPU.
Dropping it also removes the risk that this could ever be treated as part of a
valid section, even if I can't imagine how that would be valid.
Git-Dch: Ignore
|
|
We start our quest by using the version of a package applying to a
specific pin, but that version could very well be below the currently
installed version, which causes APT to suggest a downgrade even though it
is advertised that it never does this for pins below 1000.
It is of course questionable what use a specific pin on a package has
if a newer version is already installed, but reacting with the
suggestion of a downgrade is really not appropriate (even if it is kinda
likely that this is actually the intent of the user – it could just as
well be an outdated pin), and as pinning is complicated enough we should
at least do what is described in the manpage.
So we look out for the specific pin, and if we haven't seen it by the
time we see the installed version, we ignore the specific pin.
Closes: 543966
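A hedged illustration of the fixed situation (package name and versions made up):
with foo 2.0 installed, a preferences entry like
Package: foo
Pin: version 1.5
Pin-Priority: 990
previously made APT suggest a downgrade of foo to 1.5 despite the priority being
below 1000; with this change the specific pin is ignored because its version lies
below the installed one.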
|
|
|
|
|
|
The rationale from the buglog:
> The problem here is that the Priority field in one of the Packages files
> is incorrect due to a mishap with reprepro configuration, […] the
> amd64 version is Priority: standard but the arm64 version is Priority:
> optional (and has a stray "optional: interpreters" field).
> […]
> However, Priority is a rather weak property of a package because it's
> typically applied via overrides, and it's easy for maintainers of
> third-party repositories to misconfigure them so that overrides aren't
> applied correctly. It shouldn't be ranked ahead of choosing packages
> from the native architecture. In this case, I have no user-mode
> emulation for arm64 set up, so choosing m4:arm64 simply won't work.
This effectively makes the priority the least interesting data point in
choosing a provider, which is in line with the other checks we have
already ordered above priority in the past, and also has a certain appeal
through the soft irony it provides.
Closes: #718482
|
|
|
|
|
|
|
|
|
|
Git-Dch: Ignore
|
|
On CD-ROMs Translation-* files are only included in compressed form in
the Release file. This used to work while we had no record of
Translation-* files in the Release file at all, as APT would just have
guessed the (compressed) filename and accepted it (unchecked), but now
that it checks for the presence of entries and, if it finds records,
expects the uncompressed file to be verifiable.
This commit relaxes that requirement again to fix the regression.
We are still secure "enough" as we can validate the compressed file we have
downloaded, so we don't lose anything by not requiring a hashsum for
the uncompressed files to double-check them.
Closes: 717665
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
StackPost
|
|
|
|
|
|
|
|
The code incorrectly skips printing the current version information
if the package has no current version for APT (but has one for dpkg, as
is the case for packages which are removed but not purged), caused by an
unintended "else if" rather than an "if".
Closes: 717006
|
|
Multi-Arch: same packages can be co-installed, but need to have the same
version for all installed instances (aka "siblings"). Otherwise the
unsynced versions will fight against each other, and the auto-install as
well as the problem resolver will later have to decide between holding the
packages or removing one of the siblings (usually a foreign one), taking a
bunch of packages (like the entire foreign setup) with it.
The idea here is now to be more pro-active: MarkInstall will fail for
a package if the siblings aren't synced, so we don't allow a situation
in which a resolver has to decide whether to hold or to remove-upgrade,
under the assumption that the remove-upgrade decision is always wrong and
doesn't deserve to be explored (except for valid out-of-syncs of course).
That's a pretty bold move to take for a library which is used by
different solvers, so the check is done in IsInstallOk and can be
overridden if front-ends want to.
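A hedged illustration of the guarded scenario (package name hypothetical):
with libsame:amd64 installed at version 1, a request such as
apt-get install libsame:i386
now fails in MarkInstall if the i386 candidate is version 2 while the amd64
sibling stays at 1, instead of leaving the hold-or-remove decision to the
resolver; front-ends wanting the old behaviour can override IsInstallOk.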
|
|
The default is to acquire all architectures from APT::Architectures, which
can be changed with arch=, but this isn't very flexible if you want
"mostly" the default, as you then have to hardcode all the architectures,
so arch-= and arch+= can be used to add/remove architectures from the
default set.
On a machine with 'amd64' and 'i386' configured the lines:
deb [arch+=armel] http://example.org/debian wheezy rocks
deb [arch-=amd64] http://example.org/debian jessie rocks
will result in the download of:
wheezy Packages for 'amd64', 'i386' and 'armel'
jessie Packages for 'i386'
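For comparison (a hedged sketch for the same hypothetical host), the first
line is equivalent to hardcoding the full set with
deb [arch=amd64,i386,armel] http://example.org/debian wheezy rocks
which is exactly the inflexibility arch+= avoids, as such a line would have
to be touched whenever APT::Architectures changes.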
|
|
Version 3 adds, on top of Version 2, the architecture as well as the
MultiArch flag after every displayed version number for consumption by the hooks.
Most of the time the architecture will be the same for both displayed
versions, but packages might change from "all" to "any" (or back)
between versions, so we can't display a single architecture per package.
Pseudo-Format for Version 3:
<name> <version> <arch> <m-a-flag> <compare> <version> <arch> <m-a-flag>
Examples:
stuff - - none < 1 amd64 none **CONFIGURE**
libsame 1 i386 same < 2 i386 same **CONFIGURE**
stuff 2 i386 none > 1 i386 none **CONFIGURE**
libsame 2 i386 same > - - none **REMOVE**
toolkit 1 all foreign > - - none **REMOVE**
Closes: #712116
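To actually receive this format a hook has to opt in; a minimal apt.conf
sketch (hook path hypothetical):
DPkg::Pre-Install-Pkgs { "/usr/local/bin/my-hook"; };
DPkg::Tools::Options::/usr/local/bin/my-hook::Version "3";
Without the Version option the hook keeps receiving the older protocol version.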
|
|
* apt-pkg/packagemanager.cc:
- increase APT::pkgPackageManager::MaxLoopCount to 5000
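For reference, a hedged apt.conf sketch for tweaking the limit further at
runtime (value made up):
APT::pkgPackageManager::MaxLoopCount "10000";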
|
|
Conflicts:
debian/changelog
|
|
|
|
|
|
debug output of Debug::pkgDepCache::AutoInstall=true
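A hedged usage sketch for looking at this output (package name made up):
apt-get -s -o Debug::pkgDepCache::AutoInstall=true install foo
shows during the simulated run which dependencies get marked for automatic
installation and on whose behalf.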
|
|
|
|
GCC 4.8 is now clever enough to warn about:
contrib/sha2_internal.cc: In function ‘char* SHA256_End(SHA256_CTX*, char*)’:
contrib/sha2_internal.cc:656:31: warning: argument to ‘sizeof’ in ‘void*
memset(void*, int, size_t)’ call is the same expression as the destination;
did you mean to dereference it? [-Wsizeof-pointer-memaccess]
MEMSET_BZERO(context, sizeof(context));
So fix it as suggested. It's interesting though that the SHA2*
calculation, as far as we need it, works even without zeroing out.
Git-Dch: Ignore
|
|
fixup for 42d51f333e8ef522fed02cdfc48663488d56c3a3
The for-loop iterating over the DepIterators which need configuration
can (and in 'complicated' situations will) run multiple times, so we
can't just GlobOr on the DepIterator as that modifies it, so that the next
iteration over the list ends up checking another dependency, leading us
maybe into an 'Internal error, packages left unconfigured. foopkg' or, if
we are 'lucky', into calculating a solution which might break down the line.
Git-Dch: Ignore
|
|
With the self-grown splitting we got the problem of not recovering
from networks which just reply with invalid data, like those sending us
back login pages to authenticate with the network (e.g. hotels).
The good thing about the InRelease file is that we know it must be
clearsigned (a Release file might or might not have a detached sig),
so if we get a file but are unable to split it, something is seriously
wrong and there is not much point in trying further.
The Acquire system already looks out for a NODATA error from gpgv,
so this adds a new error message sent to the acquire system in case
the splitting we now do ourselves fails, including this magic word.
Closes: #712486
|
|
Before we download the 'new' InRelease file, the old file is moved
out of the way under the name 'foobar_InRelease.reverify'. So if no
partial file for the 'new' file exists, take the modification time from
this reverify file, so that if we get an IMS hit for the InRelease file
we can move the reverify file back as the new file rather than downloading
the 'new' file even though we already have it.
We do the same for Release files, and this happened to work until the
reverify renaming was corrected for InRelease files.
|
|
|
|
Do not blindly assume that all package stanzas have a "Description:"
field, in 'apt-cache show' as well as in the cache creation itself.
We now assume instead that if the stanza has a Description, it will not
be the first field, as we look out for "\nDescription" to take care of
the MD5sum as well as (maybe ignored) translated Descriptions embedded in
the package stanza.
Closes: #712435
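A hedged example of the kind of stanza this now copes with (fields made up)
– a minimal entry carrying no Description at all:
Package: meta-foo
Version: 1.0
Architecture: all
Maintainer: Jane Doe <jane@example.org>
Such a stanza no longer trips up 'apt-cache show' or the cache builder.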
|
|
We do the same in the acquire system which handles the 'normal'
downloads, so do it here as well, even though it's unlikely anyone
will ever notice (besides testcases of course …)
|
|
Testing for global PendingErrors in users of CopyFile is incorrect
insofar as unrelated errors will prevent us from copying perfectly
fine files; checking the validity of the files is better done in
CopyFile itself, as it already checks that the files are at least opened.
Also add a higher-level error message to the error stack if it fails.
|
|
OpenInternDescriptor failures would cause additional errors to be
generated by double-closing an fd. Other error paths (although these
are only triggered if the method is used incorrectly, so unlikely)
meanwhile didn't close the fd at all.
Closes: 704608
|
|
Previously some errors would set the Fail flag while others didn't,
without a clear reason, as all errors leave a bad FileFd behind,
so we now use a helper to ensure that all errors set the flag.
|
|
Git-Dch: Ignore
|
|
In the past packages were flagged "Protected" so that install/
remove markings were issued before the ProblemResolver.
Nowadays, the marking methods instead check if they are allowed to modify
the marking of a package, so that markings set by FromUser
calls are not overwritten anymore by automatic calls, which eliminates
the need for InstallProtect, which just eats CPU now.
The method itself is left untouched for now in case a frontend still
needs it for some weird usecase, but such usecases should be eliminated.
|
|
Splits the big loop over dependencies in SmartConfigure, which unpacks and
configures dependencies, into two loops and reverses their order, so that all
dependencies which need to be unpacked are handled first and only after that
configure calls are issued for dependencies.
This is needed as otherwise the unpack of a (new) dependency would be issued
in between the configure calls for two (or more) packages which form a loop,
which means the configure calls aren't part of the same dpkg invocation and
therefore dpkg bails out.
Such tight loops should really be avoided as they are usually wrong – and in
reality the dependencies in libreoffice were greatly simplified thanks to
Rene Engelhard, so the problem is gone for the benefit of all.
Closes: 707578
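A hedged illustration of the ordering change (package names made up; pkgA and
pkgB form a dependency loop, newdep is a new dependency of one of them):
before: dpkg --configure pkgA … dpkg --unpack newdep.deb … dpkg --configure pkgB (the first configure fails as pkgB is still unconfigured)
after: dpkg --unpack newdep.deb … dpkg --configure pkgA pkgB (the loop is handled within a single dpkg call)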
|
|
|
|
This used to work until a certain (here unnamed) person came along and used
the wrong operator, causing low-priority packages to be sorted above
high-priority packages while choosing a provider, in commit
2b5c35c7bb915dbd46fefd7c79f05364ba22f93b from Nov 2011.
|
|
Doing Removes early is good to get them out of the way, so they
don't break 'Inst' or 'Conf' chains, but scoring them above Essentials
means that we end up upgrading (many) less important packages before
we handle big stuff like libc6 or debconf, which not only fails if those
less important packages have unannounced (strict) dependencies, but also
leads to having these packages unconfigured for a long time, triggering
bugs in maintainer scripts for no good reason (#708831).
So this commit sets the default value for remove scores to 100, which
is below the one for essentials (200) and a lot lower than the previous
default value (500).
|
|
Some squeeze → wheezy upgrades indicate that DepRemove runs amok
in complicated setups as it wasn't working correctly with or-groups.
Completely rewritten, the check now moves from or-group to or-group
instead.
The behavior should be the same as before, but (hopefully) with
fewer bugs and more comments.
Closes: 645713
|
|
|