|
Our old idea was to look for the first package which would be "touched"
and take this as the package dpkg is talking about, but that is
incorrect in complicated situations like a package being upgraded
to/from multiple installed M-A:same siblings.
As we use the progress report to decide what still needs to be done, we
have to be reasonably right about the package dpkg is talking about, so
we jump through quite a few hoops to get it.
(cherry picked from commit 4b10240cca0dc0a4e82e42959545d2ae7e622d29)
|
|
Given that we use the progress information to skip over actions dpkg
has already done (like not purging a package which was already removed
and had no config files, or not acting on disappeared packages), it is
important that apt and dpkg agree on which states a package has to pass
through.
To ensure that we keep tabs on this in the future, a warning is added at
the end if apt hasn't seen all the actions it was supposed to see. I
can't wait for the first bug reporters to wonder about this…
(cherry picked from commit dabe9e2482180ada77d2adda2b3c03db22059fb8)
|
|
If a package is triggered, dpkg frequently issues two messages about it,
causing us to make a note about it both times, which messes up our view
of the planned dpkg actions. Adding these actions only if we have
nothing else planned fixes this and should still be correct, as those
planned actions will deal with the triggering just fine, and we avoid
strange problems like a package being triggered before it's removed…
(cherry picked from commit 066d4a5bab628ef8220971bb5763ff8f3a13de07)
|
|
[Comment from committer:] I have the feeling that the issue itself has
been fixed for a while already, as nowadays we have test cases involving
a webserver closing the connection on error (look for "closeOnError")
and there are no even remotely recent reports about it, but moving the
content clearance above the failure report is a valid change and
shouldn't hurt.
Closes: #465572
(cherry picked from commit 324bb34d77a43d1be411c402b2e11f588194439a)
|
|
apt tools do not really support these other variables, but tools apt
calls might, so let's play it safe and clean them up as needed.
Reported-By: Paul Wise (pabs) on IRC
(cherry picked from commit e2c8c825a5470e33c25d00e07de188d0e03922c8)
|
|
We can't clean up the environment like e.g. sudo would do, as you
usually want the environment to "leak" into these helpers, but some
variables like HOME should really not still carry the value of the root
user: it could confuse the helpers (USER), and HOME isn't accessible
anyhow.
Closes: 842877
(cherry picked from commit 34b491e735ad47c4805e63f3b83a659b8d10262b)
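A minimal sketch of the kind of sanitizing described in these two
entries, assuming helpers are started from apt's own process; the
variable list and the _apt home directory below are illustrative, not
apt's actual code:

    #include <cstdlib>

    // Illustrative only: reset a few variables that would otherwise leak
    // the root user's values into helpers running as _apt.
    static void SanitizeEnvironmentForHelper()
    {
       // Drop variables the helpers should not inherit from root.
       unsetenv("XDG_RUNTIME_DIR");
       unsetenv("XDG_CACHE_HOME");
       // Point HOME/USER at the unprivileged account instead of root's values.
       setenv("HOME", "/nonexistent", 1 /* overwrite */);
       setenv("USER", "_apt", 1);
       setenv("LOGNAME", "_apt", 1);
    }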
|
|
A user relying on the deprecated behaviour of apt-get to accept a source
with an unknown pubkey in order to install a package containing the key
expects the following 'apt-get update' to cause the source to be
considered trusted, but if the source hadn't changed in the meantime
this wasn't happening: the source kept being untrusted until the Release
file was changed.
This only affects sources not using InRelease and only apt-get (the apt
binary downright refuses this course of action), but it is a common way
of adding external sources.
Closes: 838779
(cherry picked from commit 84eec207be35b8c117c430296d4c212b079c00c1)
LP: #1657440
|
|
In effect this is an extension of the six-year-old commit
a8dfff90aa740889eb99d00fde5d70908d9fd88a which uses the autoremover to
remove packages from the solution again if they no longer need to be
there. Commonly these are dependencies of packages we end up not
installing due to problem resolver decisions. Slightly less common is
the situation we deal with here: a package which we wanted to upgrade
sporting a new dependency, but which ended up being held back.
The problem is that any version of an installed reverse dependency can
bring back a "garbage" package. We need to allow this, as there is
nothing inherently wrong in having garbage packages installed or in
upgrading to a version which itself has garbage dependencies, so just
blindly killing all new garbage packages would prevent the upgrade (and
actually generate errors). What we should be doing instead is looking
only at the version we will have on the system, disregarding all
old/new reverse dependencies.
Reported-By: Stuart Prescott (themill) on IRC
(cherry picked from commit 952171787a0b865c17d5c9476e272106383ae93a)
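A rough, hypothetical sketch of that idea (not apt's actual data
structures): only the dependencies of the versions that will actually
end up on the system decide whether a garbage-marked package is kept.

    #include <map>
    #include <set>
    #include <string>
    #include <vector>

    // Hypothetical, simplified model: maps a package name to the dependencies
    // of the version that will be on the system after the planned operation.
    using DepMap = std::map<std::string, std::vector<std::string>>;

    // Keep a garbage-marked package only if the to-be-installed version of
    // some other package still depends on it.
    std::set<std::string> StillNeeded(DepMap const &FinalVersions,
                                      std::set<std::string> const &Garbage)
    {
       std::set<std::string> keep;
       for (auto const &pkg : FinalVersions)
          for (auto const &dep : pkg.second)
             if (Garbage.count(dep) != 0)
                keep.insert(dep);   // needed by a version we will really have
       return keep;
    }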
|
|
|
|
This prevents CI failures from happening in 1.3 and 1.2 and
might actually be more complete.
Gbp-Dch: ignore
(cherry picked from commit 803dabde5a4345ce83b3d2ffbd475786db9769d9)
|
|
Curl requires URLs to be urlencoded. We are however giving it
unencoded URLs. This causes it to go completely nuts if there is
a space in the URI, producing requests like:
GET /a file HTTP/1.1
which the servers then interpret as a GET request for "/a" with
HTTP version "file", or some other nonsense.
This works around the issue by encoding the path component of
the URL. I'm not sure if we should encode other parts of the URL
as well, but this one seems to do the trick for the actual issue
at hand.
A more correct fix would be to avoid the dequoting and (re-)quoting
of URLs when a redirect occurs / a new request is sent. That has
been on the radar for probably a year or two now, but nobody has
bothered implementing it yet.
LP: #1651923
(cherry picked from commit 994515e689dcc5f963f5fed58284831750a5da03)
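A minimal illustration of the workaround, assuming a simple allow-list
of unreserved path characters (apt's real implementation may differ):

    #include <cctype>
    #include <cstdio>
    #include <string>

    // Percent-encode characters that are not safe in the path component of a
    // URL before handing it to an HTTP client like curl.
    static std::string EncodePath(std::string const &path)
    {
       std::string out;
       for (unsigned char c : path)
       {
          if (std::isalnum(c) || c == '/' || c == '-' || c == '.' || c == '_' || c == '~')
             out += static_cast<char>(c);
          else
          {
             char buf[4];
             std::snprintf(buf, sizeof(buf), "%%%02X", c);
             out += buf;
          }
       }
       return out;
    }

    int main()
    {
       // "/a file" becomes "/a%20file", so the request line stays well-formed.
       std::printf("%s\n", EncodePath("/a file").c_str());
    }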
|
|
|
|
This unit runs unattended-upgrades. If apt itself is part of the
upgrade, restarting the unit will kill unattended-upgrades, which
leads to an inconsistent dpkg status.
Closes: #841763
Thanks: Alexandre Detiste
(cherry picked from commit e133bb5e81b10bf059b3abeab2d9e41f7206e446)
LP: #1649959
|
|
|
|
This is a follow-up to the previous issue where we did not check
whether getline() returned -1 due to end of file or due to an error
such as a failed memory allocation, treating both as end of file.
Here we ensure that we also handle buffered writes correctly by
flushing the files before checking for any errors in our error
stack.
Buffered writes themselves were introduced in 1.1.9, but the
function was never called with a buffered file from inside
apt until commit 46c4043d741cb2c1d54e7f5bfaa234f1b7580f6c
which was first released with apt 1.2.10. The function is
public, though, so fixing this is a good idea anyway.
Affected: >= 1.1.9
(cherry picked from commit 6212ee84a517ed68217429022bd45c108ecf9f85)
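A small sketch of the general pattern described here (not apt's FileFd
code): after writing through a buffered stream, flush it and check the
error indicator before trusting the result.

    #include <cstdio>

    static bool WriteChecked(FILE *out, char const *line)
    {
       if (std::fputs(line, out) == EOF)
          return false;
       // fputs may only have filled the stdio buffer; force it out now so a
       // failing write() is visible here and not silently lost.
       if (std::fflush(out) != 0)
          return false;
       return std::ferror(out) == 0;
    }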
|
|
This fixes a security issue where signatures of the
InRelease files could be circumvented in a man-in-the-middle
attack, giving attackers the ability to serve any packages
they want to a system, in turn giving them root access.
It turns out that getline() may not only fail with EINVAL as stated
in the documentation: it might also fail when an error occurs while
allocating memory.
This fix not only adds a check that reading worked correctly, but also
implicitly checks that all writes worked, by reporting any other error
that occurred inside the loop and was logged by apt.
Affected: >= 0.9.8
Reported-By: Jann Horn <jannh@google.com>
Thanks: Jann Horn, Google Project Zero for reporting the issue
LP: #1647467
(cherry picked from commit 51be550c5c38a2e1ddfc2af50a9fab73ccf78026)
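A sketch of the distinction that matters here, using plain stdio (the
helper name is illustrative): getline() returns -1 both at end of file
and on errors, so the caller has to look at the stream state to tell
the two apart before declaring success.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>

    static bool CopyLines(FILE *in, FILE *out)
    {
       char *line = nullptr;
       size_t len = 0;
       ssize_t n;
       while ((n = getline(&line, &len, in)) != -1)
          if (fwrite(line, 1, (size_t)n, out) != (size_t)n)
             break;
       // Only report success if we really reached end of file and no error
       // indicator is set on either stream; a -1 caused by a failed
       // allocation must not be treated as a clean end of file.
       bool const ok = feof(in) != 0 && ferror(in) == 0 && ferror(out) == 0;
       free(line);
       return ok;
    }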
|
|
|
|
Gbp-Dch: ignore
|
|
|
|
We always want to run the codecov test, even if there are spurious
failures. We should really work around those failures more, though;
it is starting to annoy me.
|
|
Closes: #838731
|
|
|
|
This fixes a regression introduced in
commit 8f858d560e3b7b475c623c4e242d1edce246025a
("don't leak FD in AutoProxyDetect command return parsing")
which accidentally made the proxy autodetection code also read the
script's output on stderr, not only on stdout, when it switched the
code from popen() to Popen().
Reported-By: Tim Small <tim@seoss.co.uk>
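For illustration only (not apt's Popen() helper): the behaviour the fix
restores is the one popen(3) gives by default, where only the command's
stdout is captured and whatever the script prints on stderr goes to our
own stderr instead of being parsed as the proxy answer.

    #include <cstdio>
    #include <string>

    static std::string RunAndReadStdout(char const *cmd)
    {
       std::string output;
       FILE *pipe = popen(cmd, "r");
       if (pipe == nullptr)
          return output;
       char buf[256];
       while (fgets(buf, sizeof(buf), pipe) != nullptr)
          output += buf;          // stdout only; stderr is not on this pipe
       pclose(pipe);
       return output;
    }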
|
|
|
|
If the dependency line does not contain spaces in the repository
but does in the dpkg status file (because dpkg normalized the
dependency list), the dpkg line might be longer than the line
in the repository. If it now happens to be longer than 1024
characters, it would be skipped, causing the hashes to be
out of date.
Note that we have to bump the minor cache version again as
this changes the format slightly, and we might get mismatches
with an older src cache otherwise.
Fixes Debian/apt#23
|
|
This allows fully automated code coverage testing, which is
basically awesome. To allow the methods and solvers and stuff
which run as _apt to write to our build directory, we need to
adjust the permissions a bit, but otherwise it's OK.
Gbp-Dch: ignore
|
|
We need to ignore messages from gcov. All those messages start with
"profiling:" and are printed using vfprintf(), so the only thing we can
do is add a library overriding those functions and link apt-pkg against
it.
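A sketch of such an override (not the exact code used): a tiny library
that swallows gcov's "profiling:" chatter and forwards everything else
to the real libc implementation via dlsym(RTLD_NEXT, ...); on older
glibc it additionally needs to be linked with -ldl.

    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE 1          // for RTLD_NEXT
    #endif
    #include <cstring>
    #include <dlfcn.h>
    #include <stdarg.h>
    #include <stdio.h>

    extern "C" int vfprintf(FILE *stream, const char *format, va_list ap)
    {
       // Drop the libgcov noise, keep everything else.
       if (strncmp(format, "profiling:", strlen("profiling:")) == 0)
          return 0;
       auto real = reinterpret_cast<int (*)(FILE *, const char *, va_list)>(
          dlsym(RTLD_NEXT, "vfprintf"));
       return real(stream, format, ap);
    }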
|
|
This allows us to easily test coverage
|
|
This cleans up the output a bit. It should also improve performance,
but unfortunately that does not really seem to be the case.
Gbp-Dch: ignore
|
|
Even if we only configure a single architecture, install dpkg, so that
dpkg can assert multi-arch correctly. This also has the nice side
effect of making single-architecture and multi-architecture test cases
more uniform.
This fixes a regression from f878d3a862128bc1385616751ae1d78246b1bd01
("test: Assert multi-arch in the chroot").
|
|
If we copied one of the existing status files, we might not have
a trailing newline, so let's add one.
Gbp-Dch: ignore
|
|
Commit b60c8a89c281f2bb945d426d2215cbf8f5760738 improved the situation,
but due to an inconsistency mostly for planners, not for solvers. As the
idea of hiding errors while we show another error is a bit scary (the
external error might be a follow-up to our internal error, rather than
the reason for our internal error as it is at the moment), we don't
discard the errors: if we got an external error, we show our own errors
directly and remove them from the error list at the end of the run, so
that this list will contain the external error, which hopefully gives us
the best of both worlds.
The problem itself is the same as before: the externals exiting before
apt is done talking to them.
Reported-By: Johannes 'josch' Schauer on IRC
|
|
Commit 3af3ac2f5ec007badeded46a94be2bd06b9917a2 (released in 1.3~pre1)
implements proper fallback for SRV, but that actually works too well:
the RFC defines that such an SRV record should indicate that the server
doesn't provide this service, and apt should respect that.
The solution is hence to fail again as requested, even if that isn't
what the user (and perhaps even the server admins) wanted. At least we
now print a message explicitly mentioning SRV to point people in the
right direction.
Reported-In: https://bugs.kali.org/view.php?id=3525
Reported-By: Raphaël Hertzog
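Assuming this refers to the RFC 2782 convention that a single SRV record
whose target is "." means the service is explicitly not offered at this
domain, a check for it could look roughly like this (illustrative types,
not apt's resolver code):

    #include <string>
    #include <vector>

    struct SrvRec { std::string target; unsigned int port; };   // simplified

    // A lone record with target "." says "this service is not provided
    // here", so callers should fail instead of trying other fallbacks.
    static bool ServiceExplicitlyUnavailable(std::vector<SrvRec> const &records)
    {
       return records.size() == 1 && records[0].target == ".";
    }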
|
|
Gbp-Dch: Ignore
|
|
|
|
Normally make just lets every job write its output directly,
making the log fairly hard to read with high concurrency.
|
|
dpkg on overlayfs cannot rename apt/script to apt, because overlayfs
will not let it move apt to a backup name, responding with XDEV
instead.
|
|
Employ a priority queue instead of a normal queue to hold the items,
and only add items to the running pipeline if their priority is the
same as or higher than the priority of the items in the queue.
The priorities are designed for a 3-stage pipeline system:
In stage 1, all Release files and .diff/Index files are fetched. This
allows us to determine which files remain to be fetched, and thus
ensures usable progress reporting.
In stage 2, all Pdiff patches are fetched, so we can apply them
in parallel with fetching other files in stage 3.
In stage 3, all other files are fetched (complete index files
such as Contents, Packages).
Performance improvements, mainly from fetching the pdiff patches
before complete files, so they can be applied in parallel:
For the 01 Sep 2016 03:35:23 UTC -> 02 Sep 2016 09:25:37 update
of Debian unstable and testing with Contents and appstream for
amd64 and i386, update time reduced from 37 seconds to 24-28
seconds.
Previously, apt would first download new DEP11 icon tarballs
and metadata files, causing the CPU to be idle. By fetching
the diffs in stage 2, we can now patch our contents and Packages
files while we are downloading the DEP11 stuff.
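A simplified model of the scheduling rule described above (not apt's
acquire code); item priorities stand in for the three stages, with a
higher number meaning an earlier stage:

    #include <queue>
    #include <string>
    #include <vector>

    struct Item { std::string uri; int priority; };   // higher = earlier stage

    struct ByPriority {
       bool operator()(Item const &a, Item const &b) const { return a.priority < b.priority; }
    };

    struct PipelineQueue {
       std::priority_queue<Item, std::vector<Item>, ByPriority> pending;
       std::vector<Item> running;   // items currently in the pipeline

       // Only feed the pipeline with items whose priority is at least as high
       // as everything already running, so stage 3 items cannot get ahead of
       // outstanding stage 1/2 items.
       void Pump(size_t depth)
       {
          while (running.size() < depth && !pending.empty())
          {
             Item const next = pending.top();
             for (Item const &r : running)
                if (next.priority < r.priority)
                   return;          // wait until higher-priority items finish
             running.push_back(next);
             pending.pop();
          }
       }
    };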
|
|
This accidentally used ICONV_DIRECTORIES, which does not
even exist. Weird.
|
|
If a non-existing source directory is specified, try finding the
system gtest library. Debian-derived distributions are a bit strange
because they only ship the source code and not the library...
|
|
I switched them over to generated files in commit
9fb81c6e54a2fe05c0ad0b877fd32f30358e3877, but forgot
to add them to the ignore file.
Gbp-Dch: ignore
|
|
In gpgv1 GOODSIG (and the other status-fd messages) is documented as
sending the long keyid. In gpgv2 it is documented to be either the long
keyid or the fingerprint. At the moment it is still the long keyid, but
the documentation hints at the possibility of changing this.
We care about this for Signed-By support, as this is how we detect
whether the right fingerprint has signed this file (or not). The check
itself is done via VALIDSIG, which is always a fingerprint, but a
GOODSIG must also be found for the signature to be accepted (as expired
signatures are valid, too); if GOODSIG started carrying a fingerprint it
wouldn't be found, and the signature would hence be refused.
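A hypothetical sketch of an acceptance rule that tolerates both forms
(illustrative names, not apt's gpgv method): VALIDSIG gives full
fingerprints, GOODSIG may give a long keyid or a fingerprint, so match
the GOODSIG value as a suffix of the pinned fingerprint instead of
comparing literally.

    #include <algorithm>
    #include <string>
    #include <vector>

    static bool AcceptedBySignedBy(std::string const &wantedFingerprint,
                                   std::vector<std::string> const &validsigFprs,
                                   std::vector<std::string> const &goodsigIds)
    {
       // The signature must be made by the pinned key.
       if (std::find(validsigFprs.begin(), validsigFprs.end(), wantedFingerprint) == validsigFprs.end())
          return false;
       // And a matching GOODSIG must exist, whether it carries the long keyid
       // (a suffix of the fingerprint) or the full fingerprint itself.
       for (std::string const &id : goodsigIds)
          if (wantedFingerprint.size() >= id.size() &&
              wantedFingerprint.compare(wantedFingerprint.size() - id.size(), id.size(), id) == 0)
             return true;
       return false;
    }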
|
|
The recently added (actually increased) Breaks were accidentally dropped
while our set of mostly old and outdated Breaks was cleaned up.
Regression-From: 20d2f4a4f164cd9026dad698e471c95d7c28973b
Previously-Add-In: ab07af708e49c9219940ffd3e20a01c763267e03
Closes: #836220
|
|
Gbp-Dch: Ignore
Reported-By: gcc -Wmissing-declarations
|
|
memcpy is marked as nonnull for its input, but ignores the input anyhow
if the declared length is zero. Our SHA2 implementations already check
for this; it was "just" MD5 and SHA1 that were missing the check, so we
add the length check here as well as along the call stack, as it is
really pointless to do all these method calls for "nothing".
Reported-By: gcc -fsanitize=undefined
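A sketch of the kind of guard described (illustrative hash buffer, not
the actual MD5/SHA1 code): calling memcpy with a length of zero is
pointless and, with a null pointer, undefined, so bail out early.

    #include <cstring>

    static void HashAdd(unsigned char *buffer, size_t &fill,
                        void const *data, size_t size)
    {
       if (size == 0)
          return;                 // nothing to do, and data may legally be null
       std::memcpy(buffer + fill, data, size);
       fill += size;
    }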
|
|
gpg annoyingly changed its output and broke our test suite
again by adding two extra lines about key type and issuer.
Really annoying.
Those lines also have more than one space after the colon, so let's
use \s* there, and also change the other lines to support
variable-length whitespace in case gpg decides to break things there
too.
|
|
If the inner Base256ToNum() returned false, it did not set Num to a new
value, leaving Num uninitialized and thus possibly causing the function
to exit despite a good result.
Also document why the "Res = Num, if (Res != Num)" magic is done.
Reported-By: valgrind
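A sketch of that "Res = Num, if (Res != Num)" idiom with the check on
the inner call in place, so Res is never derived from garbage; the
parser below is an illustrative stand-in, not apt's Base256ToNum():

    #include <cerrno>
    #include <cstdlib>

    static bool ParseULL(const char *str, unsigned long long &Num)
    {
       errno = 0;
       char *end = nullptr;
       Num = std::strtoull(str, &end, 10);
       return errno == 0 && end != str;
    }

    static bool StrToNum(const char *str, unsigned long &Res)
    {
       unsigned long long Num;
       if (ParseULL(str, Num) == false)
          return false;       // do not touch Res if Num was never set
       Res = Num;             // may narrow where unsigned long is only 32 bit
       return Res == Num;     // the comparison detects exactly that truncation
    }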
|
|
Adding 1 to the value of d->End - current makes restLength one byte
too long: if we pass it on, memchr(current, ..., restLength) thus has
undefined behavior.
Also, reading the value of current has undefined behavior if
current >= d->End, not only for current > d->End:
Consider a string of length 1, that is d->End = d->Current + 1.
We can only read at d->Current + 0, but d->Current + 1 is beyond
the end of the string.
This probably caused several inexplicable build failures on hurd-i386
in the past, and just now caused a build failure on Ubuntu's amd64
builder.
Reported-By: valgrind
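A sketch of the two corrected invariants (illustrative names): the
remaining length is End - current without the extra +1, and current may
only be dereferenced while it is strictly before End.

    #include <cstring>

    static const char *FindNewline(const char *current, const char *End)
    {
       if (current >= End)          // a 1-byte string has End = current + 1:
          return nullptr;           // only current + 0 may be read
       size_t const restLength = static_cast<size_t>(End - current);
       return static_cast<const char *>(std::memchr(current, '\n', restLength));
    }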
|
|
I actually tried to amend the previous commit, but apparently
I forgot to add the file mode change.
Gbp-Dch: ignore
|
|
If a Binary field contained one or more spaces before a comma, the code
produced a segmentation fault, as it accidentally set a pointer to 0
instead of the value the pointer points to.
If the comma is at the beginning of the field, the code would create a
binStartNext that points one element before the start of the string,
which is undefined behavior.
We also need to check that we do not leave the string during the
replacement of spaces before commas: a string of the form " ," would
otherwise take us outside the bounds of the buffer:
binStartNext = offset 1 ','
binEnd = offset 0 ' '
isspace_ascii(*binEnd) = true => --binEnd
=> binEnd = -1
We get rid of the problem by only allowing spaces to be eliminated
if they are not the first character of the buffer:
binStartNext = offset 1 ','
binEnd = offset 0 ' '
binEnd > buffer = false, isspace_ascii(*binEnd) = true
=> exit loop
=> binEnd remains 0
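A small sketch of the corrected trimming step shown in the trace above
(names echo the trace but are illustrative, not the actual code): walk
back over spaces before the comma, but never step in front of the start
of the buffer.

    #include <cctype>

    static const char *TrimSpacesBeforeComma(const char *buffer, const char *comma)
    {
       // comma points at the ',' inside buffer; never form a pointer before buffer.
       if (comma <= buffer)
          return buffer;
       const char *binEnd = comma - 1;
       while (binEnd > buffer && std::isspace(static_cast<unsigned char>(*binEnd)) != 0)
          --binEnd;
       return binEnd;
    }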
|