Age | Commit message | Author |
|
Our old idea was to look for the first package which would be "touched"
and take this as the package dpkg is talking about, but that is
incorrect in complicated situations like a package upgraded to/from
multiple installed M-A:same siblings.
As we use the progress report to decide what is still needed we have to
be reasonably right about the package dpkg is talking about, so we jump
through quite a few hoops to get it.
(cherry picked from commit 4b10240cca0dc0a4e82e42959545d2ae7e622d29)
|
|
Given that we use the progress information to skip over actions dpkg has
already performed (like not purging a package which was already removed
and had no config files, or not acting on disappeared packages), it is
important that apt and dpkg agree on which states the package has to
pass through.
To ensure that we keep tabs on this in the future a warning is added at
the end if apt hasn't seen all the actions it was supposed to see. I
can't wait for the first bug reporters to wonder about this…
(cherry picked from commit dabe9e2482180ada77d2adda2b3c03db22059fb8)
|
|
If a package is triggered, dpkg frequently issues two messages about it,
causing us to make a note about it both times, which messes up our
planned dpkg actions view. Adding these actions only if we have nothing
else planned fixes this and should still be correct, as those planned
actions will deal with the triggering just fine and we avoid strange
problems like a package being triggered before it is removed…
(cherry picked from commit 066d4a5bab628ef8220971bb5763ff8f3a13de07)
|
|
apt tools do not really support these other variables, but tools apt
calls might, so let's play it safe and clean those up as needed.
Reported-By: Paul Wise (pabs) on IRC
(cherry picked from commit e2c8c825a5470e33c25d00e07de188d0e03922c8)
|
|
We can't clean up the environment like e.g. sudo would do, as you usually
want the environment to "leak" into these helpers, but some variables
like HOME really should not still carry the value of the root user – it
could confuse the helpers (USER) and HOME isn't accessible anyhow.
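A minimal sketch of the idea (the helper name and variable choices are
assumptions, not the actual apt code): after switching to the
unprivileged user, re-point the identity-related variables at that user.
  #include <pwd.h>
  #include <cstdlib>

  // hypothetical helper: call after privileges were dropped to 'username'
  static void ResetIdentityEnvironment(const char *username)
  {
     struct passwd const * const pw = getpwnam(username);
     if (pw == nullptr)
        return;
     setenv("HOME", pw->pw_dir, 1);    // root's $HOME isn't accessible anymore
     setenv("USER", pw->pw_name, 1);   // avoid confusing helpers inspecting $USER
     setenv("LOGNAME", pw->pw_name, 1);
  }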
Closes: 842877
(cherry picked from commit 34b491e735ad47c4805e63f3b83a659b8d10262b)
|
|
A user relying on the deprecated behaviour of apt-get to accept a source
with an unknown pubkey in order to install a package containing the key
expects that the following 'apt-get update' causes the source to be
considered trusted, but if the source hadn't changed in the meantime this
wasn't happening: The source kept being untrusted until the Release file
was changed.
This only affects sources not using InRelease and only apt-get; the apt
binary downright refuses this course of action, but it is a common way
of adding external sources.
Closes: 838779
(cherry picked from commit 84eec207be35b8c117c430296d4c212b079c00c1)
LP: #1657440
|
|
This is a follow-up to the previous issue where we did not check
whether getline() returned -1 due to end of file or due to an error
such as a memory allocation failure, treating both as end of file.
Here we ensure that we also handle buffered writes correctly by
flushing the files before checking for any errors in our error
stack.
Buffered writes themselves were introduced in 1.1.9, but the
function was never called with a buffered file from inside
apt until commit 46c4043d741cb2c1d54e7f5bfaa234f1b7580f6c
which was first released with apt 1.2.10. The function is
public, though, so fixing this is a good idea anyway.
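As a rough illustration of the ordering problem with plain stdio (apt's
own FileFd and error stack work differently, so this is only a sketch):
buffered data has to be flushed before the error state means anything.
  #include <cstdio>

  // sketch: flush first, then look at the error state
  static bool WriteAndCheck(FILE *out, const char *data, size_t len)
  {
     fwrite(data, 1, len, out);
     if (fflush(out) != 0)      // buffered bytes hit the fd here and may fail
        return false;
     return ferror(out) == 0;   // only now does the error flag tell the truth
  }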
Affected: >= 1.1.9
(cherry picked from commit 6212ee84a517ed68217429022bd45c108ecf9f85)
|
|
This fixes a security issue where signatures of the
InRelease files could be circumvented in a man-in-the-middle
attack, giving attackers the ability to serve any packages
they want to a system, in turn giving them root access.
It turns out that getline() may not only fail with EINVAL
as stated in the documentation - it may also fail, returning -1,
when memory allocation goes wrong.
This fix not only adds a check that reading worked
correctly, it also implicitly checks that all writes
worked by reporting any other error that occurred inside
the loop and was logged by apt.
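In plain C terms the distinction looks roughly like this (illustrative
only, not the code apt uses):
  #include <cstdio>
  #include <cstdlib>
  #include <cerrno>

  // sketch: a -1 from getline() is only end of file if no error happened
  static bool ReadAllLines(FILE *in)
  {
     char *line = nullptr;
     size_t cap = 0;
     errno = 0;
     while (getline(&line, &cap, in) != -1)
        errno = 0;              // a real reader would process 'line' here
     bool const ok = (errno == 0 && ferror(in) == 0);
     free(line);
     return ok;                 // false: read or allocation failure, not just EOF
  }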
Affected: >= 0.9.8
Reported-By: Jann Horn <jannh@google.com>
Thanks: Jann Horn, Google Project Zero for reporting the issue
LP: #1647467
(cherry picked from commit 51be550c5c38a2e1ddfc2af50a9fab73ccf78026)
|
|
This fixes a regression introduced in
commit 8f858d560e3b7b475c623c4e242d1edce246025a
don't leak FD in AutoProxyDetect command return parsing
which accidentally made the proxy autodetection code also read
the script's output on stderr, not only on stdout, when it switched
the code from popen() to Popen().
Reported-By: Tim Small <tim@seoss.co.uk>
|
|
If the dependency line does not contain spaces in the repository
but does in the dpkg status file (because dpkg normalized the
dependency list), the dpkg line might be longer than the line
in the repository. If it now happens to be longer than 1024
characters, it would be skipped, causing the hashes to be
out of date.
Note that we have to bump the minor cache version again as
this changes the format slightly, and we might get mismatches
with an older src cache otherwise.
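The general failure mode, sketched with made-up names (the real code
assembles a hash input line rather than reading a file): a fixed-size
buffer silently cuts off or drops anything beyond its size, while a
std::string grows as needed.
  #include <cstring>
  #include <string>

  char fixed[1024];
  // anything beyond 1023 characters is lost (or the whole line gets skipped)
  strncpy(fixed, depLine, sizeof(fixed) - 1);   // 'depLine' is hypothetical
  fixed[sizeof(fixed) - 1] = '\0';

  std::string grown(depLine);                   // no length limit to fall over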
Fixes Debian/apt#23
|
|
We need to ignore messages from gcov. All those messages
start with "profiling:" and are printed using vfprintf(), so
the only thing we can do is add a library overriding those
functions and link apt-pkg against it.
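A sketch of such an interposing library (assumed to be linked into
apt-pkg; only the "profiling:" prefix is taken from the message above):
  #include <cstdarg>
  #include <cstdio>
  #include <cstring>
  #include <dlfcn.h>

  // swallow gcov's "profiling:" chatter, forward everything else
  extern "C" int vfprintf(FILE *stream, const char *format, va_list ap)
  {
     if (strncmp(format, "profiling:", strlen("profiling:")) == 0)
        return 0;
     static auto real = reinterpret_cast<int (*)(FILE *, const char *, va_list)>(
        dlsym(RTLD_NEXT, "vfprintf"));
     return real(stream, format, ap);
  }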
|
|
Commit b60c8a89c281f2bb945d426d2215cbf8f5760738 improved the situation,
but due to an inconsistency mostly for planners, not for solvers. As the
idea of hiding errors if we show another error is a bit scary (the
external error might be a follow-up to our internal error, rather than
the reason for our internal error as it is at the moment) we don't
discard the errors; instead, if we got an external error we show our
errors directly and remove them from the error list at the end of the
run – that list will then contain the external error, which hopefully
gives us the best of both worlds.
The problem itself is the same as before: the externals exit before
apt is done talking to them.
Reported-By: Johannes 'josch' Schauer on IRC
|
|
Employ a priority queue instead of a normal queue to hold
the items; and only add items to the running pipeline if
their priority is the same or higher than the priority
of items in the queue.
The priorities are designed for a 3 stage pipeline system:
In stage 1, all Release files and .diff/Index files are fetched. This
allows us to determine what files remain to be fetched, and thus
ensures usable progress reporting.
In stage 2, all Pdiff patches are fetched, so we can apply them
in parallel with fetching other files in stage 3.
In stage 3, all other files are fetched (complete index files
such as Contents, Packages).
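A minimal sketch of the scheduling rule (item and member names are
assumptions): pending items sit in a priority queue, and an item only
enters the running pipeline if it does not rank below what is already
in flight.
  #include <queue>
  #include <vector>

  struct Item { int Priority; /* stage 1 highest, stage 3 lowest */ };

  struct ByPriority
  {
     bool operator()(Item const *a, Item const *b) const
     { return a->Priority < b->Priority; }   // highest priority at the top
  };

  std::priority_queue<Item *, std::vector<Item *>, ByPriority> Pending;
  int RunningPriority = 0;   // priority of the items currently being fetched

  bool MayDispatch(Item const *I)
  {
     return I->Priority >= RunningPriority;
  }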
Performance improvements, mainly from fetching the pdiff patches
before complete files, so they can be applied in parallel:
For the 01 Sep 2016 03:35:23 UTC -> 02 Sep 2016 09:25:37 update
of Debian unstable and testing with Contents and appstream for
amd64 and i386, update time reduced from 37 seconds to 24-28
seconds.
Previously, apt would first download new DEP11 icon tarballs
and metadata files, causing the CPU to be idle. By fetching
the diffs in stage 2, we can now patch our contents and Packages
files while we are downloading the DEP11 stuff.
|
|
This accidentally used ICONV_DIRECTORIES, which does not
even exist. Weird.
|
|
memcpy is marked as nonnull for its input, but ignores the input anyhow
if the declared length is zero. Our SHA2 implementations check this as
well; it was "just" MD5 and SHA1 missing, so we add the length check
here as well as along the call stack, as it is really pointless to do
all these method calls for "nothing".
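In essence the added guard is no more than this (hash class and bounds
handling omitted, names made up for illustration):
  #include <cstring>

  static unsigned char Buffer[64];

  void Add(void const *Data, size_t Size)
  {
     if (Size == 0)
        return;                   // nothing to hash; Data may legitimately be NULL
     memcpy(Buffer, Data, Size);  // memcpy's nonnull contract is honoured now
  }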
Reported-By: gcc -fsanitize=undefined
|
|
If the inner Base256ToNum() returned false, it did not set
Num to a new value, leaving it uninitialized, and thus
might have caused the function to exit despite a good result.
Also document why the Res = Num, if (Res != Num) magic is done.
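The documented idiom, roughly (the inner conversion is stood in for by a
hypothetical function): the value is read into a wide integer, copied
into the caller's narrower type and compared back to detect truncation
on 32-bit builds.
  // sketch only; ReadWideNumber stands in for the inner conversion
  bool ReadWideNumber(const char *Str, unsigned long long &Num);   // hypothetical

  bool ReadNarrowNumber(const char *Str, unsigned long &Out)
  {
     unsigned long long Num = 0;
     if (ReadWideNumber(Str, Num) == false)
        return false;                 // Num stays at a defined value on failure now
     unsigned long const Res = Num;   // may not hold the full value on 32-bit
     if (Res != Num)
        return false;                 // silently truncated, report failure instead
     Out = Res;
     return true;
  }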
Reported-By: valgrind
|
|
Adding 1 to the value of d->End - current makes restLength one byte
too long: passing it means memchr(current, ..., restLength) has
undefined behavior.
Also, reading the value of current has undefined behavior if
current >= d->End, not only for current > d->End:
Consider a string of length 1, that is d->End = d->Current + 1.
We can only read at d->Current + 0, but d->Current + 1 is beyond
the end of the string.
This probably caused several inexplicable build failures on hurd-i386
in the past, and just now caused a build failure on Ubuntu's amd64
builder.
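Using the names from the description, the corrected bounds look like
this (a sketch, not the literal patch; the needle byte is arbitrary):
  #include <cstring>

  if (current < d->End)                           // strictly before the end
  {
     size_t const restLength = d->End - current;  // no "+ 1" anymore
     char const *hit = static_cast<char const *>(
        memchr(current, '\n', restLength));
  }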
Reported-By: valgrind
|
|
If a Binary field contains one or more spaces before a comma, the
code produced a segmentation fault, as it accidentally set a pointer
to 0 instead of the value of the pointer.
If the comma is at the beginning of the field, the code would
create a binStartNext that points one element before the start
of the string, which is undefined behavior.
We also need to check that we do not leave the string during the
replacement of spaces before commas: A string of the form " ,"
would otherwise take us outside the bounds of the buffer:
  binStartNext = offset 1 ','
  binEnd = offset 0 ' '
  isspace_ascii(*binEnd) = true => --binEnd
  => binEnd = offset -1
We get rid of the problem by only allowing spaces to be eliminated
if they are not the first character of the buffer:
  binStartNext = offset 1 ','
  binEnd = offset 0 ' '
  binEnd > buffer = false, isspace_ascii(*binEnd) = true
  => exit loop
  => binEnd remains at offset 0
|
|
Apparently we had no default defined for this.
Reported-By: David Kalnischkies
|
|
This accidentally contained "apt" twice. This fixes a regression
from commit 8757a0f.
Gbp-Dch: ignore
|
|
An absolute filename for a *.deb file starts with a /. A package with
the name of the file is inserted into the cache; it is provided by the
"real" package for internal reasons. The pinning code detects a
regex-based wildcard by the pattern starting with /. That is no problem
as a / cannot be included in a package name… except that our virtual
filename package can and does include one.
We fix this in two ways actually: First, a pattern is only considered a
regex if it also ends with / (we don't support flags). That already
stops our problem with the virtual filename packages, but to be sure we
also do not enter the loop if matcher and package name are equal.
It has to be noted that creating pins for virtual packages like the
filename packages affected here is pointless, as only versions can be
pinned, but checking that a package is really purely virtual is too
costly compared to just creating an unused pin.
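Sketched with assumed names, the two guards amount to:
  #include <string>

  // a pattern is only treated as a regular expression if it is wrapped in '/'
  static bool LooksLikeRegex(std::string const &Pattern)
  {
     return Pattern.length() > 1 && Pattern.front() == '/' && Pattern.back() == '/';
  }

  // ...and a package is never matched against its own name as a pattern:
  // if (Pattern != Pkg.Name() && LooksLikeRegex(Pattern)) { /* run regex match */ }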
Closes: 835818
|
|
Without randomizing the order in which we download the index files we
needlessly leak to the mirrors which architectures are native or foreign
on this system. More importantly, we leak the order in which description
translations will be used, which in most cases will e.g. have the native
tongue first.
Note that the effect of the leak is limited in practice, as apt detects
if a file it wants to download is already available in the latest version
from a previous download and does not query the server in such cases.
Combine this with the fact that Translation files are usually updated
infrequently and not all at the same time, and a mirror can never be sure
whether it was asked about all the files the user wants.
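A minimal sketch of the randomisation (container and element types are
assumptions for illustration):
  #include <algorithm>
  #include <random>
  #include <string>
  #include <vector>

  std::vector<std::string> TargetURIs;   // the index files to fetch (illustrative)
  std::random_device rd;
  std::mt19937 gen(rd());
  std::shuffle(TargetURIs.begin(), TargetURIs.end(), gen);  // request order now random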
|
|
|
|
FreeBSD has two iconv systems: It ships an iconv.h itself,
and symbols for that in the libc. But there's also the port
of GNU libiconv, which unfortunately for us, Doxygen depends
on.
This changes things to prefer a separate libiconv library
over the system one; that is, the port on FreeBSD.
Gbp-Dch: ignore
|
|
This is needed on BSD where root's default group is wheel, not
root.
|
|
This fixes issues with chroots, but the goal here was to get
the test suite working on systems without dpkg.
|
|
This allows other vendors to use different paths, or you to build
your own APT in /opt for testing. Note that this uses + 1 in
some places, as the paths we receive are absolute, but we need
to strip off the initial /.
|
|
The C.UTF-8 locale is not portable, so we need to use C, otherwise
we crash on other systems. We can use std::locale::classic() for
that, which might also be a bit cheaper than using locale("C").
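For example (sketch):
  #include <locale>
  #include <sstream>

  std::ostringstream out;
  out.imbue(std::locale::classic());   // portable, unlike std::locale("C.UTF-8")
  out << 4.2;                          // always "4.2", never "4,2"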
|
|
Gbp-Dch: ignore
|
|
Does not exist on FreeBSD
Gbp-Dch: ignore
|
|
Several modules use std::array without including the
array header. Bad modules.
Some modules use STDOUT_FILENO and friends, or close()
without including unistd.h, where they are defined.
One module also uses WIFEXITED() without including
sys/wait.h.
Finally, environ is not specified to be defined in unistd.h. We
are required to define it ourselves according to POSIX, so let's
do that.
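The declaration in question is a one-liner (sketch):
  #include <unistd.h>

  // POSIX: environ is not required to be declared by any header,
  // the program has to declare it itself
  extern char **environ;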
|
|
Ubuntu uses *.ddeb files for its debug packages, but the interface we
have been using since f495992428a396e0f98886c9a761a804aa161c68 to talk to
dpkg doesn't support *.ddeb files. This used to work previously as apt
itself doesn't care about the filenames at all, and if they are explicitly
mentioned dpkg will accept them all, too.
It might or might not be a good idea to patch dpkg, too, but regardless
of whether that happens, we don't want to couple ourselves too closely to
dpkg for this minor feature by testing for it at runtime, as that would
further delay shipping the fix for the too long commandlines.
It is also questionable whether it is really a good idea to allow any file
extension to be used here (like .foobar in the testcase), but we used to,
and we tend to avoid breaking existing usecases if we can help it.
As a bonus, this also allows the installation of ddeb files directly
from the commandline as you already can with deb files. We continue to
ignore udeb though, as the user-mistake to usefulness ratio is too high.
LP: #1616909
|
|
In most cases apt was already skipping the (re)setting of packages as
to be removed/purged if dpkg had told us that it had already done so, but
we hadn't dealt with the most obvious of cases: selections set for
packages we touched in this operation, which either restores selections
dpkg would have overridden or e.g. tries to restore a purge selection
for a package which was just purged. This does not happen with apt
itself, as it isn't using selections in this way, but higher-level
frontends like aptitude do.
The result in the latter case is a warning printed by dpkg that we try to
set selections for an unknown package, which is harmless per se, but can
be confusing for users, and we really shouldn't cause warnings in dpkg if
we can help it.
Reported-By: Guillem Jover on IRC
|
|
Improve-Upon: 2e2865ae53a65c00dd55a892d5b48458f3110366
Reported-By: Julian Andres Klode
Gbp-Dch: Ignore
|
|
The bug report shows a segfault caused by the code not doing the correct
magical dance to remove an item from inside a queue in all cases. We
could try hard to fix this, but it is actually better and also easier to
perform these checks (which cause instant failure) earlier, so that the
items haven't entered any queue(s) yet, which in turn makes cleanup
trivial.
The result is that we actually end up failing "too early": if we weren't
careful, download errors would be logged before that process was even
started. Not a problem for the acquire system, but likely to confuse
users and programs alike if they see the download process producing
errors before apt was technically allowed to do an acquire (it didn't,
so no violation, but it looks like it to the untrained eye).
Closes: 835195
|
|
We basically called ourselves before, creating an endless loop.
Reported-By: clang
|
|
This time it is the formatting of floating point numbers in progress
reporting, with a radix character potentially not being a dot.
Follow-up to 7303e11ff28f920a6277c159aa46f80c007350bb. Regression of
b58e2c7c56b1416a343e81f9f80cb1f02c128e25 insofar as it exchanged heavily
affected code with slightly less affected code.
LP: 1611010
|
|
Commit 7ec343309b7bc6001b465c870609b3c570026149 got us most of the way,
but the last mile was botched by having the pending calls in the wrong
order: this way we potentially 'force' dpkg to remove/purge a package
it doesn't want to, as another package still depends on it and the
replacement isn't fully installed yet.
So what we do now is a configure call before the remove and purge calls
(all with --no-triggers), finishing off with another pending configure
call to take care of the triggers.
Note that in the bug report example our current planner is forcing dpkg
to remove the package earlier via --force-depends, which we could do for
the pending calls as well and which could be used as a workaround, but we
want to do less forcing eventually.
Closes: 835094
|
|
This fixes some actual bugs for PROJECT and BZIP2_INCLUDE_DIR.
Gbp-Dch: ignore
|
|
Instead of erroring out when receiving a SIGINT, let the
child deal with it - we'll error out anyway if the child
exits with an error or due to the signal. Also ignore
SIGQUIT, as system() ignores it.
This basically fixes Bug #832593, but: we are running
the hooks via sh -c. Some shells exit with a signal
error even if the command they are executing catches
the signal and exits successfully. So far, this has
been noticed on dash, which, unfortunately, is our
default shell.
Example:
$ cat trap.sh
trap 'echo int' INT; sleep 10; exit 0
$ if dash -c ./trap.sh; then echo OK: $?; else echo FAIL: $?; fi
^Cint
FAIL: 130
$ if mksh -c ./trap.sh; then echo OK: $?; else echo FAIL: $?; fi
^Cint
OK: 0
$ if bash -c ./trap.sh; then echo OK: $?; else echo FAIL: $?; fi
^Cint
OK: 0
|
|
Reported-By: Mattia Rizzolo <mattia@debian.org> in #834629
|
|
We support "./foobar.deb" as a way to install a deb file directly.
Recently .changes files were added. This highlights a problem as you
can't add the changes file without also trying to install all of them.
Now, it could also be handy to add entire Packages/Sources files to
perhaps get a bunch of packages in without installing them all
implicitly.
This commit introduces --with-source which allows to add *.deb, *.changes,
*.dsc, source-dirs, Packages & Sources files (the later can also be
compressed) without also installing them.
|
|
Seen in cme (#833656): if Dir isn't set (yet) we end up later making a
path absolute which was supposed to be absolute already, so if Dir is
empty we assume it to be '/' instead. In practice this is a bug in the
software using libapt, but for maximum compatibility let's explicitly set
the default value here to be safe.
Reported-By: Paul Wise <pabs@debian.org>
Inspired-By: Brendan O'Dea <bod@debian.org>
Fixes-Regression: 475f75506db48a7fa90711fce4ed129f6a14cc9a
Shadows-Bug: #833656
|
|
In af81ab9030229b4ce6cbe28f0f0831d4896fda01 by-hash got implemented as a
special compression type for our usual index files like Packages.
Missing from this scheme was the special .diff/Index index file containing
the info about the individual patches for an index file. By deriving from
the index file class directly we inherit the compression handling
infrastructure, and with it by-hash support nearly for free.
Closes: #824926
|
|
The URI we later want to modify to get the file via by-hash was unset in
case a file was only available uncompressed (which is usually not the
case), causing an acquire error.
|
|
In af81ab9030229b4ce6cbe28f0f0831d4896fda01 we implemented by-hash as a
special compression type, which breaks this filesize setting as the code
then looks for a foobar.by-hash file. Handling this case slightly
differently gets us the intended value. Note that this has no direct
effect, as this value will be set in other ways, too, and could only
affect progress reporting.
Gbp-Dch: Ignore
|
|
If 9b8034a9fd40b4d05075fda719e61f6eb4c45678 serves the Release files
from a partial mirror we will end up getting 404 for some of the
indexes. Instead of giving up, we ignore our "same redirection
mirror" constraint and ask the redirection service again, as a potential
hashsum mismatch is better than keeping the certain 404 error.
|
|
Now that we have the redirection loop checker centrally in our items we
can also use it to prevent internal redirections from looping due to
bugs, as in a few instances we get into the business of rewriting the URI
we will query ourselves because we predict we would see such a redirect
anyway. Our code has no bugs of course, hence no practical difference. ;)
Gbp-Dch: Ignore
|
|
The failure handling frequently changes the URI & Description of the
failed item to try a slightly different combination which might work, but
the logging of the failure happens only afterwards, as the same failure
handling decides whether this is a critical error or not, so we need a
backup of the original values here instead of logging potentially new
content.
A purely cosmetic issue, but it can still be confusing for humans.
|
|
Since its introduction in 2010 DirectoryExists was always marked with this
attribute, but for no real reason. Arguably a check for the existence of
a file is not modifying global state, so theoretically this shouldn't
be a problem. It is wrong from a logical point of view though, as
between two calls the directory could be created, so the promise we made
to the compiler that it could remove the second call would be wrong; so
API-wise it is wrong.
It's a bit mysterious that this is only observable on ppc64el and can be
fixed by reordering code ever so slightly, but in the end it's more our
fault for adding this attribute than the compiler's fault for doing
something silly based on the attribute.
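Illustrative only (attribute placement as it could look, not the literal
header; the error path is hypothetical): with the function marked pure
the compiler may assume the second call returns what the first one did
and drop it.
  #include <string>
  #include <sys/stat.h>

  bool DirectoryExists(std::string const &Path) __attribute__((pure));
  void HandleMissingDirectory(std::string const &Path);   // hypothetical error path

  void EnsureDirectory(std::string const &Path)
  {
     if (DirectoryExists(Path) == false)
        mkdir(Path.c_str(), 0755);         // changes exactly the state 'pure' promised away
     if (DirectoryExists(Path) == false)   // may be folded to the cached 'false' result
        HandleMissingDirectory(Path);      // wrongly reached after a successful mkdir
  }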
LP: 1473674
|