|
Our implementation of wildcards was rudimentary. It worked for some
common ones, but it was also broken: for example, armel matched
any-armel, but should have matched any-arm.
With this commit, we load the correct tables from dpkg. Both triplet and
quadruplet tables are supported (the latter introduced in dpkg 1.18.11).
There are some odd things we have to deal with in the cache filter for
historical and API reasons:
* The character "*" must be accepted as an alternative to "any" - in
  fact it may appear anywhere in the wildcard, as we also allow fnmatch()
  style wildcard matching on the command line.
* The code might get passed an arch with a minus at the end; for
  example, the cmdline "install apt:any-arm-" will first check whether
  any-arm- is a valid architecture. We deal with this by rejecting any
  wildcard ending in a minus.
* Triplets are actually implemented by extending them to faux
  quadruplets - by prepending a "base" component to the architecture
  tuple, or "any" if there is a wildcard component.
Once we have constructed a wildcard, it is transformed into an fnmatch()
expression for historical reasons. In the future, we should really get a
tuple class and implement matching in a better, more explicit way.
This will do for now, though - it passes all the test cases and accepts
everything it should accept.
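A minimal sketch of that flow (the concrete tuple for armel is
hardcoded here as an assumption - APT loads the real tables from dpkg):

    #include <fnmatch.h>
    #include <algorithm>
    #include <iostream>
    #include <string>

    // Count the dash-separated components of an architecture (wildcard).
    static int Components(std::string const &arch) {
       return 1 + static_cast<int>(std::count(arch.begin(), arch.end(), '-'));
    }

    // Extend shorter tuples to faux quadruplets: prepend "any" for
    // wildcards, "base" for concrete architectures.
    static std::string ToQuadruplet(std::string arch, bool const wildcard) {
       while (Components(arch) < 4)
          arch.insert(0, wildcard ? "any-" : "base-");
       return arch;
    }

    int main() {
       // armel expands via the dpkg tables to the triplet gnueabi-linux-arm,
       // which becomes the quadruplet base-gnueabi-linux-arm.
       std::string const concrete = ToQuadruplet("gnueabi-linux-arm", false);
       // any-arm becomes any-any-any-arm, then the fnmatch() pattern
       // *-*-*-arm by mapping every "any" component to "*".
       std::string pattern = ToQuadruplet("any-arm", true);
       std::string::size_type pos = 0;
       while ((pos = pattern.find("any", pos)) != std::string::npos)
          pattern.replace(pos, 3, "*");
       std::cout << (fnmatch(pattern.c_str(), concrete.c_str(), 0) == 0) << "\n"; // 1
    }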
Closes: #748936
Thanks: James Clarke <jrtc27@jrtc27.com> for the initial patch
|
|
Thanks: James Clarke <jrtc27@jrtc27.com> for the implementation
Gbp-Dch: ignore
|
|
This is useful for e.g. Britney, where the Build-Depends would have to
be parsed for multiple architectures. With this change, the call can
choose the architecture without having to mess with the config.
Signed-off-by: Niels Thykier <niels@thykier.net>
Closes: #845969
(jak@d.o: made the code compile)
|
|
The idea is simple: Each¹ Find*() call starts with a check whether the
given option (with the requested type) exists in the whitelist. The
whitelist is specified via our configure-index file so that we have
a better chance of keeping it current. The whitelist is loaded via a
special (undocumented for now) configuration stanza; if none is loaded,
the empty whitelist ensures that no warnings are shown.
Much still needs to be done, but this is as good a time as any to take a
snapshot of the current state and release it into the wild, given that
it has already found some bugs and has no practical effect on users.
¹ not all in this iteration, but many
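A minimal sketch of such a check (hypothetical names - the real
whitelist comes from configure-index via the stanza mentioned above):

    #include <iostream>
    #include <set>
    #include <string>

    struct Configuration {
       std::set<std::string> Whitelist; // empty means: no checking, no warnings
       void CheckKnown(std::string const &Name, char const *Type) const {
          if (!Whitelist.empty() && Whitelist.find(Name) == Whitelist.end())
             std::cerr << "W: Unknown " << Type << " option " << Name << "\n";
       }
       bool FindB(std::string const &Name, bool const Default) const {
          CheckKnown(Name, "boolean");
          return Default; // actual lookup elided
       }
    };

    int main() {
       Configuration Cnf;
       Cnf.Whitelist = {"APT::Install-Recommends"};
       Cnf.FindB("APT::Install-Recomends", true); // typo -> warning
    }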
|
|
Interpreting a boolean as an int works just fine – it just doesn't have
the intended result – it isn't a serious problem though, as disabling
the usage of this dpkg calling style is just an "optimization"
|
|
Again, no practical difference, but for consistency a boolean option
should really be accessed via a boolean method rather than an int one –
especially if you happen to try setting the option to "true" …
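For illustration, a small sketch with libapt-pkg's Configuration class
(assuming the behaviour described in these two commits; build against
libapt-pkg):

    #include <apt-pkg/configuration.h>
    #include <iostream>

    int main() {
       Configuration Cnf;
       Cnf.Set("DPkg::UseIoNice", "true");
       // "true" is not parseable as a number, so FindI() falls back to
       // its default - silently disabling the intended behaviour.
       std::cout << Cnf.FindI("DPkg::UseIoNice", 0) << "\n";     // 0
       std::cout << Cnf.FindB("DPkg::UseIoNice", false) << "\n"; // 1
    }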
Gbp-Dch: Ignore
|
|
This can happen e.g. for file: repositories. There is no inherent
problem with setting such values internally, but it's bad style,
forbidden in the manpage and could be annoying in the future.
Gbp-Dch: Ignore
|
|
Unlikely to have any practical effect, but it's more consistent to use
the right methods instead of performing the task slightly incorrectly by hand.
Gbp-Dch: Ignore
|
|
The crude way of preparing a message to be a multiline value failed to
generate valid deb822 in case the error message ended with a newline,
like the resolving errors from apt do. apt itself can parse these, but
other tools like grep-dctrl choke on them, so be nice and print valid
output.
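A minimal sketch of the intended encoding (my own helper, not apt's
code): continuation lines get a leading space and empty lines become
" ." so the stanza does not end prematurely.

    #include <iostream>
    #include <sstream>
    #include <string>

    static std::string Deb822Value(std::string const &msg) {
       std::istringstream in(msg);
       std::string out, line;
       while (std::getline(in, line))
          out += (out.empty() ? "" : "\n ") + (line.empty() ? std::string(".") : line);
       return out;
    }

    int main() {
       // the embedded empty line becomes " ." instead of ending the stanza
       std::cout << "Message: " << Deb822Value("E: broken\n\ndetails\n") << "\n";
    }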
Reported-By: Johannes 'josch' Schauer on IRC
|
|
Any parser will do the right thing and grab the last value,
but it's better style to generate that field only once.
Gbp-Dch: Ignore
|
|
Clearsigned files like InRelease, .dsc, .changes and co can potentially
include unsigned or additional message blocks, which gpg ignores in
verification, but which are a potential source of trouble in our own
parsing attempts – and an unneeded risk, as the use cases for the
clearsigned files we deal with do not reasonably include unsigned parts
(unlike e.g. emails).
This commit changes the silent ignoring to warnings for now to get an
impression on how widespread unintended unsigned parts are, but
eventually we want to turn these into hard errors.
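A minimal sketch of the kind of check meant here: scan a clearsigned
file and complain about non-empty content outside the signed-message
envelope, which gpg would silently skip.

    #include <fstream>
    #include <iostream>
    #include <string>

    int main(int argc, char **argv) {
       if (argc != 2)
          return 1;
       std::ifstream in(argv[1]);
       std::string line;
       bool inside = false;
       while (std::getline(in, line)) {
          if (line == "-----BEGIN PGP SIGNED MESSAGE-----")
             inside = true;
          else if (line == "-----END PGP SIGNATURE-----")
             inside = false;
          else if (!inside && !line.empty()) {
             std::cerr << "W: Clearsigned file contains unsigned lines.\n";
             return 1;
          }
       }
       return 0;
    }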
|
|
Note: This is a warning about disabling a security feature. It is
supposed to be scary as we are disabling a security feature and we
can't just be silent about it! Downloads really shouldn't happen
any longer as root to decrease the attack surface – but if a warning
causes that much uproar, consider what an error would do…
The old WARNING message:
| W: Can't drop privileges for downloading as file 'foobar' couldn't be
| accessed by user '_apt'. - pkgAcquire::Run (13: Permission denied)
is frequently (incorrectly) considered to be an error message indicating
that the download didn't happen, which isn't the case: it was performed,
but without all the security features we could have used if run from
some other place…
The word "unsandboxed" is chosen as the term 'sandbox(ed)' is a common
encounter in feature lists/changelogs, and more people are hopefully able
to make the connection to 'security' than is the case for 'privilege
dropping', which is more correct but far less known.
Closes: #813786
LP: #1522675
|
|
This is a follow-up to the previous issue, where we did not check
whether getline() returned -1 due to an end of file or due to an error
like a failed memory allocation, treating both as end of file.
Here we ensure that we also handle buffered writes correctly by
flushing the files before checking for any errors in our error
stack.
Buffered writes themselves were introduced in 1.1.9, but the
function was never called with a buffered file from inside
apt until commit 46c4043d741cb2c1d54e7f5bfaa234f1b7580f6c
which was first released with apt 1.2.10. The function is
public, though, so fixing this is a good idea anyway.
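A hedged sketch of the overall pattern both this and the previous fix
aim for, with plain stdio standing in for apt's FileFd and error stack:

    #include <cerrno>
    #include <cstdio>
    #include <cstdlib>

    int main() {
       char *line = nullptr;
       size_t len = 0;
       errno = 0;
       while (getline(&line, &len, stdin) != -1) {
          fputs(line, stdout);
          errno = 0; // reset so EOF below is not mistaken for a stale error
       }
       if (errno != 0) { // -1 caused by e.g. ENOMEM, not by end of file
          perror("getline");
          return EXIT_FAILURE;
       }
       if (fflush(stdout) != 0) { // surface buffered write errors as well
          perror("fflush");
          return EXIT_FAILURE;
       }
       free(line);
       return EXIT_SUCCESS;
    }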
Affected: >= 1.1.9
|
|
This fixes a security issue where signatures of the
InRelease files could be circumvented in a man-in-the-middle
attack, giving attackers the ability to serve any packages
they want to a system, in turn giving them root access.
It turns out that getline() may not only fail with EINVAL
as stated in the documentation - it might also fail in case
of an error while allocating memory.
This fix not only adds a check that reading worked
correctly, it also implicitly checks that all writes
worked by reporting any other error that occurred inside
the loop and was logged by apt.
Affected: >= 0.9.8
Reported-By: Jann Horn <jannh@google.com>
Thanks: Jann Horn, Google Project Zero for reporting the issue
LP: #1647467
|
|
In ad9416611ab83f7799f2dcb4bf7f3ef30e9fe6f8 we fall back to asking the
original mirror (e.g. a redirector) if we do not get the expected
result. This works for the indexes, but patches are a different beast
and much simpler. Adding this fallback code here seems like overkill as
patches usually live right next to their Index file, so instead we
forward the relevant settings to the patch items. That fixes pdiff
support combined with a redirector and partial mirrors, a situation in
which the pdiff patches would 404 and the complete index would be
downloaded instead.
|
|
We already report warnings from apt-key this way since
29c590951f812d9e9c4f17706e34f2c3315fb1f6, so reporting errors seems like
a good addition. Most of those errors aren't really from apt-key
itself though, but from the code setting up and actually calling it,
which used to just print to stderr - which might or might not intermix
them with (other) progress lines in update calls. Having them as proper
error messages in the system means that the errors are actually
collected later on for the list, instead of us ending up with our
relatively generic, but in those cases bogus, hint regarding
"is gpgv installed?".
The effective difference is minimal as the errors apply mostly to
systems which have far worse problems than a not-so-nice-looking error
message, which makes this pretty hard to test – but at least now the
hint that your system is broken can be read in the proper order (= there
aren't many valid cases in which the permissions of /tmp are messed up…).
LP: #1522988
|
|
We try to configure at the end all packages which still need to be
configured, but that also applies to packages which weren't completely
installed (e.g. because a maintainer script failed) and which we end up
removing in this interaction instead.
APT doesn't perform this explicit configure in the end as it is using
"dpkg --configure --pending", but it does confuse the progress report
and potentially also hook scripts.
Regression-Of: 9ffbac99e52c91182ed8ff8678a994626b194e69
|
|
dpkg stumbles over these (#844300), and so far we haven't made 'easier'
removes implicit and scheduled by dpkg by default, so we shouldn't push
the decision in such cases to dpkg either.
|
|
Our old idea was to look for the first package which would be "touched"
and take this as the package dpkg is talking about, but that is
incorrect in complicated situations like a package upgraded to/from
multiple installed M-A:same siblings.
As we use the progress report to decide what is still needed, we have to
be reasonably right about the package dpkg is talking about, so we jump
through quite a few hoops to get it.
|
|
Given that we use the progress information to skip over actions dpkg has
already done – like not purging a package which was already removed and
had no config files, or not acting on disappeared packages and such – it
is important that apt and dpkg agree on which states the package has to
pass through.
To ensure that we keep tabs on this in the future, a warning is added at
the end if apt hasn't seen all the actions it was supposed to see. I
can't wait for the first bug reporters to wonder about this…
|
|
If a package is triggered, dpkg frequently issues two messages about it,
causing us to make a note about it both times, which messes up our
planned dpkg actions view. Adding these actions only if we have nothing
else planned fixes this and should still be correct, as those planned
actions will deal with the triggering just fine – and we avoid strange
problems like a package being triggered before it is removed…
|
|
Our profile says we spend about 5% of the time transforming the
hex digits into the binary format used by HashsumValue, all for
comparing them against the other strings. That makes no sense
at all.
According to callgrind, this reduces the overall instruction
count from 5.3 billion to 5 billion in my example, which
roughly matches the 5%.
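The alternative is as simple as it sounds - a sketch: equal-length hex
digests can be compared as strings directly, without decoding either
side to binary first.

    #include <strings.h> // strcasecmp
    #include <iostream>
    #include <string>

    static bool SameHexDigest(std::string const &a, std::string const &b) {
       return a.length() == b.length() && strcasecmp(a.c_str(), b.c_str()) == 0;
    }

    int main() {
       std::cout << SameHexDigest("DEADBEEF", "deadbeef") << "\n"; // 1
    }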
|
|
Generating a string for each version we see is somewhat inefficient.
The problem here is that the Description tag names are longer than
15 bytes and thus require an allocation on the heap, which we should
avoid.
It seems reasonable that 20 characters suffice for all language codes
used for archive descriptions, but if not, there's a warning, so
we'll catch that.
This should improve performance by about 2%.
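A sketch of the buffer idea (sizes as described above; the exact tag
layout in apt may differ):

    #include <cstdio>

    int main() {
       // "Description-en_GB" is 17 characters - beyond the usual 15-byte
       // SSO limit of std::string, but comfortably within this buffer.
       char tag[20]; // assumed to fit every "Description-<lang>" tag
       int const len = snprintf(tag, sizeof(tag), "Description-%s", "en_GB");
       if (len < 0 || static_cast<unsigned>(len) >= sizeof(tag))
          fprintf(stderr, "W: language code too long for fixed buffer\n");
       else
          printf("%s\n", tag);
    }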
|
|
This has the effect of significantly reducing actual string
comparisons, and should improve the performance of FindGrp
a bit, although it's hardly measurable (callgrind says it
uses 10% fewer instructions now).
|
|
Stop copying stuff, and just pass the bytes one by one to the
newly created AddCRC16Byte. This improves the instruction count
for an update run from 720,850,121 to 455,801,749 according to
callgrind.
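A sketch of a byte-wise update function like the one described (the
reflected 0x8005 polynomial is my assumption; apt's actual table may
differ):

    #include <cstdint>
    #include <cstdio>

    static uint16_t AddCRC16Byte(uint16_t crc, uint8_t byte) {
       crc ^= byte;
       for (int i = 0; i < 8; ++i)
          crc = (crc & 1) ? (crc >> 1) ^ 0xA001 : crc >> 1;
       return crc;
    }

    int main() {
       uint16_t crc = 0xFFFF;
       for (unsigned char const c : {'a', 'p', 't'})
          crc = AddCRC16Byte(crc, c); // fed directly, no buffer copy
       printf("%04x\n", crc);
    }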
|
|
This one has some obvious collisions for non-alphabetical characters
– some control characters hash to the same slots as numbers, for
example – but we don't really have those, and these are hash functions
which are not collision-free to begin with.
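A sketch of the kind of folding meant here (not apt's exact function):
masking with 0x1F makes 'A' and 'a' collide by design, but also maps
e.g. the control character 0x10 onto the digit '0'.

    #include <cstdio>

    static unsigned AlphaHash(char const *text) {
       unsigned res = 0;
       for (; *text != '\0'; ++text)
          res = (res << 2) ^ static_cast<unsigned>(*text & 0x1F);
       return res % 128; // modulo the table size
    }

    int main() {
       printf("%u %u\n", AlphaHash("Package"), AlphaHash("PACKAGE")); // equal
    }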
|
|
We already have two stable series with major version 10, and
the next commits will introduce non-backportable performance
changes that affect the cache algorithms, so we need to bump
the major version now to prevent future problems.
|
|
This basically gets rid of 40-50% of the hash table lookups,
making things a bit faster that way, and the profiles look
far cleaner.
|
|
Introduce a new enum class and add functions that can do a lookup
with that enum class. This uses triehash.
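A stand-in sketch of the lookup shape (hypothetical tag set; triehash
actually generates a trie of switch statements over length and
characters):

    #include <cstddef>
    #include <cstring>
    #include <iostream>

    enum class Tag { Package, Version, Description, Unknown };

    static Tag TagOf(char const *name, std::size_t len) {
       if (len == 7 && memcmp(name, "Package", 7) == 0) return Tag::Package;
       if (len == 7 && memcmp(name, "Version", 7) == 0) return Tag::Version;
       if (len == 11 && memcmp(name, "Description", 11) == 0) return Tag::Description;
       return Tag::Unknown;
    }

    int main() {
       std::cout << (TagOf("Package", 7) == Tag::Package) << "\n"; // 1
    }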
|
|
This allows us to add a perfect hash function to the tag file
without having to reimplement the methods a second time.
|
|
Move the use of the AlphaHash to a new second hash table in
preparation for the arrival of the new perfect hash function.
With the new perfect hash function hashing most of the keys for
us, having 128 slots for a fallback hash function seems enough
and prevents us from wasting space.
|
|
We have the last Release file around for other checks, so it's trivial to
look whether the new Release file contains a new codename (e.g. the user
has "testing" in the sources and it flipped from stretch to buster).
Such a change can be okay and expected, but it can also be a hint of
problems, so a warning if we see it happen seems okay. We can only print
it once anyhow, and frontends and co are likely to ignore/hide it.
|
|
A suite or codename entry in the Release file is checked against the
distribution field in the sources.list entry that led to the download of that
Release file. This distribution field can contain slashes:
deb http://security.debian.org/debian wheezy/updates main
However, the Release file may only contain "wheezy" in the Codename field and
not "wheezy/updates". So a transformation needs to take place that removes the
last / and everything that comes after it (e.g. "/updates"). This fails,
however, for valid cases like a reprepro snapshot where the given Codename
contains slashes but is perfectly fine and doesn't need to be transformed.
Since that transformation is essentially just a workaround for special cases
like the security repository, the literal Codename without any transformation
should be checked first, and only if it isn't valid should the dist be checked
against the transformed one.
This way special cases like security.debian.org are handled and reprepro
snapshots work too.
The initial patch was taken as inspiration to move the whole transformation
into CheckDist(), which makes this method more accepting & easier to use
(though according to codesearch.d.n we are the only users anyhow).
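A minimal sketch of the resulting two-step check (free-standing helper,
not the real method's signature):

    #include <iostream>
    #include <string>

    static bool MatchesRelease(std::string const &dist,
                               std::string const &codename) {
       return dist == codename;
    }

    static bool CheckDist(std::string const &dist, std::string const &codename) {
       // literal match first: keeps reprepro codenames with slashes working
       if (MatchesRelease(dist, codename))
          return true;
       std::string::size_type const slash = dist.rfind('/');
       if (slash == std::string::npos)
          return false;
       // "wheezy/updates" -> "wheezy" for cases like security.debian.org
       return MatchesRelease(dist.substr(0, slash), codename);
    }

    int main() {
       std::cout << CheckDist("wheezy/updates", "wheezy")
                 << CheckDist("snapshots/2016/09", "snapshots/2016/09") << "\n"; // 11
    }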
Thanks: Lukas Anzinger for initial patch
Closes: 644610
|
|
You can pretty much achieve the same with a local dummy package if you
want to, but libapt has an inbuilt setting for essential ("apt") which
can be overridden with this option as well – it could be helpful in
quick tests and whatnot, so adding this alternative shouldn't really
hurt much.
We aren't going to document it much though, as care must be taken in
regards to the binary caches: they aren't invalidated by config options
alone, so the effects of old settings could still be in them, similar to
the other already existing pkgCacheGen option(s).
Closes: 767891
Thanks: Anthony Towns for initial patch
|
|
apt tools do not really support these other variables, but tools apt
calls might, so let's play it safe and clean those up as needed.
Reported-By: Paul Wise (pabs) on IRC
|
|
Reported-By: Christoph Berg (Myon) on IRC
|
|
Some people do not recognize the field value with such an arcane name
and/or expect it to refer to something different (e.g. #839257).
We can't just rename it internally, as the arcane name is an avoidance
strategy: such a field name existed previously with less clear
semantics. But we can spare the general public from this implementation
detail.
|
|
Sometimes you should really act upon your todos.
Especially if you have placed them directly in the code.
Closes: 841874
|
|
We can't clean up the environment like e.g. sudo would do, as you
usually want the environment to "leak" into these helpers, but some
variables like HOME should really not still carry the value of the root
user – it could confuse the helpers (USER), and HOME isn't accessible
anyhow.
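A sketch of the reset meant here (the usual identity variables; not an
exhaustive list, and "_apt" would be the user on a real system):

    #include <cstdlib>
    #include <pwd.h>

    static void ResetEnvForUser(char const *user) {
       if (struct passwd const *pw = getpwnam(user)) {
          setenv("HOME", pw->pw_dir, 1);
          setenv("USER", pw->pw_name, 1);
          setenv("LOGNAME", pw->pw_name, 1);
       }
    }

    int main() {
       ResetEnvForUser("nobody"); // stands in for "_apt"
    }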
Closes: 842877
|
|
These new enum values might cause "interesting" behaviour in tools not
expecting them – an old apt would think a Build-Conflicts-Arch is some
sort of Build-Depends, for example – but that can't reasonably be
avoided and affects only packages using B-D/C-A, so if there is any
breakage the tools can easily be adapted.
The APT_PKG_RELEASE number is increased so that libapt users can detect
the availability of these new enum fields via:
#if APT_PKG_ABI > 500 || (APT_PKG_ABI == 500 && APT_PKG_RELEASE >= 1)
Closes: #837395
|
|
A user relying on the deprecated behaviour of apt-get to accept a source
with an unknown pubkey to install a package containing the key expects
that the following 'apt-get update' causes the source to be considered
as trusted, but in case the source hadn't changed in the meantime this
wasn't happening: the source kept being untrusted until the Release file
was changed.
This only affects sources not using InRelease, and only apt-get; the apt
binary downright refuses this course of action, but it is a common way
of adding external sources.
Closes: 838779
|
|
This fixes a regression introduced in
commit 8f858d560e3b7b475c623c4e242d1edce246025a
("don't leak FD in AutoProxyDetect command return parsing"),
which accidentally made the proxy autodetection code read the script's
output on stderr as well, not only on stdout, when it switched the code
from popen() to Popen().
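For illustration, a sketch of why the distinction matters - only stdout
may be connected to the parsed stream, or diagnostics get read as if
they were the proxy URL:

    #include <cstdio>
    #include <sys/wait.h>
    #include <unistd.h>

    int main() {
       int pipefd[2];
       if (pipe(pipefd) != 0)
          return 1;
       pid_t const child = fork();
       if (child == 0) {
          close(pipefd[0]);
          dup2(pipefd[1], STDOUT_FILENO); // stdout only - NOT stderr
          execlp("sh", "sh", "-c", "echo http://proxy:3128; echo oops >&2", (char *)NULL);
          _exit(127);
       }
       close(pipefd[1]);
       char buf[256];
       ssize_t const n = read(pipefd[0], buf, sizeof(buf) - 1);
       if (n > 0) {
          buf[n] = '\0';
          fputs(buf, stdout); // prints only the proxy URL, not "oops"
       }
       waitpid(child, NULL, 0);
    }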
Reported-By: Tim Small <tim@seoss.co.uk>
|
|
If the dependency line does not contain spaces in the repository
but does in the dpkg status file (because dpkg normalized the
dependency list), the dpkg line might be longer than the line
in the repository. If it now happens to be longer than 1024
characters, it would be skipped, causing the hashes to be
out of date.
Note that we have to bump the minor cache version again as
this changes the format slightly, and we might get mismatches
with an older src cache otherwise.
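A sketch of the robust pattern: read into a growing std::string instead
of a fixed 1024-byte buffer, so a line lengthened by dpkg's
normalization is never skipped.

    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
       std::ifstream status("/var/lib/dpkg/status");
       std::string line; // grows as needed - no fixed ceiling
       while (std::getline(status, line))
          if (line.compare(0, 9, "Depends: ") == 0)
             std::cout << line << "\n";
    }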
Fixes Debian/apt#23
|
|
We need to ignore messages from gcov. All those messages
start with "profiling:" and are printed using vfprintf(), so
the only thing we can do is add a library overriding those
functions and link apt-pkg against it.
|
|
Commit b60c8a89c281f2bb945d426d2215cbf8f5760738 improved the situation,
but due to inconsistency mostly for planners, not for solvers. As the
idea of hiding errors if we show another error is a bit scary (the
external error might be a followup of our internal error, rather than
the reason for it as it is at the moment), we don't discard the errors:
if we got an external error, we show our own errors directly, removing
them from the error list at the end of the run – that list will then
contain the external error, which hopefully gives us the best of both
worlds.
The problem itself is the same as before: The externals exiting before
apt is done talking to them.
Reported-By: Johannes 'josch' Schauer on IRC
|
|
Employ a priority queue instead of a normal queue to hold
the items; and only add items to the running pipeline if
their priority is the same or higher than the priority
of items in the queue.
The priorities are designed for a 3 stage pipeline system:
In stage 1, all Release files and .diff/Index files are fetched. This
allows us to determine what files remain to be fetched, and thus
ensures a usable progress reporting.
In stage 2, all Pdiff patches are fetched, so we can apply them
in parallel with fetching other files in stage 3.
In stage 3, all other files are fetched (complete index files
such as Contents, Packages).
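A minimal sketch of the scheduling idea (hypothetical item type; the
real queue additionally refuses to start an item while higher-priority
items are still in flight):

    #include <iostream>
    #include <queue>
    #include <string>

    struct Item {
       std::string uri;
       int priority; // 3: Release/.diff/Index, 2: pdiff patches, 1: the rest
       bool operator<(Item const &other) const {
          return priority < other.priority; // max-heap: stage 1 first
       }
    };

    int main() {
       std::priority_queue<Item> pending;
       pending.push({"dists/sid/main/Contents-amd64.gz", 1});
       pending.push({"dists/sid/InRelease", 3});
       pending.push({"dists/sid/main/Packages.diff/patch-1.gz", 2});
       while (!pending.empty()) {
          std::cout << "stage " << 4 - pending.top().priority
                    << ": fetch " << pending.top().uri << "\n";
          pending.pop();
       }
    }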
Performance improvements, mainly from fetching the pdiff patches
before complete files, so they can be applied in parallel:
For the 01 Sep 2016 03:35:23 UTC -> 02 Sep 2016 09:25:37 update
of Debian unstable and testing with Contents and appstream for
amd64 and i386, update time reduced from 37 seconds to 24-28
seconds.
Previously, apt would first download new DEP11 icon tarballs
and metadata files, causing the CPU to be idle. By fetching
the diffs in stage 2, we can now patch our contents and Packages
files while we are downloading the DEP11 stuff.
|
|
This accidentally used ICONV_DIRECTORIES, which does not
even exist. Weird.
|
|
memcpy is marked as nonnull for its input, but ignores the input anyhow
if the declared length is zero. Our SHA2 implementations already do
this; it was "just" MD5 and SHA1 missing. So we add the length check
here as well as along the callstack, as it is really pointless to do all
these method calls for "nothing".
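The guard itself is trivial - a sketch with memcpy standing in for the
hash update:

    #include <cstdio>
    #include <cstring>

    static void Add(unsigned char *to, unsigned char const *data, unsigned long long len) {
       if (len == 0)
          return; // never hand memcpy a null pointer, even with length 0
       memcpy(to, data, len); // stand-in for the real MD5/SHA1 update
    }

    int main() {
       unsigned char sum[16];
       Add(sum, nullptr, 0); // fine: the guard returns before memcpy
       printf("ok\n");
    }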
Reported-By: gcc -fsanitize=undefined
|