Age | Commit message | Author |
|
Git-Dch: Ignore
|
|
Git-Dch: Ignore
|
|
The md5sum hash has been broken for some time now and we should no longer
consider it a usable hash. Also update the tests to reflect this.
|
|
testsuccess checks the return code, but it also runs some automatic
checks based on the command, like grepping for dpkg warnings in an
apt-get install call – but if such a check finds something, it only
shows the grep command. With this change it will additionally show the
first msgtest, which in this case details the actual apt-get install call.
Git-Dch: Ignore
|
|
Non-quiet output is very verbose and with our growing array of tests
generates very many lines, which e.g. kills the log display in travis-ci
and buries failures and uncaught output in a wall of details.
The -q mode fixed this by collapsing passed tests to a single P, and now
with some rework we can even get failures properly displayed with the
message from msgtest.
Git-Dch: Ignore
|
|
While Target{,-Add,-Remove} is available for configuring IndexTargets
already, allow Targets to be mentioned explicitly as yes/no options as
well, so that the Target 'Contents' can be disabled via 'Contents: no'
as well as 'Target-Remove: Contents'.
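A hedged sketch of what this could look like in a deb822 sources.list stanza (the URI is a placeholder; sources.list(5) has the authoritative syntax):

    Types: deb
    URIs: http://deb.example.org/debian
    Suites: stable
    Components: main
    # either of the following two lines is meant to disable fetching of Contents files:
    Contents: no
    # Target-Remove: Contents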
|
|
Reported-By: codespell
|
|
Thanks: Steve Langasek for the suggestion
Closes: 695633
|
|
This makes travis-ci able to run our tests again.
Sometimes.
If it doesn't spontaneously fail with internal gcc errors…
Git-Dch: Ignore
|
|
Having the handling in MarkInstall means that it only affects the
installation of the metapackage, but if the dependencies change, the new
dependencies aren't protected (and the old dependencies are still
protected for no 'reason'). Having it in MarkDelete means that if a
metapackage is scheduled for removal, all its currently installed
dependencies are marked as manual, which helps against both problems: in
this case there is no new/old distinction, and additionally, if a user
decides the installation of a metapackage was wrong, he can just remove
it explicitly and avoid the manual marking entirely.
|
|
In 50ef3344c3afaaf9943142906b2f976a0337d264 (and similar for other
branches), while 'fixing' the edge case of a package being in multiple
sections (e.g. moved from libs to oldlibs in newer releases) I
accidentally broke the feature itself completely by operating on the
package itself and no longer on its dependencies…
The behaviour isn't ideal in multiple ways, which we will hopefully be
able to fix with new ideas as mentioned in the buglog, but until then the
functionality of this "hack" should be restored.
Reported-By: Raphaël Hertzog <hertzog@debian.org>
Tested-By: Adam Conrad <adconrad@ubuntu.com>
Closes: 793360
LP: 1479207
Thanks: Raphaël Hertzog and Adam Conrad for detailed reports and initial patches
|
|
Trade deduplication of code for a bunch of new virtuals, so it is
actually visible how the different indexes behave, cleaning up the
interface at large in the process.
Git-Dch: Ignore
|
|
Sources are usually defined in sources.list (and co) and are pretty
stable, but once in a while a frontend might want to add an additional
"source" like a local .deb file to install this package (no support for
'real' sources being added this way, as that is a multistep process).
We have had a hack in place to allow apt-get and apt to pull this off for
a short while now, but other frontends are either left in the cold by it
and/or the code for it looks dirty with FIXMEs plastering it and on top
of that also has some problems (like including these 'volatile' sources
in the srcpkgcache.bin file).
So the biggest part of this commit is actually the rewrite of the cache
generation, as it is now potentially a three-step process. The biggest
problem with adding support now though is that this makes a bunch of
classes public which were previously mostly unusable by external code and
therefore hidden, so a bit of further tuning of this now public API is in order…
|
|
Limits which key(s) can be used to sign a repository. Not immensely useful
from a security perspective all by itself, but if the user has
additional measures in place to confine a repository (like pinning), an
attacker who gets hold of the key for such a repository is limited to its
potential and can't use the key to sign his attacks for another (maybe
less limited) repository… (yes, this is as weak as it sounds, but having
the capability might come in handy for implementing other stuff later).
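For illustration, a hedged sketch of pinning a repository to a single key in sources.list (keyring path and URI are placeholders):

    # one-line format:
    deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] http://deb.example.org/debian stable main
    # deb822 format field:
    # Signed-By: /usr/share/keyrings/example-archive-keyring.gpg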
|
|
Various small leaks here and there. Nothing particularly big, but still
good to fix. Found by the sanitizers while running our testcases.
Reported-By: gcc -fsanitize
Git-Dch: Ignore
|
|
Progress is reported once in a while, which is a bit too unpredictable for
testcases, so we enforce a steady progress for them in the hope that
this makes the tests (mostly test-apt-progress-fd) a bit more stable.
Git-Dch: Ignore
|
|
It shouldn't be too common, but sometimes people have multiple mirrors
in their sources or otherwise repositories with the same content. Now that
we can gracefully handle multiple requests to the same URI, we can also
fold multiple requests with the same expected hashes into one. Note that
this isn't trying to find opportunities for merging, but just merges if
it happens to encounter the opportunity for it.
This is most obvious in the new testcase, actually, as it needs to delay
the action to give the acquire system enough time to figure out that
the requests can be merged.
|
|
Provided is a specialized acquire item which, given a version, can figure
out the correct URI to try by itself and, if not, provides an error
message, alongside static methods to get just the URI it would try
to download in case it should only be displayed or similar.
The URI is constructed as follows:
Release files can provide a URI template in the "Changelogs" field;
otherwise we look up a configuration item based on the "Label" or
"Origin" of the Release file to get a (hopefully known) default value
for now. This template should contain the string CHANGEPATH, which is
replaced with the information about the version we want the changelog
for (e.g. main/a/apt/apt_1.1). This middle way was chosen as this path
part was consistent across the three known implementations (+1 defunct),
while the rest of the URI varies widely between them.
The benefit of this construct is that it is now easy to get changelogs
for Debian packages on Ubuntu and vice versa – even at the moment where
the Changelogs field is present nowhere. Strictly better than what
apt-get had before, as it would even fail to get changelogs from
security… Now it will notice that security identifies as Origin: Debian
and pick this setting (assuming again that no Changelogs field exists).
If, on the other hand, security were to ship its changelogs in a different
location, we could set that via the Label option, overruling Origin.
Closes: 687147, 739854, 784027, 787190
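A hedged illustration of the template mechanism (host and template are placeholders, not necessarily the real defaults):

    # In a Release file (or configured as a default for its Origin/Label):
    Changelogs: http://changelogs.example.org/changelogs/CHANGEPATH_changelog
    # For apt 1.1 in main, CHANGEPATH expands to main/a/apt/apt_1.1, so
    # 'apt-get changelog apt' would fetch
    # http://changelogs.example.org/changelogs/main/a/apt/apt_1.1_changelog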
|
|
We used to read the Release file for each Packages file and store the
data in the PackageFile struct, even though potentially many Packages
(and Translation-*) files could use the same data. The point of the
exercise isn't the duplicated data though: Having the Release files as
first-class citizens in the Cache allows us to properly track their
state as well as to use the information also for files which
aren't in the cache, but where we know to which Release file they
belong (Sources are an example for this).
This modifies the pkgCache structs, especially the PackageFile struct,
which, depending on how libapt users access the data in these structs, can
mean huge breakage or no visible change. As a single data point:
aptitude seems to be fine with this. Even if there is breakage it is
trivial to fix in a backportable way, while avoiding breakage for
everyone would be a huge pain for us.
Note that not all PackageFile structs have a corresponding ReleaseFile.
In particular the dpkg/status file as well as *.deb files have none. As
these only need an Archive property, the Component property takes
over this duty and the ReleaseFile remains zero. This is also the reason
why it isn't needed nor particularly recommended to change from
PackageFile to ReleaseFile blindly. Sticking with the former is
usually the better option.
|
|
We have two places in the code which need to iterate over targets and do
certain things with them. The first one actually creates these targets
for download and the second instance prepares certain targets for
reading.
Git-Dch: Ignore
|
|
If we have a file on disk and the hashes are the same in the new Release
file and the old one we have on disk, we know that if we ask the server
for the file, we will at best get an IMS hit – at worst the server
doesn't support this and sends us the (unchanged) file and we have to
run all our checks on it again for nothing. So, we can save ourselves
(and the servers) some unneeded requests if we figure this out on our
own.
|
|
Valid-Until protects us from long-living downgrade attacks, but not all
repositories have it and an attacker could still use older but still
valid files to downgrade us. While this makes it sound like a security
improvement, it is a bit theoretical at best, as an attacker with the
capabilities to pull this off could just as well always keep us days
(but within the valid period) behind and always knows which state we have,
as we tell him with the If-Modified-Since header. This is also why this
is 'silently' ignored and treated as an IMSHit rather than screamed at
the user, as this can at best be an annoyance for attackers.
An error here would 'regularly' be encountered by users due to out-of-sync
mirrors serving a single run (e.g. behind a load balancer) or in two
consecutive runs, so it would just teach people to ignore it.
That said, most of the code churn is caused by enforcing this additional
requirement. Crisscross from InRelease to Release.gpg is e.g. very
unlikely in practice, but if we ignored it an attacker could
sidestep it this way.
|
|
Not all servers we are talking to support If-Modified-Since and some are
not even sending Last-Modified to us, so in an effort to detect such
hits we run a hashsum check on the 'old' compared to the 'new' file. We
get the hashes for the 'new' one already for "free" from the methods
anyway and hence only need to calculate the old ones.
This allows us to detect hits even with unsupported servers, which in
turn means we also benefit from all the new hit behaviour here.
|
|
If we have the expected hashes, we can use them to check whether the file
we have in partial/ – the one we got a 416 for – is the expected file. We
detected this via matching sizes before, but not every server sends a good
Content-Range header with a 416 response.
|
|
dpkg and dak know various field names and order them in their output,
while we have yet another order and have to play catch-up with them, as
we are sitting between chairs here and neither order is ideal for us
either.
A little testcase is from now on supposed to help ensure that we do
not deviate too far from which fields dpkg knows and how it orders them.
|
|
Especially pdiff-enhanced downloads have the tendency to fail for
various reasons from which we can recover, and even a successful download
used to leave the old unpatched index in partial/.
By adding a new method responsible for making the transaction of an
individual file happen, we can add specialisations, especially for abort
cases, to deal with the cleanup.
This also helps in keeping the compressed indexes around if another
index failed, instead of keeping the decompressed files, which we
wouldn't pick up in the next call.
|
|
If we get an IMSHit for the Transaction-Manager (= the InRelease file or,
as its still supported fallback, the Release + Release.gpg combo), we can
assume that every file we would queue based on this manager, but already
have locally, is current and hence would get an IMSHit, too. We therefore
save ourselves and the server the trouble and skip the queuing in this case.
Besides speeding up repetitive executions of 'apt-get update' this way, we
also avoid hitting hashsum errors if the indexes are in fact already
updated, but the Release file isn't yet, as is the case on well-behaving
mirrors since the Release file is updated last.
The implementation is a bit harder than the theory makes it sound, as we
still have to keep reverifying the Release files (e.g. to detect now expired
ones to avoid an attacker being able to silently stale us) and have to
handle cases in which the Release file hits, but some indexes aren't
present (e.g. the user added a new foreign architecture).
|
|
It's a bit unpredictable which permissions and owners we will encounter on
a CD-ROM (or a USB stick, as apt-cdrom is responsible for those too),
so we have to ensure in this codepath as well that everything is nicely
set up without waiting for an 'apt-get update' to fix up the (potential)
mess.
|
|
file sends information about the uncompressed file if it can find it, as
well as for the compressed file. This was done only for gzip so far, but
we support more compression types. That this information isn't used a
lot is a different story.
Git-Dch: Ignore
|
|
The worker expects that the methods tell it when they start or finish
downloading a file. Various pieces of information are passed along in this
report, including the (expected) filesize. https was using a "global"
struct for reporting, which made it 'reuse' incorrect values in some
cases, like a non-existent InRelease falling back to Release{,.gpg},
resulting in a size-mismatch warning. By reducing the scope and redesigning
how the values are set, we can fix this and related issues.
Closes: 777565, 781509
Thanks: Robert Edmonds and Anders Kaseorg for the initial patches
|
|
We use test{success,failure} now all over the place in the framework, so
it is only logical to do this in the situations in which we test for
a specific output as well.
Git-Dch: Ignore
|
|
|
|
We were already able to run maintainer scripts if needed, but we need to
set ADMINDIR to be able to call dpkg tools like dpkg-trigger inside of
them.
Git-Dch: Ignore
|
|
Your mileage may vary, but don't worry: There is more than one way to
do it, but our one-size-fits-all is not a bigger hammer, but an entire
roundhouse kick! So brace yourself for the tl;dr: The limit is gone.*
Beware: This also fixes the problem that a double newline is
unconditionally added 'later', which is too much in case
the dsc filesize is limit-2 <= x <= limit.
* limited to numbers fitting into an unsigned long long.
Closes: 774893
|
|
dpkg now checks for dependencies before running triggers, so that
packages can end up in trigger states (especially those we are not
touching at all with our calls) after apt is done running.
The solution to this is trivial: Just tell dpkg to configure everything
after we have (supposedly) configured everything already. In the worst
case this means dpkg will have to run a bunch of triggers; usually it
will just do nothing though.
The code to make this happen was already available, so we just flip a
config option here to cause it to be run. This way we can keep
pretending that triggers are an implementation detail of dpkg.
--triggers-only would supposedly work as well, but --configure is more
robust in regards to future changes in dpkg and something we will
hopefully make use of in future versions anyway (as was planned at the
time this and related options were implemented).
Note that dpkg currently has a workaround implemented to allow upgrades
to jessie to be clean, so that the test works before and after. Also
note that the test (compared to the one in the bug) drops the await part,
as it is considered a loop by dpkg now.
Closes: 769609
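The commit doesn't name the option, but presumably it is DPkg::ConfigurePending, which apt.conf(5) documents as making apt call 'dpkg --configure --pending' at the end of its run; a hedged sketch of forcing it on:

    // apt.conf snippet: let dpkg resolve leftover trigger states afterwards
    DPkg::ConfigurePending "true";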
|
|
If we have no controlling terminal, opening a terminal will make this
terminal our controller, which is a serious problem if this happens to
be the pseudo terminal we created to run dpkg in, as we will close this
terminal at the end, hanging ourselves up in the process…
The offending open is the one we do to have at least one slave fd open
all the time, but for good measure we apply the flag also to the slave
fd opening in the child process, as we set the controlling terminal
explicitly there.
This is a regression from 150bdc9ca5d656f9fba94d37c5f4f183b02bd746 with
the slight twist that this use case was silently broken before in that it
wasn't logging the output in term.log (as a pseudo terminal wasn't
created).
Closes: 772641
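A minimal sketch of the idea (not APT's actual code): open the slave side of the pty with O_NOCTTY so that a process without a controlling terminal does not accidentally adopt the pty as its controller.

    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main() {
        // Grab a pseudo terminal master; O_NOCTTY matters again when the
        // slave side is opened below.
        int master = posix_openpt(O_RDWR | O_NOCTTY);
        if (master == -1 || grantpt(master) == -1 || unlockpt(master) == -1)
            return 1;
        // Without O_NOCTTY a process lacking a controlling terminal would
        // acquire this pty as its controlling terminal right here.
        int slave = open(ptsname(master), O_RDWR | O_NOCTTY);
        if (slave == -1)
            return 1;
        close(slave);
        close(master);
        return 0;
    }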
|
|
Real webservers (like apache) actually send an error page with a 416
response, but our client didn't expect it, leaving the page on the socket
to be parsed as the response for the next request (http) or as file
content (https), which isn't what we want at all… The symptom is a "Bad
header line" error, as HTML usually doesn't parse that well into an
HTTP header.
This manifests itself e.g. if we have a complete file (or larger) in
partial/ which isn't discarded by If-Range as the server doesn't support
it (or it is just newer, think: mirror rotation).
It is a sort-of regression of 78c72d0ce22e00b194251445aae306df357d5c1a,
which removed the filesize - 1 trick, but that had its own problems…
To properly test this, our webserver gains the ability to reply with
Transfer-Encoding: chunked, as most real webservers will use it to send
dynamically generated error pages.
(The tests and their binary helpers had to be slightly modified to
apply, but the patch to fix the issue itself is unchanged.)
Closes: 768797
|
|
Real webservers (like apache) actually send an error page with a 416
response, but our client didn't expect it, leaving the page on the socket
to be parsed as the response for the next request (http) or as file
content (https), which isn't what we want at all… The symptom is a "Bad
header line" error, as HTML usually doesn't parse that well into an
HTTP header.
This manifests itself e.g. if we have a complete file (or larger) in
partial/ which isn't discarded by If-Range as the server doesn't support
it (or it is just newer, think: mirror rotation).
It is a sort-of regression of 78c72d0ce22e00b194251445aae306df357d5c1a,
which removed the filesize - 1 trick, but that had its own problems…
To properly test this, our webserver gains the ability to reply with
Transfer-Encoding: chunked, as most real webservers will use it to send
dynamically generated error pages.
Closes: 768797
|
|
dpkg now checks for dependencies before running triggers, so that
packages can end up in trigger states (especially those we are not
touching at all with our calls) after apt is done running.
The solution to this is trivial: Just tell dpkg to configure everything
after we have (supposedly) configured everything already. In the worst
case this means dpkg will have to run a bunch of triggers; usually it
will just do nothing though.
The code to make this happen was already available, so we just flip a
config option here to cause it to be run. This way we can keep
pretending that triggers are an implementation detail of dpkg.
--triggers-only would supposedly work as well, but --configure is more
robust in regards to future changes in dpkg and something we will
hopefully make use of in future versions anyway (as was planned at the
time this and related options were implemented).
Closes: 769609
|
|
We were already considering these cases, but the code was flawed, so
that packages changing architecture were handled incorrectly and dpkg was
called with the wrong architecture, so that dpkg says the package isn't
installed (which it isn't for the requested architecture).
Closes: 770898
|
|
The fd moves out of scope here anyway, so we should close it properly
instead of leaking it, which would trickle down to dpkg maintainer scripts.
Closes: 767774
|
|
The fd moves out of scope here anyway, so we should close it properly
instead of leaking it, which would trickle down to dpkg maintainer scripts.
Closes: 767774
|
|
While on Linux files are created in /tmp with $USER:$USER, on my
kfreebsd test machine they are created with $USER:root, so we pull some
strings here to make it work on both.
|
|
For a while now we auto-create the last two directories in /var/lib/apt/lists
(similarly for /var/cache/apt/archives), which is very nice for systems having
any of those on tmpfs or other non-persistent storage. It also means,
though, that this creation is affected by the default umask, so for
people with aggressive umasks like 027 the directories will be created
with 750, which locks out all non-root users. That is usually
exactly what we want when this umask is set, but the cache and lib
directories contain public knowledge: there isn't any need to protect
them from viewers, and they render apt completely useless if not
readable.
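A minimal sketch of the idea (not APT's actual helper, and the path is just an example): create the directory and then explicitly relax its mode so an aggressive umask can't make it unreadable for non-root users.

    #include <sys/stat.h>
    #include <sys/types.h>
    #include <errno.h>

    // Hypothetical helper: make sure the directory exists and is
    // world-readable even if the current umask is e.g. 027.
    static bool CreatePublicDirectory(char const *path) {
        if (mkdir(path, 0755) != 0 && errno != EEXIST)
            return false;
        // mkdir's mode is filtered through the umask, so fix it up explicitly.
        return chmod(path, 0755) == 0;
    }

    int main() {
        return CreatePublicDirectory("/tmp/apt-example-lists") ? 0 : 1;
    }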
|
|
Usually they don't provide a lot in terms of what they test, but they
help in covering many lines from strictly anecdotal commands (stats,
moo) and error messages, so that stuff which really needs to be tested,
but isn't, is more visible in coverage reports.
Git-Dch: Ignore
|
|
We create our own directories here and work without root in them, so we
can also test the locking with them, as that is how we usually operate.
Git-Dch: Ignore
|
|
The tool checks for the debconf version before printing the info it
extracts, so that it doesn't extract data which can't be interpreted
before debconf is upgraded. It is only fair to check for this behaviour
in the tests.
Git-Dch: Ignore
|
|
By convention, if I run a tool with --help or --version I expect it to
exit successfully after showing the usage, while if I call it incorrectly
(like without any parameters) I expect the usage message to be shown with
a non-zero exit.
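A minimal C++ sketch of that convention (the exact usage text and failure code are illustrative, not what the tools necessarily use):

    #include <cstring>
    #include <iostream>

    int main(int argc, char **argv) {
        bool const AskedForIt = argc > 1 &&
            (std::strcmp(argv[1], "--help") == 0 || std::strcmp(argv[1], "--version") == 0);
        if (argc <= 1 || AskedForIt) {
            std::cout << "Usage: tool [options] command" << std::endl;
            // Usage shown because the user asked for it: success.
            // Usage shown because the call was wrong: failure.
            return AskedForIt ? 0 : 1;
        }
        // ... real work would start here ...
        return 0;
    }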
|
|
Git-Dch: Ignore
|
|
It is sometimes handy to have an installed package also in the archive,
but this was until now harder than it should be, as you had to duplicate
the lines, which is especially dangerous while writing tests as it
easily happens that these two lines diverge and then the
same-but-different version detection kicks in.
Git-Dch: Ignore
|