|
We still need an API for the targets, so slowly prepare the IndexTargets
so they can take over this job.
Git-Dch: Ignore
|
|
We have two places in the code which need to iterate over targets and do
certain things with them. The first actually creates these targets
for download and the second prepares certain targets for reading.
Git-Dch: Ignore
|
|
First pass at making the acquire system capable of downloading files
based on configuration rather than hardcoded entries. It is now possible
to instruct 'deb' and 'deb-src' sources.list lines to download more than
just Packages/Translation-* and Sources files. Details on how to do that
can be found in the included documentation file.
|
|
If we have a file on disk and the hashes in the new Release file are the
same as in the old one we have on disk, we know that if we ask the server
for the file, we will at best get an IMS hit – at worst the server
doesn't support this and sends us the (unchanged) file, and we have to
run all our checks on it again for nothing. So we can save ourselves
(and the servers) some unneeded requests if we figure this out on our
own.
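A minimal sketch of the idea (the types and names here are illustrative,
not apt's actual classes):

  #include <map>
  #include <string>

  // algorithm name -> hex digest, e.g. "SHA256" -> "9f86d0..."
  using HashList = std::map<std::string, std::string>;

  // If the new Release file lists exactly the hashes we already verified
  // the on-disk file against, a request can at best yield an IMS hit,
  // so the request can be skipped entirely.
  bool CanSkipRequest(HashList const &oldRelease, HashList const &newRelease)
  {
     return not oldRelease.empty() && oldRelease == newRelease;
  }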
|
|
At the moment we only have hashes for the uncompressed pdiff files, but
via the new '$HASH-Download' field in the .diff/Index hashes can be
provided for the .gz compressed pdiff file, which apt will now pick up
and use to verify the download. Now we "just" need buy-in from the
creators of repositories…
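An illustrative sketch of how this could look in a .diff/Index stanza,
with placeholder digests, sizes and patch names (the existing fields
cover the uncompressed patches, the new one the .gz files):

  SHA256-Patches:
   <sha256 of the uncompressed patch> 102400 2015-04-28-0815.28
  SHA256-Download:
   <sha256 of the compressed patch> 20481 2015-04-28-0815.28.gz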
|
|
Git-Dch: Ignore
|
|
The rred parser is very accepting regarding 'invalid' files. Given that
we can't trust the input it might be a bit too relaxed. In any case,
checking for more errors can't hurt given that we support only a very
specific subset of ed commands.
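A rough sketch of the kind of stricter checking meant here (illustrative
only, not rred's actual parser): accept only commands of the form
"<line>[,<line>]<a|c|d>" and reject everything else.

  #include <cctype>
  #include <string>

  bool IsAcceptableEdCommand(std::string const &line)
  {
     size_t pos = 0;
     auto readNumber = [&](unsigned long long &out) -> bool {
        size_t const start = pos;
        while (pos < line.size() && std::isdigit(static_cast<unsigned char>(line[pos])))
           ++pos;
        if (pos == start)
           return false;
        out = std::stoull(line.substr(start, pos - start));
        return true;
     };

     unsigned long long first = 0, second = 0;
     if (not readNumber(first))
        return false;
     second = first;
     if (pos < line.size() && line[pos] == ',')
     {
        ++pos;
        if (not readNumber(second))
           return false;
     }
     // exactly one command character may follow, and only from our subset
     if (pos + 1 != line.size())
        return false;
     char const cmd = line[pos];
     if (cmd != 'a' && cmd != 'c' && cmd != 'd')
        return false;
     return first <= second;
  }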
|
|
rred is responsible for unpacking and reading the patch files in one go,
but we currently only have hashes for the uncompressed patch files, so
the handler used to read the entire patch file before dispatching it to
the worker, which would then read it again – both with an implicit
uncompress. Worse, while the workers operate in parallel, the handler is
the central orchestration unit, so having it busy with work means the
workers do (potentially) nothing.
This means rred is now working with 'untrusted' data, which is bad. Yet,
having the unpack in the handler meant that the untrusted uncompress was
done as root, which isn't any better. Now we have it at least contained
in a binary which we can harden a bit better. In the long run we want
hashes for the compressed patch files, though, to be safe.
|
|
Having every item implement its own code to verify the file(s) it handles
is an error-prone process and easy to break, especially if items move
through various stages (download, uncompress, patching, …). With a giant
rework we centralize (most of) the verification to get a better
enforcement rate and (hopefully) less chance for bugs, but it breaks the
ABI big time in exchange – and as we break it anyway, it is broken even
harder.
It shouldn't affect most frontends as they don't deal with the acquire
system at all or implement their own items, but some do and will need to
be patched (might be an opportunity to use apt on-board material).
The theory is simple: items implement methods to decide if hashes need to
be checked (in this stage) and to return the expected hashes for this
item (in this stage). The verification itself is done in the worker
message passing, which has the benefit that a hashsum error is now a
proper error for the acquire system rather than a Done() which is later
revised to a Failed().
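Sketched as an interface it could look roughly like this (hypothetical
type names, not the exact new ABI):

  #include <string>
  #include <vector>

  struct HashString { std::string Type, Value; };
  using HashStringList = std::vector<HashString>;

  class Item
  {
  public:
     // Do the hashes have to be checked for this item in the current stage?
     virtual bool HashesRequired() const { return true; }
     // Which hashes does the acquire system expect for this item right now?
     virtual HashStringList GetExpectedHashes() const = 0;
     virtual ~Item() = default;
  };
  // The worker message passing compares the hashes reported by the method
  // against GetExpectedHashes() and fails the item on a mismatch.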
|
|
If we e.g. fail hash verification for Packages.xz it is highly unlikely
that we will fare any better with Packages.gz, so we just waste download
bandwidth and time. It also causes us to always fall back to the
uncompressed Packages file, for which the error will finally be reported,
which in turn confuses users, as the file usually doesn't exist on the
mirrors, so a bug in apt is suspected for even trying it…
|
|
Conflicts:
apt-pkg/pkgcache.h
debian/changelog
methods/https.cc
methods/server.cc
test/integration/test-apt-download-progress
|
|
Conflicts:
apt-pkg/deb/dpkgpm.cc
|
|
Add a regression test that reproduces the hang of apt when a
partial file is present.
Git-Dch: ignore
|
|
The apt http code parses Content-Length and Content-Range. For
both headers the variable "Size" is used, and the semantics of
this Size is the total file size. However, Content-Length is not
the entire file size for partial file requests. For servers that
send the Content-Range header first and then the Content-Length
header this can lead to clobbering of Size so that it is less than
the real file size. This may lead to a subsequent passing of a
negative number into the CircleBuf, which leads to an endless
loop that writes data.
Thanks to Anton Blanchard for the analysis and initial patch.
LP: #1445239
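The shape of the fix, sketched with illustrative names (not the actual
http.cc/server.cc code): keep the total file size separate from the
length of the current response, so a partial Content-Length can never
shrink it.

  #include <cstdio>
  #include <cstdlib>
  #include <string>

  struct ServerState
  {
     unsigned long long TotalFileSize = 0;  // size of the complete file
     unsigned long long StartPos = 0;       // offset this 206 response starts at
  };

  void ParseHeaderLine(ServerState &S, std::string const &Tag, std::string const &Val)
  {
     if (Tag == "Content-Range")
     {
        unsigned long long start = 0, end = 0, total = 0;
        if (std::sscanf(Val.c_str(), "bytes %llu-%llu/%llu", &start, &end, &total) == 3)
        {
           S.StartPos = start;
           S.TotalFileSize = total;          // the size of the whole file
        }
     }
     else if (Tag == "Content-Length")
     {
        // For a ranged response this is only the remaining bytes, so it must
        // not overwrite a total we already learned from Content-Range.
        if (S.TotalFileSize == 0)
           S.TotalFileSize = std::strtoull(Val.c_str(), nullptr, 10);
     }
  }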
|
|
Valid-Until protects us from long-living downgrade attacks, but not all
repositories have it and an attacker could still use older but still
valid files to downgrade us. While this makes it sound like a security
improvement now, it is a bit theoretical at best, as an attacker with the
capabilities to pull this off could just as well always keep us days
(but within the valid period) behind and always knows which state we
have, as we tell him with the If-Modified-Since header. This is also why
this is 'silently' ignored and treated as an IMSHit rather than screamed
at the user, as it can at best be an annoyance for attackers.
An error here would otherwise be 'regularly' encountered by users through
out-of-sync mirrors serving a single run (e.g. load balancer) or across
two consecutive runs, so it would just teach people to ignore it.
That said, most of the code churn is caused by enforcing this additional
requirement. Crisscross from InRelease to Release.gpg is e.g. very
unlikely in practice, but if we ignored it an attacker could sidestep it
this way.
|
|
Not all servers we are talking to support If-Modified-Since and some are
not even sending Last-Modified for us, so in an effort to detect such
hits we run a hashsum check comparing the 'old' file with the 'new' one.
We get the hashes for the 'new' file for "free" from the methods anyway
and hence just need to calculate the old ones.
This allows us to detect hits even with unsupported servers, which in
turn means we benefit from all the new hit behavior here as well.
|
|
If we have the expected hashes we can use them to check whether the file
we have in partial/ and got a 416 for is the expected file. We detected
this with a same-size check before, but not every server sends a good
Content-Range header with a 416 response.
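A small sketch of the idea (hypothetical helper and names): on a 416,
verify the partial file against the expected hash instead of relying on
size or Content-Range alone.

  #include <string>

  enum class RangeResult { AlreadyComplete, RestartDownload };

  // 'HashOfFile' stands in for whichever hashing helper is available.
  RangeResult On416(std::string const &partialFile,
                    std::string const &expectedHash,
                    std::string (*HashOfFile)(std::string const &))
  {
     if (HashOfFile(partialFile) == expectedHash)
        return RangeResult::AlreadyComplete;   // treat it as a finished download
     return RangeResult::RestartDownload;      // truncate and fetch from scratch
  }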
|
|
While it is mostly busywork to rewrite all instances, it actually fixes
bugs, as the data storage used by the new method is std::string rather
than a char*, the latter mostly created by c_str() from a std::string
which the caller has to ensure stays in scope – something apt-ftparchive
actually didn't ensure and instead relied on copy-on-write behavior,
which C++11 forbids and hence the new default gcc ABI doesn't use.
|
|
TFRewrite is okay, but it has obscure limitations (256 tags), even more
obscure bugs (the order for renames is defined by the old name) and the
interface is very C-style, encouraging bad usage like we do in
apt-ftparchive, passing massive amounts of c_str() from std::strings in.
The old style is marked as deprecated accordingly. The next commit will
fix all places in the apt code to no longer use the old style.
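Roughly, the std::string-based style looks like this (illustrative types,
not the exact replacement API): rules are explicit, there is no fixed tag
limit, and a rename names both the old and the new tag.

  #include <string>
  #include <utility>
  #include <vector>

  struct TagRewrite
  {
     enum class Op { Rewrite, Rename, Remove } Action;
     std::string Name;   // tag to act on
     std::string Data;   // new value (Rewrite) or new tag name (Rename)
  };

  using TagSection = std::vector<std::pair<std::string, std::string>>;

  void RewriteSection(TagSection &section, std::vector<TagRewrite> const &rules)
  {
     for (auto const &rule : rules)
        for (auto it = section.begin(); it != section.end(); ++it)
           if (it->first == rule.Name)
           {
              switch (rule.Action)
              {
                 case TagRewrite::Op::Rewrite: it->second = rule.Data; break;
                 case TagRewrite::Op::Rename:  it->first  = rule.Data; break;
                 case TagRewrite::Op::Remove:  section.erase(it);      break;
              }
              break;   // rule handled, continue with the next one
           }
  }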
|
|
dpkg and dak know various field names and order them in their output,
while we have yet another order and have to play catch-up with them, as
we are sitting between chairs here and neither order is ideal for us
either.
A little testcase is from now on supposed to help ensure that we do not
deviate too far from the fields dpkg knows and how it orders them.
|
|
dpkg -l < 1.16.2 loads the available file and hence sees a package which
later versions do not see, leading to failures on travis-ci.
The different versions also have slightly different messages.
Git-Dch: Ignore
|
|
If the priority of a generic pin is 0, it gets a value by strange-looking
rules; if the pin is specific, the rules are at least not strange, but
the value 989 is a magic number without any direct meaning… Both cases
never happen in practice though, as the parsing skips such entries with a
warning, so there is always a priority != 0 and this code is therefore
never used.
|
|
The documentation says this, but the code only agreed while evaluating
specific packages, not generic ones: these needed a pin above 1000 to
have the same effect.
The code causing this makes references to a 'second pseudo status file',
but nowhere is it explained what this might stand for and/or what it was,
so we do the only reasonable thing: remove all references and do as
documented.
|
|
Git-Dch: Ignore
|
|
Especially pdiff-enhanced downloads have the tendency to fail for
various reasons from which we can recover, and even a successful download
used to leave the old unpatched index in partial/.
By adding a new method responsible for making the transaction of an
individual file happen we can add specialisations, especially for abort
cases, to deal with the cleanup.
This also helps in keeping the compressed indexes around if another
index failed, instead of keeping the decompressed files, which we
wouldn't pick up in the next call.
|
|
The fix for #777760 causes packages of foreign (and the native)
architectures to be created correctly, but invalidates (like the
previously existing, but policy-forbidden, architecture-less packages
we had to support for some upgrade scenarios) the assumption that the
first (and only) package in the cache for a single-architecture system
must be the package for the native architecture (as, where should the
other architectures come from, right? Wrong.).
Depending on the order of parsing sources, more or fewer packages can be
affected by this. The effects are strange (for apt it mostly affects
simulation/debug output, but also apt-mark on these specific packages),
which complicates debugging, but relatively harmless if understood, as
most actions do not need direct named access to packages.
The problem is fixed by removing the single-arch special casing in the
paths which had it (Cache.FindPkg), so they use the same code as
multi-arch systems, which use it as a wrapper for Grp.FindPkg.
Note that single-arch system code was using Grp.FindPkg before as well
if a Grp structure was handily available, so we don't introduce new
untested code here: we remove more brittle special cases which are less
tested instead (this was planned to be done for Stretch anyhow).
Note further that the method with the assumption itself isn't fixed. As
it is a private method I opted for declaring it deprecated instead and
removing all its call positions. As it is private no one can call this
method legally (thanks to how C++ works by default it's still an exported
symbol though) and fixing it basically means reimplementing code we
already have in Grp.FindPkg.
Removing rather than fixing hence seems like a good solution.
Closes: 782777
Thanks: Axel Beckert for testing
|
|
Conflicts:
apt-pkg/acquire-item.cc
cmdline/apt-key.in
methods/https.cc
test/integration/test-apt-key
test/integration/test-multiarch-foreign
|
|
If we get an IMSHit for the Transaction-Manager (= the InRelease file, or
the still-supported fallback Release + Release.gpg combo) we can
assume that every file we would queue based on this manager, but already
have locally, is current and hence would get an IMSHit too. We therefore
save ourselves and the server the trouble and skip the queuing in this
case.
Besides speeding up repetitive executions of 'apt-get update' this way,
we also avoid hitting hashsum errors if the indexes are in fact already
updated but the Release file isn't yet, as is the case on well-behaving
mirrors, since the Release file is updated last.
The implementation is a bit harder than the theory makes it sound, as we
still have to keep reverifying the Release files (e.g. to detect
now-expired ones to avoid an attacker being able to silently stale us)
and have to handle cases in which the Release file hits, but some indexes
aren't present (e.g. the user added a new foreign architecture).
|
|
It's a bit unpredictable which permissions and owners we will encounter
on a CD-ROM (or a USB stick, as apt-cdrom is responsible for those too),
so we have to ensure in this codepath as well that everything is nicely
set up without waiting for an 'apt-get update' to fix up the (potential)
mess.
|
|
We do this in HTTP already to give the CPU some exercise while the disk
is heavily spinning (or flashing?) to store the data, avoiding the need
to reread the entire file later on to calculate the hashes – which
happens outside of the eyes of progress reporting, so you might have
ended up with a bunch of https workers 'stuck' at 100% while they were
busy calculating hashes.
This is a bummer for everyone using apt as a connection speedtest, as the
https method works slower now (not really, it just isn't reporting done
too early anymore).
|
|
Methods get told which hashes are expected by the acquire system, which
means we can use this list to restrict what we calculate in the methods,
as any extra we calculate is wasted effort since we can't compare it
with anything anyway.
Adding support for a new hash algorithm is therefore 'free' now, and if
an algorithm is no longer provided in a repository for a file, we
automatically stop calculating it.
In practice this results in a speed-up in Debian as we don't have SHA512
here (so far), so we practically stop calculating it.
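A tiny sketch of the effect (illustrative names): only the algorithms
named in the expected-hash list handed to the method are actually run.

  #include <algorithm>
  #include <string>
  #include <vector>

  std::vector<std::string> ActiveAlgorithms(std::vector<std::string> const &expected,
                                            std::vector<std::string> const &supported)
  {
     std::vector<std::string> active;
     for (auto const &algo : supported)   // e.g. MD5Sum, SHA1, SHA256, SHA512
        if (std::find(expected.begin(), expected.end(), algo) != expected.end())
           active.push_back(algo);        // everything not expected is skipped
     return active;
  }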
|
|
Servers which advertise that they close the connection get the 'Closes'
encoding flag, but this conflicts with servers which respond with a
transfer-encoding (e.g. chunked encoding), as it is saved in the same
flag. We have a better flag for the keep-alive (or not) of the connection
anyway, so we check this instead of the encoding.
In practice this is not much of a problem, as the real servers we talk to
are HTTP/1.1 servers (with keep-alive) and there isn't much point in
doing chunked encoding if you are going to close anyway, but our simple
testserver stumbles over this if pressed, and it's a bit cleaner, too.
Git-Dch: Ignore
|
|
The file method sends information about the uncompressed file, if it can
find it, in addition to the compressed file. This was done only for gzip
so far, but we support more compression types. That this information
isn't used a lot is a different story.
Git-Dch: Ignore
|
|
The worker expects the methods to tell it when they start or finish
downloading a file. Various pieces of information are passed along in
this report, including the (expected) filesize. https was using a
"global" struct for reporting, which made it 'reuse' incorrect values in
some cases, like a non-existent InRelease falling back to Release{,.gpg}
and resulting in a size-mismatch warning. By reducing the scope and
redesigning how the values are set we can fix this and related issues.
Closes: 777565, 781509
Thanks: Robert Edmonds and Anders Kaseorg for the initial patches
|
|
We just need it for unit tests and our debian/rules file actually skips
calling them if nocheck is given… but this fails anyhow as we declared a
hard-dependency on it. Demoting the error to a warning in configuration
and adding a test in the 'make test' path with a friendly message allows
nocheck to be useful again.
(Running unit tests is fully encouraged of course, but bootstrappers and
co do not need to be burdened with this stuff)
|
|
On single-arch systems the parsing was creating group names like
'apt:amd64' even though it should be 'apt', with a package in it
belonging to architecture amd64. The result for foreign architectures
was as expected: the dependency isn't satisfiable, but for the native
architecture it means the wrong package (à la apt:amd64:amd64) is linked,
so this is also not satisfiable, which is very much not expected.
No longer excluding single-arch from this codepath allows the generation
of the correct links, which still link to non-existing packages for
foreign dependencies, but natives link to the expected native package,
just as if no architecture was given.
For negative arch-specific dependencies à la Conflicts the matter was
worse, as apt will believe there isn't a Conflict to resolve, tricking it
into calculating a solution dpkg will refuse.
Architecture-specific positive dependencies are rare in jessie – the
only one in amd64 main is foreign –, negative dependencies do not even
exist. Neither class has a native specimen, so no package in jessie is
affected by this bug, but it might be interesting for stretch upgrades.
This also means the regression potential is very low.
Closes: 777760
|
|
In #780028 we were discussing how the or-group order should be more
important than keep-back decisions of 'upgrade'. We have this behaviour,
but to ensure it stays this way let's add a test for it.
Git-Dch: Ignore
|
|
This isn't testing much of the 'complex' parts,
but it's better than nothing for now.
Git-Dch: Ignore
|
|
Working with strings C-style is complicated and error-prone,
so by converting to C++ style we gain some simplicity and
avoid buffer overflows in later extensions.
Git-Dch: Ignore
|
|
gnupg is case-insensitive about keyids, so back when apt-key called it
directly any keyid was accepted, but now that we work more with the
keyid ourselves we regressed to requiring uppercase keyids by accident.
This is also inconsistent with other apt-key commands which still use
gnupg directly. A single case-insensitive grep and we are fine again.
Closes: 781696
|
|
|
|
As part of the “reproducible builds” effort [1], we have noticed that
apt could not be built reproducibly.
One issue is that it uses the __DATE__ and __TIME__ macros of the C
preprocessor to display the time of build in the online help. We believe
this information not to be really useful to users as they can always
look at the package data and metadata to figure it out.
The attached patch simply removes this information. All
non-documentation packages can then be built reproducibly with our
current experimental framework.
[David: changed the string slightly to be untranslatable as well]
Closes: 774342
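The gist of the change, sketched (the actual strings in apt differ):

  #include <iostream>

  int main()
  {
     // Before: __DATE__/__TIME__ embed the build time, so two otherwise
     // identical builds produce different binaries.
     // std::cout << "compiled on " << __DATE__ << " " << __TIME__ << "\n";

     // After: the timestamp is simply dropped from the online help.
     std::cout << "apt (amd64)\n";
     return 0;
  }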
|
|
We use test{success,failure} all over the place in the framework now, so
it's only consistent to do this in the situations in which we test for
specific output as well.
Git-Dch: Ignore
|
|
Bug #778375 uncovered that https wasn't properly integrated into the
class family tree of http as it was supposed to be, leading to a NULL
pointer dereference. Fixing this 'properly' was deemed too much diff for
practically no gain that late in the release, so commit
0c2dc43d4fe1d026650b5e2920a021557f9534a6 just fixed the symptom, while
this commit here fixes the cause and adds a test.
|
|
|
|
We are already able to run maintainer scripts if needed, but we need to
set ADMINDIR to be able to call dpkg tools like dpkg-trigger inside of
them.
Git-Dch: Ignore
|
|
Commit 9ec748ff103840c4c65471ca00d3b72984131ce4 from Feb 23 last year
adds a version check after 8daf68e366fa9fa2794ae667f51562663856237c had,
8 days earlier, added negative points for breaks/conflicts, with the
intent that only dependencies which are satisfied propagate points
(aka: old conflicts do not).
The implementation was needlessly complex and flawed, preventing
positive dependencies from gaining points like they did before these
commits and making library transitions harder instead of simpler. It
worked out anyhow most of the time, out of pure 'luck' (and other ways of
gaining points), or got misattributed to a temporary hiccup.
Closes: 774924
|
|
Your mileage may vary, but don't worry: There is more than one way to
do it, but our one size fits all is not a bigger hammer, but an entire
roundhouse kick! So brace yourself for the tl;dr: The limit is gone.*
Beware: This also fixes the problem that a double newline is
unconditionally added 'later', which is an overcommitment in case
the dsc filesize is limit-2 <= x <= limit.
* limited to numbers fitting into an unsigned long long.
Closes: 774893
|
|
The issue was that https.cc never called URIStart(); one way to
detect this is that no download progress is generated without
this call. The test now checks for this and as a side-effect will
also ensure that we do not accidentally break download progress
reporting and Acquire::{http,https}::Dl-Limit.
|
|
Commit 299aea924ccef428219ed6f1a026c122678429e6 fixes the problem of
not logging terminal output in case stdin & stdout are not a terminal.
The problem is that we then try to pass through stdin content by
reading from the apt process's stdin and writing it to the stdin of the
child (dpkg), which works great for users who can control themselves,
but pipes and co are a bit less forgiving, causing us to pass everything
to the first child process – and if the sending part of the pipe is
e.g. 'yes' we will never see the end of it (as the pipe is full at some
point and further writing blocks).
There is a simple solution for that of course: if stdin isn't a terminal,
we use the apt process's stdin as stdin for the child directly (we don't
do this if it is a terminal, to be able to save the typed input in the
log).
Closes: 773061
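A sketch of the resulting child setup (illustrative, not the dpkgpm.cc
code verbatim):

  #include <unistd.h>

  // Called in the forked child that will exec dpkg. 'slavePtyFd' is the
  // pty used to capture terminal interaction for term.log.
  void SetupDpkgStdin(int slavePtyFd)
  {
     if (isatty(STDIN_FILENO) == 1)
        dup2(slavePtyFd, STDIN_FILENO);   // typed input also ends up in the log
     // otherwise keep the inherited stdin (a pipe, a file, /dev/null, …) so
     // apt never has to copy – and possibly stall on – the data itself
  }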
|