Age | Commit message | Author |
|
This could allow an attacker to mark a package as installed in a
remote package index, as long as the package was not listed in
the dpkg status file.
This way, an attacker could force the installation of a package
during a dist-upgrade by providing two packages in an index:
an older one marked as installed and a newer one - apt would then
"upgrade" to the newer version.
|
|
In 50ef3344c3afaaf9943142906b2f976a0337d264 (and similar for other
branches), while 'fixing' the edge case of a package being in multiple
sections (e.g. moved from libs to oldlibs in newer releases) I
accidentally broke the feature completely by operating on the
package itself and no longer on its dependencies…
The behaviour isn't ideal in multiple ways, which we are hopefully able
to fix with new ideas as mentioned in the buglog, but until then the
functionality of this "hack" should be restored.
Reported-By: Raphaël Hertzog <hertzog@debian.org>
Tested-By: Adam Conrad <adconrad@ubuntu.com>
Closes: 793360
LP: 1479207
Thanks: Raphaël Hertzog and Adam Conrad for detailed reports and initial patches
|
|
Which, in this cherry-pick, is actually 7491bed8.
Git-Dch: Ignore
|
|
Git-Dch: Ignore
|
|
Conflicts:
apt-pkg/deb/dpkgpm.cc
|
|
Add a regression test that reproduces the hang of apt when a
partial file is present.
Git-Dch: ignore
|
|
The apt http code parses Content-Length and Content-Range. For
both headers the variable "Size" is used, and the semantic of this
Size is the total file size. However, Content-Length is not the
entire file size for partial file requests. For servers that
send the Content-Range header first and then the Content-Length
header this can lead to clobbering of Size so that it is less than
the real file size. This may lead to a subsequent passing of a
negative number into the CircleBuf, which leads to an endless
loop that writes data.
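A minimal sketch of the distinction (an illustrative helper, not the apt
http code): for a 206 reply the total size has to come from Content-Range,
while its Content-Length only covers the transmitted slice and therefore
must never shrink an already known total.

    #include <cstdio>
    #include <string>

    // Parse "Content-Range: bytes 1024-2047/4096" and return the total (4096).
    static unsigned long long TotalFromContentRange(std::string const &Value)
    {
       unsigned long long Start = 0, End = 0, Total = 0;
       if (std::sscanf(Value.c_str(), "bytes %llu-%llu/%llu",
                       &Start, &End, &Total) != 3)
          return 0;
       return Total;
    }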
Thanks to Anton Blanchard for the analysis and initial patch.
LP: #1445239
|
|
The fix for #777760 causes packages of foreign (and the native)
architectures to be created correctly, but invalidates (like the
previously existing, but policy-forbidden, architecture-less packages
we had to support for some upgrade scenarios) the assumption that the
first (and only) package in the cache for a single-architecture system
must be the package for the native architecture (as, where should the
other architectures come from, right? Wrong.).
Depending on the order of parsing sources, more or fewer packages can be
affected by this. The effects are strange (for apt it mostly affects
simulation/debug output, but also apt-mark on these specific packages),
which complicates debugging, but relatively harmless if understood, as
most actions do not need direct named access to packages.
The problem is fixed by removing the single-arch special casing in the
paths which had it (Cache.FindPkg), so they use the same code as
multi-arch systems, which use it as a wrapper for Grp.FindPkg.
Note that single-arch system code was using Grp.FindPkg before as well
if a Grp structure was handily available, so we don't introduce new
untested code here: we remove the more brittle and less tested special
cases instead (this was planned to be done for Stretch anyhow).
Note further that the method carrying the assumption itself isn't fixed.
As it is a private method I opted for declaring it deprecated instead
and removing all its call positions. As it is private no one can call
this method legally (thanks to how C++ works it is by default still an
exported symbol, though) and fixing it basically means reimplementing
code we already have in Grp.FindPkg.
Removing rather than fixing it hence seems like a good solution.
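A minimal sketch of the now-shared lookup path, assuming the libapt-pkg
cache iterators (FindGrp/FindPkg); the wrapper name is made up:

    #include <apt-pkg/pkgcache.h>
    #include <string>

    // Look a package up via its group, the same way on single-arch and
    // multi-arch systems, instead of assuming the first package of a group
    // is the native one.
    static pkgCache::PkgIterator FindPkgViaGroup(pkgCache &Cache,
          std::string const &Name, std::string const &Arch)
    {
       pkgCache::GrpIterator const Grp = Cache.FindGrp(Name);
       if (Grp.end() == true)
          return pkgCache::PkgIterator(); // not found
       return Grp.FindPkg(Arch);
    }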
Closes: 782777
Thanks: Axel Beckert for testing
|
|
On single-arch systems the parsing was creating group names like
'apt:amd64' even though it should be 'apt' with a package in it belonging
to architecture amd64. The result for foreign architectures was as
expected: the dependency isn't satisfiable, but for the native
architecture it means the wrong package (à la apt:amd64:amd64) is linked,
so this is also not satisfiable, which is very much not expected.
No longer excluding single-arch from this codepath allows the generation
of the correct links, which still link to non-existing packages for
foreign dependencies, while natives link to the expected native package
just as if no architecture was given.
For negative arch-specific dependencies such as Conflicts the matter was
worse, as apt will believe there isn't a Conflict to resolve, tricking it
into calculating a solution dpkg will refuse.
Architecture-specific positive dependencies are rare in jessie – the
only one in amd64 main is foreign – and negative ones do not even
exist. Neither class has a native specimen, so no package in jessie is
affected by this bug, but it might be interesting for stretch upgrades.
This also means the regression potential is very low.
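A small illustration of the intended split (the helper is made up, not
apt's actual parser):

    #include <string>
    #include <utility>

    // "apt:amd64" should end up as group/package "apt" with the requested
    // architecture "amd64" - not as a group literally named "apt:amd64".
    static std::pair<std::string, std::string>
    SplitDependencyTarget(std::string const &Target)
    {
       std::string::size_type const Colon = Target.find(':');
       if (Colon == std::string::npos)
          return std::make_pair(Target, std::string()); // no arch given
       return std::make_pair(Target.substr(0, Colon),   // package name
                             Target.substr(Colon + 1)); // architecture
    }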
Closes: 777760
|
|
gnupg is case-insensitive about keyids, so back when apt-key called it
directly any keyid was accepted, but now that we work more with the
keyid ourselves we accidentally regressed into requiring uppercase keyids.
This is also inconsistent with other apt-key commands which still use
gnupg directly. A single case-insensitive grep and we are fine again.
Closes: 781696
|
|
|
|
Commit 9ec748ff103840c4c65471ca00d3b72984131ce4 from Feb 23 last year
adds a version check after 8daf68e366fa9fa2794ae667f51562663856237c,
added 8 days earlier, introduced negative points for breaks/conflicts
with the intent that only dependencies which are satisfied propagate
points (aka: old conflicts do not).
The implementation was needlessly complex and flawed: it prevented
positive dependencies from gaining points like they did before these
commits, making library transitions harder instead of simpler. It worked
out most of the time anyhow out of pure 'luck' (and other ways of gaining
points), or failures got misattributed to a temporary hiccup.
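The intent can be sketched like this (purely illustrative logic, not the
pkgProblemResolver implementation):

    // Positive dependencies always propagate points to their target, while
    // Breaks/Conflicts only cost points while they are actually satisfied -
    // an old, no longer applicable Conflicts must not.
    enum DepKind { Depends, PreDepends, Breaks, Conflicts };

    static void PropagateScore(DepKind Kind, bool SatisfiedNow,
                               int &TargetScore, int Points)
    {
       if (Kind == Depends || Kind == PreDepends)
          TargetScore += Points;
       else if (SatisfiedNow)
          TargetScore -= Points;
    }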
Closes: 774924
|
|
Your mileage may vary, but don't worry: There is more than one way to
do it, but our one size fits all is not a bigger hammer, but an entire
roundhouse kick! So brace yourself for the tl;dr: The limit is gone.*
Beware: This also fixes the problem that a double newline is
unconditionally added 'later', which is an overcommitment in case
the dsc filesize is limit-2 <= x <= limit.
* limited to numbers fitting into an unsigned long long.
Closes: 774893
|
|
The issue was that https.cc never called URIStart(); one way to
detect this is that no download progress is generated without
this call. The test now checks for this and as a side effect will
also ensure that we do not accidentally break download progress
reporting and Acquire::{http,https}::Dl-Limit.
|
|
Commit 299aea924ccef428219ed6f1a026c122678429e6 fixes the problem of
not logging the terminal in case stdin & stdout are not a terminal. The
problem is that we are then trying to pass through stdin content by
reading from the apt-process stdin and writing it to the stdin of the
child (dpkg), which works great for users who can control themselves,
but pipes and co are a bit less forgiving, causing us to pass everything
to the first child process; if the sending part of the pipe is
e.g. 'yes' we will never see the end of it (as the pipe is full at some
point and further writing blocks).
There is a simple solution for that of course: if stdin isn't a terminal,
we use the apt-process stdin as stdin for the child directly (we don't do
this if it is a terminal, to be able to save the typed input in the log).
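A minimal POSIX sketch of the idea (illustrative, not the actual dpkgpm.cc
code):

    #include <unistd.h>

    // Only relay (and log) stdin ourselves when it is a terminal; otherwise
    // hand our own stdin straight to the child, so a pipe feeding us (e.g.
    // 'yes |') is consumed by dpkg instead of piling up in apt.
    static void SetupChildStdin(int pty_slave_fd)
    {
       if (isatty(STDIN_FILENO) == 1)
          dup2(pty_slave_fd, STDIN_FILENO); // terminal: input via pty, gets logged
       // else: keep the inherited stdin, the child reads the pipe directly
    }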
Closes: 773061
|
|
dpkg now checks for dependencies before running triggers, so that
packages can end up in trigger states (especially those we are not
touching at all with our calls) after apt is done running.
The solution to this is trivial: just tell dpkg to configure everything
after we have (supposedly) configured everything already. In the worst
case this means dpkg will have to run a bunch of triggers; usually it
will just do nothing, though.
The code to make this happen was already available, so we just flip a
config option here to cause it to be run. This way we can keep
pretending that triggers are an implementation detail of dpkg.
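In code terms this boils down to something like the following sketch; the
option name DPkg::ConfigurePending is the one apt.conf knows for scheduling
the final 'dpkg --configure --pending' call, while the helper wrapping it
is made up:

    #include <apt-pkg/configuration.h>

    // Make apt end its run by asking dpkg to configure whatever is still
    // pending (including triggers): dpkg --configure --pending
    void EnableConfigurePending()
    {
       _config->Set("DPkg::ConfigurePending", "true");
    }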
--triggers-only would supposedly work as well, but --configure is more
robust with regard to future changes in dpkg and something we will
hopefully make use of in future versions anyway (as was planned at the
time this and related options were implemented).
Note that dpkg currently has a workaround implemented to allow upgrades
to jessie to be clean, so that the test works before and after. Also
note that the test (compared to the one in the bug) drops the await test
as it is considered a loop by dpkg now.
Closes: 769609
|
|
If we have no controlling terminal, opening a terminal will make it our
controlling terminal, which is a serious problem if this happens to
be the pseudo terminal we created to run dpkg in, as we will close this
terminal at the end, hanging ourselves up in the process…
The offending open is the one we do to have at least one slave fd open
all the time, but for good measure we also apply the flag to the slave
fd opened in the child process, as we set the controlling terminal
explicitly there.
This is a regression from 150bdc9ca5d656f9fba94d37c5f4f183b02bd746 with
the slight twist that this use case was silently broken before in that it
wasn't logging the output in term.log (as a pseudo terminal wasn't
created).
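The relevant flag in a minimal sketch (illustrative, not the actual open
call in dpkgpm.cc):

    #include <fcntl.h>

    // Open the pty slave without adopting it as our controlling terminal;
    // only the child that runs dpkg requests a controlling terminal, and it
    // does so explicitly.
    static int OpenSlaveWithoutController(char const *slave_name)
    {
       return open(slave_name, O_RDWR | O_NOCTTY);
    }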
Closes: 772641
|
|
Real webservers (like apache) actually send an error page with a 416
response, but our client didn't expect it, leaving the page on the socket
to be parsed as the response to the next request (http) or as file content
(https), which isn't what we want at all… The symptom is a "Bad header
line" as HTML usually doesn't parse that well as an http header.
This manifests itself e.g. if we have a complete file (or larger) in
partial/ which isn't discarded by If-Range as the server doesn't support
it (or it is just newer, think: mirror rotation).
It is a sort-of regression of 78c72d0ce22e00b194251445aae306df357d5c1a,
which removed the filesize - 1 trick, but this had its own problems…
To properly test this our webserver gains the ability to reply with
transfer-encoding: chunked as most real webservers will use it to send
the dynamically generated error pages.
(The tests and their binary helpers had to be slightly modified to
apply, but the patch to fix the issue itself is unchanged.)
Closes: 768797
|
|
apt-key, given a long keyid, reports just "OK" all the time, but doesn't
delete the mentioned key as it doesn't find it.
Note: In debian/experimental this was closed with
29f1b977100aeb6d6ebd38923eeb7a623e264ffe which just added the testcase
as the rewrite of apt-key had fixed this as well.
Closes: 754436
|
|
We run dpkg on its own pty, so we can log its output and have our own
output around it (like the progress bar), while also allowing debconf
and configfile prompts to happen.
In commit 223ae57d468fdcac451209a095047a07a5698212 we changed to
constantly reopening the slave for kfreebsd. This has the side effect
though that in some cases slave and master will lose their connection on
linux, so that no output is passed along anymore. We fix this by always
having an fd referencing the slave open (linux), but we don't use it
(kfreebsd).
Failing to get our PTY up and running has many (bad) consequences
including (but not limited to, nor all at once or in every case) garbled
output, no output, no logging, a (partial) mixture of the previous items, …
This commit therefore also reshuffles quite a bit of the creation
code to get especially the output part up and running on linux and the
logging for kfreebsd.
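A minimal POSIX sketch of the 'always keep one slave fd open' part
(illustrative only, not the dpkgpm.cc implementation):

    #include <fcntl.h>
    #include <stdlib.h>

    // Create the master and immediately open one fd to the slave which is
    // kept (but never used) for the whole run, so the pty pair stays alive
    // between dpkg invocations on linux.
    static int keepalive_slave = -1;

    static int CreateDpkgPty()
    {
       int const master = posix_openpt(O_RDWR | O_NOCTTY);
       if (master == -1 || grantpt(master) == -1 || unlockpt(master) == -1)
          return -1;
       keepalive_slave = open(ptsname(master), O_RDWR | O_NOCTTY);
       return master;
    }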
Note that the testcase tries to cover some cases, but this is an
interactivity issue so only interactive usage can really be a good test.
Closes: 765687
|
|
The fd goes out of scope here anyway, so we should close it properly
instead of leaking it, which would trickle down to dpkg maintainer scripts.
Closes: 767774
|
|
The conversion to accepting only relevant options for each command
missed another one, so we add it again even though the use case might
very well be equally well served by --print-uris.
Closes: 742578
|
|
This used to work before we implemented a stricter commandline parser
and e.g. the dd-schroot-cmd command constructs commandlines like this.
Reported-By: Helmut Grohne
|
|
Collect all hashes we can get from the source record and put them into a
HashStringList so that 'apt-get source' can use it instead of always
using the MD5sum.
We therefore also deprecate the MD5 struct member in favor of the list.
While at it, the parsing of the Files field is enhanced so that records
which miss "Files" (aka MD5 checksums) are still searched for other
checksums, as they include just as much data, just not with a nice and
catchy name.
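A small usage sketch, assuming the HashString/HashStringList interface from
apt-pkg/hashes.h (the checksum values are made up):

    #include <apt-pkg/hashes.h>
    #include <string>

    // Collect whatever checksums a source record offers into one list
    // instead of hard-wiring MD5.
    static HashStringList CollectRecordHashes()
    {
       HashStringList hashes;
       hashes.push_back(HashString("SHA256", std::string(64, 'a'))); // Checksums-Sha256
       hashes.push_back(HashString("MD5Sum", std::string(32, 'b'))); // Files, if present
       return hashes;
    }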
This is a cherry-pick of 1262d35 with some dirty tricks to preserve ABI.
LP: 1098738
|
|
APT supports more than just one HashString and even allows enforcing
the usage of a specific hash. This class is intended to help with
storing and passing around HashStrings.
What this cherry-pick changes compared to
f4c3850ea335545e297504941dc8c7a8f1c83358 is the un-const-ification of
HashType(). The point of this commit is
adding infrastructure for the next one. All by itself, it just adds new
symbols.
Git-Dch: Ignore
|
|
Regression from merging 801745284905e7962aa77a9f37a6b4e7fcdc19d0 and
b0f4b486e6850c5f98520ccf19da71d0ed748ae4. While each is fine by itself,
in the merge the part fixing the filename is skipped if a cdrom source is
encountered, so that our list-cleanup removes what seem to be orphaned
files.
Closes: 765458
|
|
|
|
Git-Dch: Ignore
|
|
|
|
Git-Dch: Ignore
|
|
When we do a ReverifyAfterIMS() we use the copy: method to
verify the hashes again. If the user uses -o Dir=./something/relative
this fails because copy.cc uses the URI class, which strips
away the leading relative part. Not using URI fixes this.
Closes: #762160
|
|
|
|
|
|
Do not run ReverifyAfterIMS() for local file URIs as this
causes apt to mess around in the file:/// uri space. This is
wrong in itself, but it will also cause an incorrect verification
failure when the archive and the lists directory are on different
partitions, as rename() cannot move files across filesystem boundaries.
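A generic illustration of the rename() limitation mentioned above (not apt
code):

    #include <cerrno>
    #include <cstdio>

    // rename() only works within a single filesystem: moving a file between
    // the archives and the lists partition this way fails with EXDEV.
    static bool CrossedFilesystemBoundary(char const *from, char const *to)
    {
       if (std::rename(from, to) == 0)
          return false;       // same filesystem, moved fine
       return errno == EXDEV; // different filesystems
    }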
|
|
incorrect invalidating of unauthenticated data (CVE-2014-0488)
incorrect verification of 304 reply (CVE-2014-0487)
incorrect verification of Acquire::Gzip indexes (CVE-2014-0489)
|
|
Most pagers are nice and default to running non-interactively if they
aren't connected to a terminal, and we relied on that. On ci.debian.net
the configured pager is printing a header out of nowhere though, so if
we are printing to a non-terminal we call "cat" instead.
In the rework we also "remove" the dependency on sensible-utils insofar
as we call some alternatives if calling the utils fails.
This seems to be the last problem preventing a "PASS" status on
ci.debian.net, so we close the associated bug report.
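A minimal sketch of the fallback decision (illustrative; the real code also
consults the configuration):

    #include <unistd.h>

    // Page only when stdout is a terminal; otherwise behave like plain 'cat'
    // so non-interactive consumers (such as ci.debian.net) get plain output.
    static char const * ChoosePager()
    {
       if (isatty(STDOUT_FILENO) != 1)
          return "cat";
       return "sensible-pager"; // with a simple fallback if calling it fails
    }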
Closes: 755040
|
|
APT treats upgrades like installs, and dpkg is very similar in this, but
still prints a slightly different processing message indicating that it
is really an upgrade, which we hadn't parsed so far; this wasn't
really visible as we quickly moved on to a 'known' state.
More problematic was the reinstall case, as apt hadn't recognized this
in the package name detection, so that reinstalls had no progress since
we introduced MultiArch.
|
|
Commit cbcdd3ee9d86379d1b3a44e41ae8b17dc23111d0 removes the space at the
end of the debfile name dpkg sends to us, which we previously had included
in the pmerror message we printed on the statusfd.
Git-Dch: Ignore
|
|
Git-Dch: Ignore
|
|
Instead of trying to inspect /proc and the fds inside, we use "test -t 1"
as this is available and working on kfreebsd as well – not that
something would break if we didn't, but we like color.
Git-Dch: Ignore
|
|
Using 'kfreebsd' here makes the test fail on a kfreebsd system
(obviously), so we just use something totally made up in the hope that
this is less likely to conflict in the future.
Git-Dch: Ignore
|
|
|
|
apt-cache search has always supported this, and in the code for apt there
was a FIXME indicating this should be added here as well, so here we go.
|
|
The "apt list" command was using only the pkgDepCache but not the
pkgPolicy to figure out if a package is upgradable. This lead to
incorrect display of upgradable package when the user used the
policy to pin-down packages. Thanks to Michael Musenbrock for the
initial patch.
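A sketch of the policy-aware check, assuming the libapt-pkg policy
interface (the helper itself is illustrative):

    #include <apt-pkg/pkgcache.h>
    #include <apt-pkg/policy.h>

    // A package is upgradable only if the policy-selected candidate differs
    // from the installed version; a pin to the installed version must not
    // make it show up as upgradable.
    static bool IsUpgradable(pkgCache::PkgIterator const &Pkg, pkgPolicy &Policy)
    {
       if (Pkg->CurrentVer == 0)
          return false; // not installed at all
       pkgCache::VerIterator const Cand = Policy.GetCandidateVer(Pkg);
       return Cand.end() == false && Cand != Pkg.CurrentVer();
    }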
Closes: #753297
|
|
downloadfile()
|
|
Dch-Ignore: true
|
|
|
|
When doing Acquire::http{,s}::Proxy-Auto-Detect, run the auto-detect
command for each host instead of only once. This should make using
"proxy" from libproxy-tools feasible, which can then be used for
PAC-style or other proxy configurations.
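Conceptually this is just a per-host cache around the detection command (a
hypothetical sketch, not the actual method code):

    #include <map>
    #include <string>

    // Cache the detected proxy per host instead of per process, so a
    // PAC-style helper (e.g. "proxy" from libproxy-tools) can answer
    // differently per host; 'detect' stands in for running the configured
    // Proxy-Auto-Detect command.
    static std::string ProxyForHost(std::string const &Host,
          std::string (*detect)(std::string const &Host))
    {
       static std::map<std::string, std::string> cache;
       std::map<std::string, std::string>::const_iterator const hit = cache.find(Host);
       if (hit != cache.end())
          return hit->second;
       return cache[Host] = detect(Host);
    }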
Closes: #759264
|
|
APT supported versioned provides for a long while in an attempt to get
it working with rpm. While this support is old, we can be relatively
sure that it works as versioned provides are used internally to make
Multi-Arch:foreign work.
Previous versions of apt will print a warning indicating that the
versioned provides is ignored, so something declaring "Provides: foo (=
2)" doesn't provide anything.
Note that dpkg only allows an equals relation in the provides line,
as anything else is deemed too complex. apt doesn't support anything
else either, and such support would require potentially big changes.
Closes: 758153
|
|
With the change of SmartConfigure() in git commit 42d51f the ordering
code was trying to re-order dependencies even when this was not needed
at that point in time. Now it will first check all targets of the
given dependency and only if there is no good one will it try to reorder
and unpack/configure as needed.
Closes: LP: #1347721
|