Selecting targets based on the Release they belong to isn't too
unrealistic. In fact, it is assumed to be the most common case, so it is
made the default, especially as this allows us to bundle another thing we
have to be careful with: filenames and only showing targets we have
actually acquired.
Closes: 752702
|
|
We used to read the Release file for each Packages file and store the
data in the PackageFile struct, even though potentially many Packages
(and Translation-*) files could use the same data. The point of the
exercise isn't the duplicated data though: having the Release files as
first-class citizens in the Cache allows us to properly track their
state and also lets us use the information for files which aren't in
the cache, but for which we know the Release file they belong to
(Sources are an example of this).
This modifies the pkgCache structs, especially the PackageFile struct;
depending on how libapt users access the data in these structs this can
mean huge breakage or no visible change. As a single data point:
aptitude seems to be fine with this. Even if there is breakage it is
trivial to fix in a backportable way, while avoiding breakage for
everyone would be a huge pain for us.
Note that not all PackageFile structs have a corresponding ReleaseFile.
In particular the dpkg/status file as well as *.deb files have none. As
these only need an Archive property, the Component property takes over
this duty and the ReleaseFile remains zero. This is also the reason why
it isn't needed nor particularly recommended to switch from PackageFile
to ReleaseFile blindly; sticking with the former is usually the better
option.
|
|
Downloading additional files is only half the job. We still need a way
for external tools to know where the files they requested for download
ended up, given that we don't want them to choose their own location.
'apt-get files' is our answer to this, showing by default information
about each IndexTarget in deb822 format, with the ability to filter the
records based on their lines and an option to change the output format.
The command also serves as an example of how to get at this information
via libapt.
|
|
While it is mostly busywork to rewrite all instances, it actually fixes
bugs, as the data storage used by the new method is std::string rather
than a char*, the latter mostly created by c_str() from a std::string
which the caller has to ensure stays in scope – something apt-ftparchive
actually didn't ensure; it relied on copy-on-write behavior instead,
which C++11 forbids and which the new default GCC ABI therefore no
longer uses.
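A minimal sketch of the lifetime problem described above (generic C++,
not apt code; names are made up for illustration):

    #include <iostream>
    #include <string>

    // Broken: returns a pointer into a std::string that dies at the end
    // of the function, so the caller receives a dangling char*.
    const char *BrokenName() {
       std::string name = std::string("apt-") + "ftparchive";
       return name.c_str();
    }

    // Fixed: return the std::string by value; with the C++11 ABI strings
    // are no longer copy-on-write, so there is no shared buffer to rely
    // on either.
    std::string FixedName() {
       return std::string("apt-") + "ftparchive";
    }

    int main() {
       std::cout << FixedName() << std::endl;     // well-defined
       // std::cout << BrokenName() << std::endl; // undefined behaviour
    }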
|
|
The helper expects to be told whether it should generate messages, not
where these messages should be printed – it isn't printing such messages
itself, but puts them in _error. apt-get uses in other methods a helper
specialisation which also prints to a stream though, so this is likely a
copy&paste error.
Git-Dch: Ignore
|
|
Conflicts:
apt-pkg/acquire-item.cc
cmdline/apt-key.in
methods/https.cc
test/integration/test-apt-key
test/integration/test-multiarch-foreign
|
|
This isn't testing much of the 'complex' parts,
but it's better than nothing for now.
Git-Dch: Ignore
|
|
gnupg is case-insensitive about keyids, so back when apt-key called it
directly any keyid was accepted, but now that we work more with the
keyid ourselves we regressed to requiring uppercase keyids by accident.
This is also inconsistent with other apt-key commands which still use
gnupg directly. A single case-insensitive grep and we are fine again.
Closes: 781696
|
|
As part of the “reproducible builds” effort [1], we have noticed that
apt could not be built reproducibly.
One issue is that it uses the __DATE__ and __TIME__ macros of the C
preprocessor to display the time of build in the online help. We believe
this information is not really useful to users, as they can always look
at the package data and metadata to figure it out.
The attached patch simply removes this information. All
non-documentation packages can then be built reproducibly with our
current experimental framework.
[David: changed the string slightly to be untranslatable as well]
Closes: 774342
|
|
cppcheck reports this error; it's not really a problem for us as the API
can actually deal with it via implicit conversion, but being explicit
can't hurt and the fewer reported errors the better.
Git-Dch: Ignore
|
|
|
|
Real webservers (like apache) actually send an error page with a 416
response, but our client didn't expect that, leaving the page on the
socket to be parsed as the response to the next request (http) or as
file content (https), which isn't what we want at all… The symptom is a
"Bad header line" as HTML usually doesn't parse that well as an HTTP
header.
This manifests itself e.g. if we have a complete file (or larger) in
partial/ which isn't discarded by If-Range as the server doesn't support
it (or it is just newer, think: mirror rotation).
It is a sort-of regression of 78c72d0ce22e00b194251445aae306df357d5c1a,
which removed the filesize - 1 trick, but this had its own problems…
To properly test this our webserver gains the ability to reply with
transfer-encoding: chunked as most real webservers will use it to send
the dynamically generated error pages.
(The tests and their binary helpers had to be slightly modified to
apply, but the patch to fix the issue itself is unchanged.)
Closes: 768797
|
|
Real webservers (like apache) actually send an error page with a 416
response, but our client didn't expect that, leaving the page on the
socket to be parsed as the response to the next request (http) or as
file content (https), which isn't what we want at all… The symptom is a
"Bad header line" as HTML usually doesn't parse that well as an HTTP
header.
This manifests itself e.g. if we have a complete file (or larger) in
partial/ which isn't discarded by If-Range as the server doesn't support
it (or it is just newer, think: mirror rotation).
It is a sort-of regression of 78c72d0ce22e00b194251445aae306df357d5c1a,
which removed the filesize - 1 trick, but this had its own problems…
To properly test this our webserver gains the ability to reply with
transfer-encoding: chunked as most real webservers will use it to send
the dynamically generated error pages.
Closes: 768797
|
|
apt-key given a long keyid reports just "OK" all the time, but doesn't
delete the mentioned key as it can't find it.
Note: In debian/experimental this was closed with
29f1b977100aeb6d6ebd38923eeb7a623e264ffe which just added the testcase
as the rewrite of apt-key had fixed this as well.
Closes: 754436
|
|
Only "recent" versions of dpkg support stdin for merge instead of a
file, so as a quick fix we delay calling it until we really need it,
which fixes most of the problem already.
Checking for a specific dpkg version here is deemed too much work, just
like using a temporary file here, and depending on a newer dpkg is too
high a requirement for this minor usecase. After all, it didn't work at
all before, so we break nothing for anybody here and can fix it if
someone complains (with a patch).
|
|
Usually they don't provide a lot in terms of what they test, but they
help in covering many lines from strictly anecdotal commands (stats,
moo) and error messages, so that stuff which really needs to be tested,
but isn't, becomes more visible in coverage reports.
Git-Dch: Ignore
|
|
Collect all hashes we can get from the source record and put them into a
HashStringList so that 'apt-get source' can use it instead of always
using the MD5sum.
We therefore also deprecate the MD5 struct member in favor of the list.
While at it, the parsing of the Files is enhanced so that records which
miss "Files" (aka MD5 checksums) are still searched for other checksums,
as those include just as much data, just not under such a nice and
catchy name.
This is a cherry-pick of 1262d35 with some dirty tricks to preserve ABI.
LP: 1098738
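A rough sketch of the idea, assuming HashStringList's push_back()/find()
interface from apt-pkg/contrib/hashes.h; the literal hash values stand
in for data parsed from a Sources stanza and are purely illustrative:

    #include <apt-pkg/hashes.h>
    #include <iostream>

    int main() {
       // Collect every checksum the record offers, not just MD5.
       HashStringList hashes;
       hashes.push_back(HashString("MD5Sum",
          "d41d8cd98f00b204e9800998ecf8427e"));
       hashes.push_back(HashString("SHA256",
          "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"));

       // A consumer like 'apt-get source' can now ask for a strong hash
       // instead of hardcoding MD5.
       if (HashString const * const sha = hashes.find("SHA256"))
          std::cout << "verify with " << sha->toStr() << std::endl;
       return 0;
    }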
|
|
Do the same with less code in apt-get. This especially ensures that the
lock file (and the parent directories) exist before we try to lock. It
also means that clean now creates the directories if they are missing,
so we return to a proper clean state.
Git-Dch: Ignore
|
|
dpkg wants to know about a package before it can be put on hold, so we
have to at least hint at its existence in the available file it
maintains to know about such stuff. The simple thing would probably be
to just feed all Packages files into dpkg as well, but what would be the
point really? Exactly, so we take a shortcut here and just create
dummies in the available file if we need to, which isn't going to be
that common as usually you are holding packages back and not off.
Who would have thought that a simple feature like setting a package on
hold requires more than 200 lines of code… at least with the testcase it
is now explicitly tested code.
|
|
Git-Dch: Ignore
|
|
By convention, if I run a tool with --help or --version I expect it to
exit successfully after showing the usage, while if I call it wrongly
(like without any parameters) I expect the usage message shown with a
non-zero exit.
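A tiny sketch of this convention (hypothetical tool, not apt's actual
option handling):

    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    static void ShowUsage(std::FILE *out) {
       std::fprintf(out, "Usage: tool [options] command\n");
    }

    int main(int argc, char **argv) {
       if (argc > 1 && (std::strcmp(argv[1], "--help") == 0 ||
                        std::strcmp(argv[1], "--version") == 0)) {
          ShowUsage(stdout);
          return EXIT_SUCCESS;   // usage was asked for: success
       }
       if (argc < 2) {
          ShowUsage(stderr);
          return EXIT_FAILURE;   // called wrongly: failure
       }
       // ... dispatch the requested command ...
       return EXIT_SUCCESS;
    }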
|
|
The change itself is no problem ABI-wise, but the removal of the old
non-dynamic hashtables is, so we bring them back for older ABIs and
happily use the now available free space to backport more recent
additions like the dynamic hashtable itself.
Git-Dch: Ignore
|
|
Git-Dch: Ignore
|
|
We can't add a new virtual method without breaking the ABI, but we can
freely add new methods, so for older ABIs we just implement this method
with a dynamic_cast; that way clients can stay more ignorant of the API
here and especially don't need to pull a very dirty trick by assuming
internal knowledge (like apt-get did here).
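A generic sketch of the trick (hypothetical classes, not apt's actual
hierarchy): the vtable layout stays untouched because the new method is
non-virtual and the cast happens inside the library:

    class Base {
    public:
       virtual ~Base() {}
       // New non-virtual method: adding it does not change the vtable
       // layout, so the ABI stays intact.
       bool SupportsFeature() const;
    };

    class Derived : public Base {
    public:
       bool FeatureImpl() const { return true; }
    };

    bool Base::SupportsFeature() const {
       // Clients just call SupportsFeature(); previously they had to do
       // this dynamic_cast themselves based on internal knowledge.
       if (Derived const * const d = dynamic_cast<Derived const *>(this))
          return d->FeatureImpl();
       return false;
    }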
|
|
Partial files are chowned by the Item base class to let the methods work
with them. Now this base class is also responsible for chowning the
files back to root instead of having various deeper levels do this.
The consequence is that all overloaded Failed() methods now call the
Item::Failed base as their first step. The same is done for Done().
The effect is that even files remaining in partial/ usually don't belong
to _apt anymore, helping sneakernets and reducing the possibility of a
bad method modifying files not belonging to it.
The change is supported by the test framework now not only supporting
being run as root, but proper permission management as well, so that
privilege dropping can be tested with it.
|
|
Private temporary directories as created by e.g. libpam-tmpdir are nice,
but they are also very effective at preventing our privilege dropping
from working, as TMPDIR will be set to a directory only root has access
to, so working with it as _apt will fail. We circumvent this by
extending our check for a usable TMPDIR setting to also check access
rights.
Closes: 765951
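A minimal sketch of such a check (function name and fallback are
illustrative, not the actual apt code):

    #include <cstdlib>
    #include <string>
    #include <unistd.h>

    // Only honour TMPDIR if the current (possibly dropped-to) user can
    // actually enter and write to it; otherwise fall back to /tmp.
    std::string UsableTempDir() {
       char const * const tmpdir = std::getenv("TMPDIR");
       if (tmpdir != NULL && access(tmpdir, R_OK | W_OK | X_OK) == 0)
          return tmpdir;
       return "/tmp";
    }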
|
|
We have been checking the space requirements for ages, but the check
uses the free blocks count, which includes the blocks reserved for usage
by root. Now that we use an unprivileged user it has no access to these
blocks anymore – and more importantly, these blocks are a reserve; they
shouldn't be used by apt without special encouragement by the user, as
it would be bad to have dpkg run out of diskspace and maintainer scripts
like man-db's skip certain actions if not enough space is freely
available.
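The distinction can be illustrated with statvfs(3) (a generic sketch,
not the actual apt check): f_bfree counts all free blocks including the
root-only reserve, while f_bavail is what an unprivileged user such as
_apt may actually use.

    #include <sys/statvfs.h>
    #include <iostream>

    int main() {
       struct statvfs buf;
       if (statvfs("/var/cache/apt/archives/", &buf) != 0)
          return 1;
       unsigned long long const free_bytes  =
          (unsigned long long)buf.f_bfree  * buf.f_frsize;
       unsigned long long const avail_bytes =
          (unsigned long long)buf.f_bavail * buf.f_frsize;
       std::cout << "free (incl. root reserve): " << free_bytes  << " bytes\n"
                 << "available to non-root:     " << avail_bytes << " bytes\n";
       return 0;
    }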
|
|
Privilege dropping breaks the download/source/changelog commands as they
require the _apt user to have write permissions in the current directory,
which is e.g. the case in /tmp, but not in /root, so we disable the
privilege dropping if we deal with such a directory, based on an idea
and code by Michael Vogt.
The alternative would be to always download to a temp directory and move
the file once done, but this breaks partial file support. To resolve
this, we could move to one of our partial/ directories, but that would
require a lock which would block root from using two of these commands
in parallel. As both seem unacceptable we instead let the user choose
what to do: either a directory is set up for _apt, downloading as root
is accepted, or – potentially even better – an unprivileged user is used
for the commands.
|
|
|
|
feature/acq-trans
Conflicts:
apt-pkg/acquire-item.cc
|
|
Git-Dch: ignore
|
|
Reworks the API involved in creating and setting up the fetcher to be a
bit more pleasant to look at and work with, as e.g. an empty string for
'no lock' isn't very nice. With the lock we can also stop creating all
our partial directories "just in case". This way we can also be a bit
more aggressive with the partial directory itself, as with a lock we
know we are going to need it.
|
|
The code is creating a secure temporary directory, but then creates
the changelog alongside the tmpdir in the same base directory. This
defeats the secure tmpdir creation, making the filename predictable.
Inject a '/' between the tmpdir and the changelog filename.
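A small illustration of this bug class (hypothetical names, not the
actual apt code):

    #include <stdlib.h>
    #include <cstdio>
    #include <string>

    int main() {
       char tmpl[] = "/tmp/apt-changelog-XXXXXX";
       char * const tmpdir = mkdtemp(tmpl);  // secure, unpredictable dir
       if (tmpdir == NULL)
          return 1;

       // Wrong: missing '/' places the file next to the tmpdir in the
       // shared base directory instead of inside it.
       std::string const wrong = std::string(tmpdir) + "pkg.changelog";
       // Right: inject the '/' so the file lives inside the secure tmpdir.
       std::string const right = std::string(tmpdir) + "/" + "pkg.changelog";

       std::printf("wrong: %s\nright: %s\n", wrong.c_str(), right.c_str());
       return 0;
    }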
|
|
The code is creating a secure temporary directory, but then creates
the changelog alongside the tmpdir in the same base directory. This
defeats the secure tmpdir creation, making the filename predictable.
Inject a '/' between the tmpdir and the changelog filename.
|
|
This prevents a failure in mktemp -d – it blindly trusts TMPDIR and
will not use something else if the directory is not there.
|
|
Conflicts:
apt-pkg/acquire-item.cc
|
|
Not really the intended usecase for apt-get clean, but users expect it
to help them in recovery and it can't really hurt as this directory
should be empty if everything was fine and proper anyway.
Closes: #762889
|
|
apt-get download and changelog as well as apt-helper reuse the acquire
system for their own purposes without requiring the directories the
fetcher wants to create, which is a problem if you run them as non-root
and the directories do not exist, as it greets you with:
E: Archives directory /var/cache/apt/archives/partial is missing. -
Acquire (13: Permission denied)
Closes: 762898
|
|
Accessing the package records to acquire this information is pretty
costly, so that information hasn't been used in many places so far. The
most noticeable user by far is EDSP at the moment, but there are ideas
to change that, which this commit tries to enable.
|
|
gnupg/gnupg2 can do verify just fine of course, so we don't need to use
gpgv here, but it is what we always used in the past, so there might be
scripts expecting a certain output. More importantly, the output of
apt-cdrom contains messages from gpg and even with all the settings we
activate to prevent it, it still shows (in some versions) a quite scary
"gpg: WARNING: Using untrusted key!" message. Keeping the use of gpgv is
the simplest way to prevent it.
We also increase the "Breaks: apt" version from libapt as it requires a
newer apt-key than might be installed in partial upgrades.
|
|
Git-Dch: Ignore
|
|
Some advanced commands can be executed without the keyring being
modified, like --verify, so this adds an option to disable the mergeback
and uses it for our gpg-calling code.
Git-Dch: Ignore
|
|
We were down to at most two keyrings before, but gnupg upstream plans
to drop support for multiple keyrings in the long run, so with a single
keyring we hope to be future-proof – and 'apt-key adv' isn't a problem
anymore as every change to the keys is merged back, so we now have the
same behavior as before, but support an unlimited amount of
trusted.gpg.d keyrings.
|
|
For some advanced usecases it might be handy to specify the secret
keyring to be used (e.g. as it is used in the testcases), but the normal
gnupg option for specifying it might not be available forever:
http://lists.gnupg.org/pipermail/gnupg-users/2013-August/047180.html
Git-Dch: Ignore
|
|
|
|
Git-Dch: Ignore
|
|
Git-Dch: Ignore
|
|
If both are available APT will still prefer gpg over gpg2 as it is a bit
more lightweight, but it shouldn't be a problem to use one or the other
(at least at the moment, who knows what will happen in the future).
|
|
'apt-key help' and incorrect usage do not need a functioning gnupg
setup, and we shouldn't try to set up gnupg before we actually test
whether it is available (and print a message if it is not).
|
|
gnupg has a hard limit of 40 (at the moment) keyrings per invocation,
which can be exceeded with (many) repositories. That is rather
unfortunate as the long-run goal was to drop the gnupg dependency at
some point in the future, but this can now be considered missed and
dropped. It also means that 'apt-key adv' commands might not have the
behaviour one would expect them to have, as it mainly operates on a big
temporary keyring, so commands modifying keys will break. Doing this was
never a good idea anyway though, so let's just hope nothing breaks too
badly.
Closes: 733028
|