|
|
|
Bye, bye, old friend.
|
|
This was dropped in autotools as I found no use of the HAVE_PTSNAME_R
macro. Turns out it was typoed as HAVE_PTS_NAME_R. Fix the #ifdef and
add checks to CMake for it.
Closes: #833674
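For illustration, a guarded fallback of roughly this shape is what the fixed #ifdef enables (a sketch under assumed names, not apt's actual code; on the CMake side a check_symbol_exists()-style test would define HAVE_PTSNAME_R):

// Sketch: prefer the reentrant ptsname_r() when the build system
// detected it, otherwise fall back to the non-reentrant ptsname().
// (ptsname_r is a GNU extension and needs _GNU_SOURCE on glibc.)
#include <cstring>
#include <stdlib.h>

static bool GetSlavePtyName(int masterfd, char *buf, size_t buflen)
{
#ifdef HAVE_PTSNAME_R
   return ptsname_r(masterfd, buf, buflen) == 0;
#else
   char const * const name = ptsname(masterfd);   // points to static storage
   if (name == nullptr || strlen(name) >= buflen)
      return false;
   strcpy(buf, name);
   return true;
#endif
}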
|
|
If we receive an interrupt, set a flag and do not abort
immediately without waiting for the child. Once the child
has exited, exit with an error if the interrupted flag is set.
Closes: #832593
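A minimal sketch of that pattern, assuming a SIGINT handler and waitpid() (illustrative names, not apt's actual code):

#include <cerrno>
#include <csignal>
#include <cstdlib>
#include <sys/wait.h>

static volatile std::sig_atomic_t interrupted = 0;
static void handle_sigint(int) { interrupted = 1; }

int WaitForChild(pid_t child)
{
   std::signal(SIGINT, handle_sigint);
   int status = 0;
   // keep waiting even if the call is interrupted by the signal
   while (waitpid(child, &status, 0) == -1 && errno == EINTR)
      ;
   if (interrupted != 0)
      return EXIT_FAILURE;   // child is done, but we were interrupted
   return WIFEXITED(status) ? WEXITSTATUS(status) : EXIT_FAILURE;
}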
|
|
Introduce an initial CMake build system. This build system can build
a fully working apt system without translation or documentation.
The FindBerkelyDB module is from kdelibs, with some small adjustments
to also look in db5 directories.
Initial work on this CMake build system started in 2009 and was
resumed in August 2016.
|
|
Create a temporary configuration file with a dump of our
configuration and pass that to apt-key.
LP: #1607283
|
|
|
|
Create a local exiter object which cleans up files on exit.
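A hedged sketch of such an exiter object: an RAII guard (hypothetical name) whose destructor removes the registered files whenever the scope is left, regardless of the return path taken:

#include <string>
#include <vector>
#include <unistd.h>

class TemporaryFileCleaner
{
   std::vector<std::string> files;
public:
   void Add(std::string const &path) { files.push_back(path); }
   ~TemporaryFileCleaner()
   {
      for (auto const &f : files)
         unlink(f.c_str());   // best effort; errors during cleanup are ignored
   }
};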
|
|
Previously, when the data file could be created but the sig file could not,
we would unlink sig, not data (and vice versa).
|
|
Follow-up to b58e2c7c56b1416a343e81f9f80cb1f02c128e25.
Still a regression of sorts from 8b79c94af7f7cf2e5e5342294bc6e5a908cacabf.
Closes: 832044
|
|
If a solver/planner exits before apt is done writing, we will generate
write errors. Solvers like 'dump' can be pretty quick to fail but still
produce a valid EDSP error report which apt should read, parse and display
instead of just discarding, even though we had write errors.
|
|
There is no point in trying to perform Write/Read on a FileFd which
has already failed, as they aren't going to work as expected, so we should
make sure that they fail early and hard.
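As a rough sketch of the "fail early and hard" idea (illustrative, not the real FileFd code), the guard amounts to refusing any I/O once the object is marked as failed:

class File
{
   bool failed = false;
public:
   bool Failed() const { return failed; }
   bool Write(void const *data, unsigned long long size)
   {
      if (failed)                 // a broken fd never magically recovers
         return false;
      // ... perform the actual write, setting failed = true on error ...
      (void)data; (void)size;
      return true;
   }
};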
|
|
Reported-By: cppcheck
Gbp-Dch: Ignore
|
|
Simulations are frequently run by unprivileged users who naturally
don't have the permissions to write to the default location for the EIPP
file. Either way is bad: running in simulation mode doesn't mean we
don't want to run the logging (as EIPP runs the same regardless of a
simulated or 'real' run), but showing the warnings is relatively
pointless in the default setup. So, in case we would produce errors and
perform a simulation, we discard the warnings and carry on.
Running apt with an external planner wouldn't have generated these
messages, by the way.
Closes: 832614
|
|
If another file in the transaction fails and hence dooms the transaction,
we can end up in a situation in which a -patched file (= rred writes the
result of the patching to it) remains in the partial/ directory.
The next apt call will perform the rred patching again and write its
result again to the -patched file, but instead of starting with an empty
file as intended it will overwrite the content previously in the file.
That has the same result if the new content happens to be longer than
the old content, but if it isn't, parts of the old content remain in the
file. The file will still pass verification as the new content written to it
matches the hashes, and if the entire transaction passes the file will be
moved to the lists/ directory, where it might or might not trigger errors
depending on whether the old content which remained forms a valid file
together with the new content.

This has no real security implications as no untrusted data is involved:
the old content consists of a base file which passed verification and a
bunch of patches which all passed multiple verifications as well, so the
old content isn't controllable by an attacker and the new one isn't
either (as the new content alone passes verification). So the best an
attacker can do is letting the user run into the same issue as in the
report.
Closes: #831762
|
|
We read the entire input file we want to patch anyhow, so we can also
calculate the hash for that file and compare it with what we had
expected it to be.
Note that this isn't really a security improvement as a) the file we
patch is trusted & b) if the input is incorrect, the result will hardly be
matching, so this is just for failing slightly earlier with a more
relevant error message (although in terms of rred the error is ignored
and a complete download is attempted instead).
|
|
The flush call is a no-op in most FileFd implementations, so this isn't
as critical as it might sound, as the only non-trivial implementation is
in the buffered writer, which tends not to be used to buffer another
buffer…
|
|
APT doesn't know which packages will be triggered in the course of
actions, so it can't plan to see them for progress beforehand, but if it
sees that dpkg says a package was triggered we can add additional
states. This is pretty much magic – after all it sets back the progress
– and there are corner cases in which this will result in incorrect
totals (packages in partial states may or may not lose trigger states),
but the worst that can happen is that the progress is slightly
incorrect and doesn't reach 100%. So be it: that is better than being
stuck at 100% for a while as apt doesn't realize that a bunch of triggers
still need to be processed.
|
|
Hardcoding /var/crash means we can't test it properly and it isn't
really our style.
|
|
The progress reporting for a package scheduled for purging only included
the states dpkg passes through while actually purging the package – if
the package was fully installed before, dpkg will first pass through all
remove states before purging it, so in the interest of consistent
reporting our progress reporting should do that, too.
|
|
Gbp-Dch: Ignore
|
|
The existing cleanup was happening only for packages which had a status
change (install -> uninstalled), which is the most frequent but not the
only case – you can e.g. set autobits explicitly with apt-mark.
This would leave stanzas in the states file declaring a package to be
manually installed – which is the default value for a package not listed
at all, so we can just as well drop it from the file.
|
|
We have supported installing ./foo.deb (and ./foo.dsc for source) for a
while now, but it can be a bit clunky to work with those directly if you
e.g. build packages locally in a 'central' build-area.
The changes files also include hashsums and can be signed, so this can
also be considered an enhancement in terms of security, as a user "just"
has to verify the signature on the changes file rather than
checking all deb files individually in these manual installation
procedures.
|
|
If a user explicitly requests the download of arch:all, apt shouldn't get
in the way and perform its detection dance of whether arch:all packages
are (also) in arch:any files or not.
This e.g. allows setting arch=all on a source with such a field (or one
which doesn't support all at all, but has the arch:all files, like Debian
itself ATM) to get only the arch:all packages from there instead of
behaving like a no-op.
Reported-By: Helmut Grohne on IRC
|
|
Moving code around into some more dedicated methods; no effective code
change in itself.
Gbp-Dch: Ignore
|
|
Theoretically it should be enough to change the Dir setting and have apt
pick the dpkg/status file from that. Also, it should be consistently
affected by RootDir. Neither was really the case though, so a user had
to explicitly set it, too (or ignore it and have, or not have, the expected
side effects caused by it).

This commit tries to better guess the location of the dpkg/status file
by setting dir::state::status to a naive "../dpkg/status" – except that
this setting would be interpreted as relative to the CWD and not
relative to the dir::state directory. Also, the status file isn't really
relative to the state files apt has in /var/lib/apt/, as is evident if we
consider that apt/ could be a symlink to some place else and "../dpkg"
would not be affected by it. So what we do here is an explicit replace of
a trailing apt/ with dpkg/ – similar to how we create directories if it
ends in apt/.

As this is a change, it has the potential to cause regressions insofar
as the dpkg/status file of the "host" system is no longer used if you
set a "chroot" system via the Dir setting – but that tends to be
intended, and the old behaviour caused people to painfully figure out that
they had to set this explicitly, so it now works more in terms of how the
other Dir settings work (aka "as expected"). If using the host status
file is really intended, it is in fact easier to set this explicitly
compared to setting the new "magic" location explicitly.
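A sketch of the described guess (illustrative, not apt's exact code): derive the status file from the state directory by replacing a trailing apt/ with dpkg/, instead of letting "../dpkg/status" be resolved against the CWD:

#include <string>

static std::string GuessDpkgStatusFile(std::string stateDir) // e.g. "/var/lib/apt/"
{
   std::string const apt = "apt/";
   if (stateDir.size() >= apt.size() &&
       stateDir.compare(stateDir.size() - apt.size(), apt.size(), apt) == 0)
      stateDir.replace(stateDir.size() - apt.size(), apt.size(), "dpkg/");
   return stateDir + "status";   // -> "/var/lib/dpkg/status"
}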
|
|
Very unlikely, but if the parent is /dev/null, the child empty and the
grandchild a value, we returned /dev/null/value, which doesn't exist. So
hardly a problem, but for best operability we should be consistent in
our work and always return /dev/null.
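A sketch of the consistency rule (illustrative, mirroring the described behaviour rather than the real path-combining code): anything combined onto /dev/null stays /dev/null:

#include <string>

static std::string CombinePath(std::string const &base, std::string const &file)
{
   if (base == "/dev/null")
      return base;                 // never build /dev/null/<something>
   if (base.empty() || file.empty())
      return base.empty() ? file : base;
   if (file.front() == '/')
      return file;                 // absolute paths win
   return base + (base.back() == '/' ? "" : "/") + file;
}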
|
|
Added in dpkg in commit 90324cfa942ba23d5d44b28b1087fbd510340502.
|
|
If we don't give a specific error to report up, it is likely that all
errors currently in the error stack are equally important, so reporting
just one could turn out to be confusing, e.g. if name resolution failed
in a SRV record list.
|
|
If we have files in partial/ from a previous invocation or similar,
those could be symlinks created by file:// sources. The code is
expecting only real files though and happily changes owner,
modification times and permissions on the file the symlink points to,
which tends to be a file we have no business in touching in this way.
Permissions of symlinks shouldn't be changed and changing the owner is
usually pointless too, so just to be sure we pick the easy way out: use
lchown, and check for symlinks before chmod/utimes.
Reported-By: Mattia Rizzolo on IRC
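A hedged sketch of that handling (illustrative, not the actual acquire code): look at the file with lstat() first, change ownership with lchown(), and skip chmod()/utimes() for symlinks since those calls would follow the link and modify the target:

#include <sys/stat.h>
#include <sys/time.h>
#include <unistd.h>

static bool FixupPartialFile(char const *path, uid_t uid, gid_t gid, mode_t mode)
{
   struct stat st;
   if (lstat(path, &st) != 0)
      return false;
   if (lchown(path, uid, gid) != 0)   // safe for both symlinks and files
      return false;
   if (S_ISLNK(st.st_mode))           // don't touch the link target
      return true;
   if (chmod(path, mode) != 0)
      return false;
   return utimes(path, nullptr) == 0; // "now" is only for illustration
}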
|
|
If other logs can't be written this is a warning too,
so for consistency's sake translate the errors to warnings.
|
|
Unlikely to happen in practice, and I wonder more how I could miss these
in earlier reviews, but okay, let's fix it for consistency now.
|
|
If libapt has builtin support for a compression type it will create a
dummy compressor struct with the Binary set to 'false', as it will catch
these before using the generic pipe implementation which uses the
Binary. The catching happens based on configured Names though, so you
can actually force apt to use the external binaries even if it would
usually use the builtin support. That logic fails though if you don't
happen to have these external binaries installed, as it will fall back to
calling 'false', which ends in confusing 'Write error's.
So, this is again something you only encounter in constructed testing.
Gbp-Dch: Ignore
|
|
This is insofar pointless as the first match will deal with the
extension, so we don't actually ever use these second instances –
probably for the better, as most need arguments to behave as expected &
more importantly: the point of the exercise is disabling their use for
testing purposes.
Gbp-Dch: Ignore
|
|
All apt versions support numeric as well as 3-character timezones just
fine and it's actually hard to write code which doesn't "accidentally"
accept them. So why change? Documenting the Date/Valid-Until fields in
the Release file is easy to do in terms of referencing the
datetime format used e.g. in Debian changelogs (policy §4.4). This
format specifies only the numeric timezones though, not the nowadays
obsolete 3-character ones, so in the interest of least surprise we should
use the same format even though it carries a small risk of regression
in other clients (which encounter repositories created with
apt-ftparchive).

In case it really regresses in practice, the hidden option
-o APT::FTPArchive::Release::NumericTimezone=0
can be used to go back to good old UTC as timezone.

The EDSP and EIPP protocols use this 'new' format; the text interface
used to communicate with the acquire methods does not, for compatibility
reasons, even if none of our methods would be affected and I doubt any
other would (in these instances the timezone is 'GMT' as that is what
HTTP/1.1 requires). Note that this is only true for apt talking to
methods; (libapt-based) methods talking to apt will respond with the
'new' format. It is therefore strongly advised to support both also in
method input.
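As a worked example of the two formats (a sketch; the helper is illustrative and not apt-ftparchive's actual code):

#include <cstdio>
#include <ctime>

int main()
{
   char buf[64];
   std::time_t const now = std::time(nullptr);
   struct tm t;
   gmtime_r(&now, &t);
   // old style, 3-character timezone: "Thu, 21 Jul 2016 10:14:22 UTC"
   std::strftime(buf, sizeof(buf), "%a, %d %b %Y %H:%M:%S UTC", &t);
   std::printf("%s\n", buf);
   // new style, numeric timezone:     "Thu, 21 Jul 2016 10:14:22 +0000"
   std::strftime(buf, sizeof(buf), "%a, %d %b %Y %H:%M:%S +0000", &t);
   std::printf("%s\n", buf);
   return 0;
}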
|
|
As the volatile sources are parsed last, they were sorted behind the
dpkg/status file and hence treated as a downgrade, which isn't
really what you want to happen, as from a user POV it's an upgrade.
|
|
If we have a (e.g. locally built) deb file installed and try to
install it again, apt complained about this being a downgrade, but it
wasn't, as it is the very same version… it was just confused into not
merging the versions together, which then looks like a downgrade.
The same size assumption is usually good, but given that volatile files
are parsed last (even after the status file) the base assumption no
longer holds; it is easy to adapt though without actually changing
anything in practice.
|
|
Traditionally all providers of something are protected, as apt
can't know which of them is actually really providing the functionality
for the user, ensuring that we don't propose the removal of used stuff,
but that of course also keeps stuff around which could be removed.
That can cause the collection of multiple old providers until the
provided package is itself no longer needed (e.g. out-of-tree kernel
modules). We combat this by marking providers only from the newest
source package version so that old providers built by older versions of
the same source package can be garbage collected.
|
|
Like the previous commit, this shouldn't change behavior at all, but
besides being more explicit and perhaps faster it's also considerably
shorter (granted, mostly by if0-block elimination).
Gbp-Dch: Ignore
|
|
Piling everything into a single if statement always made my head wobble,
but it doesn't even have a benefit, as the most common case of a package
which isn't installed passes all of the old if and lands in the
non-existent else-part of the inner if. So besides being a subjective
cleanup of what goes on, this implementation should also be a bit faster.
No change in behavior should be present.
Gbp-Dch: Ignore
|
|
Writing first means that even in the event of a power failure the
autobit is saved for future processing instead of "forgotten", so that
the package would be treated as manually installed.
In some cases we have to re-run the writing after dpkg is done though,
as dpkg can let packages disappear, and in such cases apt will move
autobits around (or in that case non-autobits) which we need to store.
|
|
If we can't read the old file we can't just move forward, as that would
potentially discard old data (especially other fields). We let
it fail only after we are done writing the new file, so a user has the
chance to look into and merge the new data (which is otherwise
discarded).
|
|
We deploy atomic renames for some files, but these renames also happen
if something about the file failed, which isn't really the point of the
exercise…
Closes: 828908
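The intended pattern, as a minimal sketch (hypothetical naming, not the real transaction code): write to a temporary name and only rename it over the final file when every write succeeded; otherwise remove the temporary file and keep the old one:

#include <cstdio>
#include <string>
#include <unistd.h>

static bool AtomicReplace(std::string const &final, std::string const &content)
{
   std::string const tmp = final + ".tmp";   // hypothetical naming scheme
   std::FILE *f = std::fopen(tmp.c_str(), "w");
   if (f == nullptr)
      return false;
   bool ok = std::fwrite(content.data(), 1, content.size(), f) == content.size();
   ok = (std::fclose(f) == 0) && ok;
   if (ok && std::rename(tmp.c_str(), final.c_str()) == 0)
      return true;
   unlink(tmp.c_str());                      // don't promote a broken file
   return false;
}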
|
|
This reverts commit 2b8221d66a8284042fc53c7bbb14bb9750e9137f.
Avoiding the use of GCC >= 5 features lets us go back to 4.8, simplifying
the travis setup again as well as reducing the backport requirements in
general.
This is possible because the std::get_time use requiring GCC >= 5 in
9febc2b238e1e322dce1f94ecbed46d595893b52 was replaced by handrolling it
in 1d742e01470bba27715a8191c50adde4b39c2f19, so the remaining uses are
just small conveniences we can do without.
Gbp-Dch: Ignore
|
|
Usually these config options are set to sensible values, but if init
isn't run or the user interferes with configuration clearing or similar,
the options could indeed carry an empty value, which will result in
FindDir returning '/'. That feels kinda wrong, but as this is a public
interface there isn't much we can do about it, so we instead make it so
that we get back the special file /dev/null, which we know how to deal
with in such cases.
|
|
Julian noticed on IRC that I fell victim to a lovely false friend by
referring to a 'planer' all the time, even though those are
machines to e.g. remove splinters from woodwork ("make stuff plane").
The term I meant is written this way in German (= with a single n),
but in English it has two, aka: 'planner'.
As that is unreleased code I am switching all instances without any
transitional provisions. That is also the reason why it is skipped in the
changelog.
Thanks: Julian Andres Klode
Gbp-Dch: Ignore
|
|
Needed for the previous change
|
|
If a package file is formatted in a way that no space
follows a deprecated "<", we would reformat it to "<=" and
increase the length of the output by 1, which can break.
Under normal circumstances with "<=" this should not be an
issue.
Closes: #828812
|
|
In 385d9f2f23057bc5808b5e013e77ba16d1c94da4 I implemented the storage of
scenario files based on enabling this by default for EIPP, but I
implemented it first optionally for EDSP to have it independent.
The reasons mentioned in the earlier commit (debugging and bug reports)
obviously apply here, especially as EIPP solutions aren't user-approved,
are nearly impossible to verify before starting the execution, and at the
time of error the scenario has changed already, so that reproducing the
issue becomes hard(er).
|
|
Freeing 'Install' for future use as an interface for "dpkg --install",
which is currently not used by any existing planner, so its
implementation will be delayed until then.
|