We use clock() as a very cheap way of getting a "random" value, but the
manpage warns that this could return -1, so we should be dealing with
this. Additionally, e.g. on hurd-i386 the value increases only slowly –
too slow for our fast-running randomness tests, hence producing the
same range in both samples. We therefore introduce a simple busy-wait
loop (as clock() counts processor time used by the program) in the
test, which delays the second sample just enough to make our randomness
a bit more predictable.
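For illustration, the two pieces look roughly like this sketch (the
names and the fallback value are made up for the example):

    #include <ctime>

    // seed cheaply from clock(), handling the documented -1 failure case
    static unsigned int cheap_seed() {
       clock_t const c = clock();
       return (c == static_cast<clock_t>(-1)) ? 42u /* arbitrary fallback */
                                              : static_cast<unsigned int>(c);
    }

    // busy-wait so that clock() (processor time used) visibly advances
    // between two samples even on slow-ticking platforms like hurd-i386
    static void burn_cpu_for_a_tick() {
       clock_t const start = clock();
       while (clock() - start < CLOCKS_PER_SEC / 100)
          ; // spinning on purpose: sleeping would not consume CPU time
    }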
|
|
Comparing floating point numbers is always fun and in this instance
a 9 < 9.0 is "somehow" true on hurd-i386, letting the tests fail by
reporting that too much progress was achieved. A bit mysterious, but
with some rework we can use code which avoids dealing with floats in
this way entirely and makes our testcases happy.
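A sketch of the integer-based approach (illustrative names, not the
actual apt code):

    // compare whole percentage points instead of doubles, so platform
    // quirks in float comparison can't make 9 < 9.0 "true" anymore
    static bool progress_advanced(unsigned long long const done,
                                  unsigned long long const total,
                                  unsigned int &last_percent) {
       if (total == 0)
          return false;
       unsigned int const percent =
          static_cast<unsigned int>((done * 100) / total); // integer math
       if (percent <= last_percent)
          return false;
       last_percent = percent;
       return true;
    }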
|
|
With b4450f1dd6bca537e60406b2383ab154a3e1485f we dropped what we
calculated here later on, and now that we don't need it in the meantime
either, we can just skip the busy work by default and expect dpkg to do
the right thing, also dropping our little "last explicit configures"
removal trick introduced in that same commit.
This enables the last of a bunch of previously experimental options;
some of them still exist, but they are very special and hence not
really worth documenting anymore (especially as the documentation would
need to be rewritten entirely now), which is why it is nearly
completely dropped.
The order of configuration stanzas in the simulation code changes
slightly as it no longer concerns itself with finding the 'right'
order; any order is valid anyhow as long as the entire set happens in
the same call.
|
|
If a planner leaves actions to be figured out by dpkg in pending calls,
these actions aren't mentioned in a simulation. While that might be
a good thing for debugging, it would be a change in behavior and,
especially if a planner avoids explicit removals, could be confusing
for users. As such we perform the same 'trick' as in the dpkg
implementation by performing explicitly what would be done by the
pending calls.
To save us some work and avoid desyncs we commit a layer violation by
using deb/ code in the generic simulation – and further we perform an
ugly dynamic_cast to avoid breaking the ABI for nothing; aptitude is
the only other user of the simulation class according to codesearch.d.n
and for that our little trick works. It just isn't working if you
happen to extend pkgSimulate or otherwise manage to call the protected
Go methods directly – which isn't very realistic/practical.
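In the abstract, the probing looks like this sketch with stand-in type
names:

    struct pkgPM { virtual ~pkgPM() = default; };           // stand-in base
    struct DebPM : pkgPM { void ExpandPendingCalls() {} };  // deb/-specific

    // the generic simulation only holds a base pointer, so it probes at
    // runtime for the deb-specific type instead of breaking the ABI
    static void SimulatePending(pkgPM *const pm) {
       if (auto *const deb = dynamic_cast<DebPM *>(pm))
          deb->ExpandPendingCalls(); // do explicitly what --pending would do
       // otherwise: carry on without the expansion, as before
    }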
|
|
The user has to approve the removal of a crossgraded package as it
might be needed to remove it (temporarily) in the process, but in most
cases we can happily avoid the removal and let dpkg unpack over it,
skipping the remove. This has some effects on progress reporting and on
how we deal with selections, which makes this a tiny bit complicated.
|
|
If the URI had no password the username was ignored
|
|
To prevent accidents like adding http sources while using tor+http, it
can make sense to allow disabling methods. It might even make sense to
allow "redirections" and adding "symlinked" methods via configuration.
This could e.g. allow using different options for certain sources by
adding and configuring a "virtual" new method which picks up the config
based on the name it was called with, like e.g. http does if called as
tor+http.
|
|
Socks support is a requested feature insofar as the internet actually
believes Acquire::socks::Proxy would exist. It doesn't, and this commit
isn't adding it as that isn't how our configuration works, but it
allows Acquire::http::Proxy="socks5h://…". The HTTPS method was changed
already to support socks proxies (all versions) via curl. This commit
implements only SOCKS5 (RFC 1928) with no authentication or
user&password authentication (RFC 1929), but not GSSAPI, which is
required by the RFC. The 'h' in the protocol name further indicates
that DNS resolution is delegated to the socks proxy rather than
performed locally.
The implementation works and was tested with Tor as socks proxy, for
which implementing only socks5h can actually be considered a feature.
Closes: 744934
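As a usage illustration (127.0.0.1:9050 is merely Tor's customary local
SOCKS port, not something this commit hardcodes), an apt.conf snippet
would be:

    // route http traffic through a local SOCKS5 proxy, letting the
    // proxy perform the DNS resolution as indicated by the 'h' suffix
    Acquire::http::Proxy "socks5h://127.0.0.1:9050";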
|
|
Having the detection handled in specific (http) workers means that a
redirection loop over different hostnames isn't detected. It's also not
a good idea to have this implemented in each method independently, even
if it would work.
|
|
apt-transports not shipped in apt directly are usually named
apt-transport-% with % being what is in the name of the transport.
tor additionally introduced aliases via %+something, which isn't a bad
idea, so we strip the +something part from the method name before
suggesting the installation of an apt-transport-% package.
This spares us the maintenance of a list of existing transports, which
would create a two-class system of known and unknown transports that
would be quite arbitrary and is unfriendly to backports.
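The stripping amounts to something like this sketch (function name made
up):

    #include <string>

    // "tor+http" -> "apt-transport-tor", plain "foo" -> "apt-transport-foo"
    static std::string SuggestedTransportPackage(std::string const &method) {
       return "apt-transport-" + method.substr(0, method.find('+'));
    }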
|
|
Same reason and implementation as for configure.
|
|
A planner might not explicitly configure all packages, but we need to
know all packages which will be configured for progress reporting and to
tell the hook scripts about them as they rely on this for their own
functionality.
|
|
If we want a package to be purged from the system, tell dpkg in the
ordering (if it has to touch it explicitly) to remove it, and cover the
purging of the config files at the end with a --purge --pending call.
That should help packages which move conffiles between packages do so
correctly even if the user is purging packages directly in big actions
like dist-upgrades involving many packages.
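Illustratively, the resulting dpkg interaction is along these lines
(the package name is an example):

    # during the ordered run: remove only, leaving conffiles in place
    dpkg --remove foo
    # at the very end, sweep up all config files still marked for purging
    dpkg --purge --pending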
|
|
Implemented a long while ago, and now, with relatively good progress
reporting involving triggers, is a good time to finally try delaying
the execution of triggers across dpkg invocations by default.
Note: The bugreport also talks about 'smarter' configuration, which is
a much bigger part and approached from multiple directions, but doesn't
really involve triggers per se, so considering it decoupled should help
in getting it done…
Closes: #626599
|
|
Telling dpkg early on that we are going to remove these packages later
helps it with auto-deconfiguration decisions, and it's another area
where a planner can ignore the nitty-gritty details and let dpkg decide
the course of action if there are no special requirements.
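Sketched as the underlying dpkg interface (the package name is an
example):

    # announce the upcoming removal before any unpacking starts, so dpkg
    # can take it into account for auto-deconfiguration decisions
    echo "foo deinstall" | dpkg --set-selections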
|
|
dpkg decides certain things on its own based on selections, and
especially if we want to call --pending on purge/remove actions we need
to ensure a clean slate, as otherwise we surprise the user by removing
packages we weren't allowed to remove by the user in this run (the
selection might be an overarching plan for the not-yet "future").
Ideally dpkg would have some kind of temporary selection interface for
this case, but it hasn't, so we make it temporary ourselves, with the
risk of losing state if we don't manage to restore it.
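Conceptually it is a save-modify-restore dance around the run, sketched
here in shell with an illustrative path:

    # remember the user's overarching plan …
    dpkg --get-selections > /tmp/selections.bak
    # … set only the selections belonging to this run, perform it …
    # … and restore the plan afterwards, hoping nothing broke in between
    dpkg --set-selections < /tmp/selections.bak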
|
|
Having long commandlines split into two is a huge problem if it
happens, and additionally, if we want to introduce planners which
perform less micromanagement, it's a good idea to leave the details for
dpkg to decide. In practice this doesn't work unconditionally yet as a
bug is hiding in the ordering code of dpkg, but it works if apt imposes
its ordering, so for now this commit at least allows solving the first
problem.
|
|
APT (usually) knows which packages are essential and which are not, so
we can avoid passing this force flag to dpkg unconditionally, provided
the user hasn't chosen a non-default essential handling obscuring that
information.
|
|
Bye, bye, old friend.
|
|
This was dropped in autotools as I found no use of the HAVE_PTSNAME_R
macro. Turns out it was typoed as HAVE_PTS_NAME_R. Fix the #ifdef and
add checks to CMake for it.
Closes: #833674
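The CMake side of such a check plausibly looks like this (a sketch, not
necessarily the exact apt code; ptsname_r is a GNU extension, hence the
feature macro):

    include(CheckSymbolExists)
    set(CMAKE_REQUIRED_DEFINITIONS -D_GNU_SOURCE)
    # defines HAVE_PTSNAME_R when the symbol is available
    check_symbol_exists(ptsname_r "stdlib.h" HAVE_PTSNAME_R)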
|
|
If we receive an interrupt, set a flag and do not abort
immediately without waiting for the child. Once the child
exited, exit with an error if the interrupted flag is set.
Closes: #832593
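A self-contained sketch of the pattern (helper names are made up):

    #include <cerrno>
    #include <csignal>
    #include <cstdlib>
    #include <sys/wait.h>

    static volatile sig_atomic_t interrupted = 0;
    static void note_interrupt(int) { interrupted = 1; } // flag only

    static int wait_for_child(pid_t const child) {
       struct sigaction sa {};
       sa.sa_handler = note_interrupt;
       sigemptyset(&sa.sa_mask);
       sigaction(SIGINT, &sa, nullptr);

       int status = 0;
       while (waitpid(child, &status, 0) == -1 && errno == EINTR)
          ; // interrupted by the signal: keep waiting for the child

       if (interrupted)
          return EXIT_FAILURE; // child is gone, now report the interruption
       return WIFEXITED(status) ? WEXITSTATUS(status) : EXIT_FAILURE;
    }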
|
|
Introduce an initial CMake buildsystem. This build system can build
a fully working apt system without translation or documentation.
The FindBerkeleyDB module is from kdelibs, with some small adjustments
to also look in db5 directories.
Initial work on this CMake build system started in 2009, and was
resumed in August 2016.
|
|
Create a temporary configuration file with a dump of our
configuration and pass that to apt-key.
LP: #1607283
|
|
Create a local exiter object which cleans up files on exit.
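The classic RAII shape of such an exiter, as an illustrative sketch:

    #include <string>
    #include <vector>
    #include <unistd.h>

    // removes the registered files when the scope is left – on success,
    // on error paths and on early returns alike
    class Exiter {
       std::vector<std::string> files;
    public:
       void RemoveOnExit(std::string file) { files.emplace_back(std::move(file)); }
       ~Exiter() { for (auto const &f : files) unlink(f.c_str()); }
    };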
|
|
Previously, when data could be created and sig not, we would unlink
sig, not data (and vice versa).
|
|
Followup of b58e2c7c56b1416a343e81f9f80cb1f02c128e25.
Still a regression of sorts of 8b79c94af7f7cf2e5e5342294bc6e5a908cacabf.
Closes: 832044
|
|
If a solver/planner exits before apt is done writing, we will generate
write errors. Solvers like 'dump' can be pretty quick in failing, but
produce a valid EDSP error report which apt should read, parse and
display instead of just discarding even though we had write errors.
|
|
There is no point in trying to perform Write/Read on a FileFd which has
already failed, as they aren't going to work as expected, so we should
make sure that they fail early and hard.
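The guard boils down to a pattern like this simplified stand-in (not
apt's actual FileFd internals):

    #include <unistd.h>

    struct FileFdLike {
       int fd = -1;
       bool failed = false;

       // refuse further I/O once the descriptor is in a failed state:
       // fail early and hard instead of appearing to half-work
       bool Write(void const *const buf, size_t const size) {
          if (failed)
             return false;
          ssize_t const n = ::write(fd, buf, size);
          if (n < 0 || static_cast<size_t>(n) != size) {
             failed = true;
             return false;
          }
          return true;
       }
    };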
|
|
Reported-By: cppcheck
Gbp-Dch: Ignore
|
|
Simulations are frequently run by unprivileged users who naturally
don't have the permissions to write to the default location for the
eipp file. Both skipping the logging and failing loudly would be bad:
running in simulation mode doesn't mean we don't want the logging (as
EIPP runs the same regardless of a simulated or 'real' run), but
showing the warnings is relatively pointless in the default setup. So,
in case we would produce errors while performing a simulation, we
discard the warnings and carry on.
Running apt with an external planner wouldn't have generated these
messages, by the way.
Closes: 832614
|
|
If another file in the transaction fails and hence dooms the
transaction, we can end up in a situation in which a -patched file
(= rred writes the result of the patching to it) remains in the
partial/ directory.
The next apt call will perform the rred patching again and write its
result again to the -patched file, but instead of starting with an
empty file as intended it will overwrite the content previously in the
file. That has the same result if the new content happens to be longer
than the old content, but if it isn't, parts of the old content remain
in the file. The file will still pass verification, as the new content
written to it matches the hashes, and if the entire transaction passes,
the file will be moved to the lists/ directory, where it might or might
not trigger errors depending on whether the old content which remained
forms a valid file together with the new content.
This has no real security implications as no untrusted data is
involved: The old content consists of a base file which passed
verification and a bunch of patches which all passed multiple
verifications as well, so the old content isn't controllable by an
attacker, and the new one isn't either (as the new content alone passes
verification). So the best an attacker can do is letting the user run
into the same issue as in the report.
Closes: #831762
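Reduced to plain POSIX calls, the pitfall and its avoidance look like
this (filename illustrative; apt's FileFd wraps such calls):

    #include <fcntl.h>

    // wrong: without truncation, the tail of longer old content survives
    int stale = open("Packages.xz-patched", O_WRONLY | O_CREAT, 0644);
    // right: start from an empty file whenever the patching is redone
    int fresh = open("Packages.xz-patched", O_WRONLY | O_CREAT | O_TRUNC, 0644);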
|
|
We read the entire input file we want to patch anyhow, so we can also
calculate the hash for that file and compare it with what we had
expected it to be.
Note that this isn't really a security improvement as a) the file we
patch is trusted & b) if the input is incorrect, the result will hardly
be matching, so this is just for failing slightly earlier with a more
relevant error message (although, in rred's case, it is ignored and a
complete download is attempted instead).
|
|
The flush call is a no-op in most FileFd implementations, so this isn't
as critical as it might sound, as the only non-trivial implementation
is in the buffered writer, which tends not to be used to buffer another
buffer…
|
|
APT doesn't know which packages will be triggered in the course of the
actions, so it can't plan for them in the progress beforehand, but if
it sees that dpkg says a package was triggered we can add the
additional states. This is pretty much magic – after all it sets back
the progress – and there are cornercases in which this will result in
incorrect totals (packages in partial states may or may not lose
trigger states), but the worst that can happen is that the progress is
slightly incorrect and doesn't reach 100%; so be it. Better than being
stuck at 100% for a while as apt isn't realizing that a bunch of
triggers still need to be processed.
|
|
Hardcoding /var/crash means we can't test it properly and it isn't
really our style.
|
|
The progress reporting for a package scheduled for purging only
included the states dpkg passes through while actually purging the
package – if the package was fully installed before, dpkg will first
pass through all remove states before purging it, so in the interest of
consistent reporting our progress reporting should do that, too.
|
|
Gbp-Dch: Ignore
|
|
The existing cleanup was happening only for packages which had a status
change (installed -> uninstalled), which is the most frequent but not
the only case – you can e.g. set autobits explicitly with apt-mark.
This would leave stanzas in the states file declaring a package to be
manually installed – which is the default value for a package not
listed at all, so we can just as well drop it from the file.
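For illustration, flipping the bit without any status change (the
package name is an example; the on-disk file is apt's extended_states):

    # neither installs nor removes anything, yet touches the states file
    apt-mark auto foo
    apt-mark manual foo
    # afterwards no stanza claiming 'manual' needs to remain for foo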
|
|
We have supported installing ./foo.deb (and ./foo.dsc for source) for a
while now, but it can be a bit clunky to work with those directly if
you e.g. build packages locally in a 'central' build-area.
The changes files also include hashsums and can be signed, so this can
also be considered an enhancement in terms of security, as a user
"just" has to verify the signature on the changes file rather than
checking all deb files individually in these manual installation
procedures.
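Presumably this makes an invocation along the lines of the existing
./foo.deb support possible (the path is an example):

    # install everything referenced by a local, possibly signed, changes file
    apt-get install ./build-area/foo_1.0-1_amd64.changes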
|
|
If a user explicitly requests the download of arch:all, apt shouldn't
get in the way by performing its detection dance over whether arch:all
packages are (also) in arch:any files or not.
This e.g. allows setting arch=all on a source with such a field (or on
one which doesn't support all at all, but has the arch:all files, like
Debian itself ATM) to get only the arch:all packages from there instead
of behaving like a no-op.
Reported-By: Helmut Grohne on IRC
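In sources.list terms (mirror and suite are examples):

    # fetch only the Architecture: all parts of this source
    deb [arch=all] http://deb.debian.org/debian unstable main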
|
|
Moving code around into some more dedicated methods, no effective code
change itself.
Gbp-Dch: Ignore
|
|
Theoretically it should be enough to change the Dir setting and have
apt pick the dpkg/status file up from that. Also, it should be
consistently affected by RootDir. Neither was really the case though,
so a user had to explicitly set it too (or ignore it and have or not
have expected side effects caused by it).
This commit tries to better guess the location of the dpkg/status file
by setting dir::state::status to a naive "../dpkg/status", except that
this setting would be interpreted as relative to the CWD and not
relative to the dir::state directory. Also, the status file isn't
really relative to the state files apt has in /var/lib/apt/, as evident
if we consider that apt/ could be a symlink to someplace else with
"../dpkg" not affected by it, so what we do here is an explicit replace
on apt/ – similar to how we create directories if it ends in apt/ –
with dpkg/.
As this is a change it has the potential to cause regressions insofar
as the dpkg/status file of the "host" system is no longer used if you
set a "chroot" system via the Dir setting – but that tends to be
intended, and people previously had to painfully figure out that they
needed to set this explicitly, so it now works more in line with how
the other Dir settings work (aka "as expected"). If using the host
status file is really intended, it is in fact easier to set that
explicitly than to set the new "magic" location explicitly.
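Sketched as apt.conf (the paths are examples):

    Dir "/srv/chroot";
    // previously additionally required:
    //   Dir::State::status "/srv/chroot/var/lib/dpkg/status";
    // now derived by replacing the trailing apt/ of the state
    // directory with dpkg/ to locate the status file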
|
|
Very unlikely, but if the parent is /dev/null, the child empty and the
grandchild has a value, we returned /dev/null/value, which doesn't
exist; hardly a problem, but for best operability we should be
consistent in our work and always return /dev/null.
|
|
Added in dpkg in commit 90324cfa942ba23d5d44b28b1087fbd510340502.
|
|
If we don't give a specific error to report up, it is likely that all
errors currently in the error stack are equally important, so reporting
just one could turn out to be confusing, e.g. if name resolution failed
in a SRV record list.
|
|
If we have files in partial/ from a previous invocation or similar,
those could be symlinks created by file:// sources. The code is
expecting only real files though and happily changes owner,
modification times and permissions on the file the symlink points to,
which tend to be files we have no business touching in this way.
Permissions of symlinks shouldn't be changed and changing their owner
is usually pointless too, so just to be sure we pick the easy way out:
use lchown and check for symlinks before chmod/utimes.
Reported-By: Mattia Rizzolo on IRC
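In plain POSIX terms the guarded handling is roughly this sketch
(illustrative, not the exact apt code):

    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>
    #include <utime.h>

    static void fixup_partial_file(char const *const path,
                                   uid_t const uid, gid_t const gid) {
       struct stat st;
       if (lstat(path, &st) != 0)
          return;
       lchown(path, uid, gid);   // safe for symlinks and regular files alike
       if (S_ISLNK(st.st_mode))
          return;                // never touch the target through the link
       chmod(path, 0644);
       utime(path, nullptr);     // modification time only for real files
    }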
|