Age | Commit message | Author |
|
Improve-Upon: 2e2865ae53a65c00dd55a892d5b48458f3110366
Reported-By: Julian Andres Klode
Gbp-Dch: Ignore
|
|
The bug report shows a segfault caused by the code not doing the correct
magical dance to remove an item from inside a queue in all cases. We
could try hard to fix this, but it is actually better and also easier to
perform these checks (which cause instant failure) earlier so that the
items haven't entered the queue(s) yet, which in turn makes cleanup
trivial. The result is that we now fail "too early": if we weren't
careful, download errors would be logged before the download process was
even started. Not a problem for the acquire system, but likely to
confuse users and programs alike if they see the download process
producing errors before apt was technically allowed to do an acquire
(it didn't, so no violation, but it looks like one to the untrained eye).
Closes: 835195
|
|
We basically called ourselves before, creating an endless loop.
Reported-By: clang
|
|
This time it is the formatting of floating point numbers in progress
reporting, with a radix character that is potentially not a dot.
Follow-up to 7303e11ff28f920a6277c159aa46f80c007350bb. Regression of
b58e2c7c56b1416a343e81f9f80cb1f02c128e25 insofar as it exchanged very
affected code with slightly less affected code.
LP: 1611010
|
|
Commit 7ec343309b7bc6001b465c870609b3c570026149 got us most of the way,
but the last mile was botched by having the pending calls in the wrong
order: this way we potentially 'force' dpkg to remove/purge a package
it doesn't want to remove, as another package still depends on it and
the replacement isn't fully installed yet.
So what we do now is a configure before remove and purge (all with
--no-triggers), finishing off with another configure pending call to
take care of the triggers.
Note that in the bug report example our current planner forces dpkg to
remove the package earlier via --force-depends. We could do the same
for the pending calls as well and use it as a workaround, but we want
to do less forcing eventually.
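Roughly, the pending calls now follow this shape (a sketch only; flag
spelling per dpkg(1), the exact invocations apt constructs may differ):
$ dpkg --configure --no-triggers --pending
$ dpkg --remove --no-triggers --pending
$ dpkg --purge --no-triggers --pending
$ dpkg --configure --pending    # finally takes care of the triggers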
Closes: 835094
|
|
This fixes some actual bugs for PROJECT and BZIP2_INCLUDE_DIR.
Gbp-Dch: ignore
|
|
Instead of erroring out when receiving a SIGINT, let the
child deal with it - we'll error out anyway if the child
exits with an error or due to the signal. Also ignore
SIGQUIT, as system() ignores it.
This basically fixes Bug #832593, but: we are running
the hooks via sh -c. Some shells exit with a signal
error even if the command they are executing catches
the signal and exits successfully. So far, this has
been noticed on dash, which, unfortunately, is our
default shell.
Example:
$ cat trap.sh
trap 'echo int' INT; sleep 10; exit 0
$ if dash -c ./trap.sh; then echo OK: $?; else echo FAIL: $?; fi
^Cint
FAIL: 130
$ if mksh -c ./trap.sh; then echo OK: $?; else echo FAIL: $?; fi
^Cint
OK: 0
$ if bash -c ./trap.sh; then echo OK: $?; else echo FAIL: $?; fi
^Cint
OK: 0
|
|
Reported-By: Mattia Rizzolo <mattia@debian.org> in #834629
|
|
We support "./foobar.deb" as a way to install a deb file directly.
Recently .changes files were added. This highlights a problem as you
can't add the changes file without also trying to install all of them.
Now, it could also be handy to add entire Packages/Sources files to
perhaps get a bunch of packages in without installing them all
implicitly.
This commit introduces --with-source which allows to add *.deb, *.changes,
*.dsc, source-dirs, Packages & Sources files (the later can also be
compressed) without also installing them.
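For example (hypothetical invocations; the file and package names are
made up, only the --with-source option is the one introduced here):
$ apt install --with-source ./foo.changes foo
$ apt show --with-source ./Packages.xz bar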
|
|
As seen in cme #833656: if Dir isn't set (yet) we later end up making
absolute a path which was supposed to be absolute already, so if Dir is
empty we assume it to be '/' instead. In practice this is a bug in the
software using libapt, but for maximum compatibility let's explicitly
set the default value here to be safe.
Reported-By: Paul Wise <pabs@debian.org>
Inspired-By: Brendan O'Dea <bod@debian.org>
Fixes-Regression: 475f75506db48a7fa90711fce4ed129f6a14cc9a
Shadows-Bug: #833656
|
|
In af81ab9030229b4ce6cbe28f0f0831d4896fda01 by-hash got implemented as a
special compression type for our usual index files like Packages.
Missing in this scheme was the special .diff/Index index file containing
the info about individual patches for such an index file. By deriving
from the index file class directly we inherit the compression handling
infrastructure and in this way also get by-hash nearly for free.
Closes: #824926
|
|
The URI we later want to modify to get the file via by-hash was unset
in case a file was only available uncompressed (which is usually not
the case), causing an acquire error.
|
|
In af81ab9030229b4ce6cbe28f0f0831d4896fda01 we implement by-hash as a
special compression type, which breaks this filesize setting as the code
is then looking for a foobar.by-hash file. Dealing with this case
slightly differently gets us the intended value. Note that this has no
direct effect as this value will be set in other ways, too, and could
only affect progress reporting.
Gbp-Dch: Ignore
|
|
If, with 9b8034a9fd40b4d05075fda719e61f6eb4c45678, the Release files are
served from a partial mirror we will end up getting 404s for some of the
indexes. Instead of giving up, we ignore our same-redirection-mirror
constraint and ask the redirection service, as a potential hashsum
mismatch is better than the certain 404 error.
|
|
Now that we have the redirection loop checker centrally in our items we
can also use it to prevent internal redirections from looping due to
bugs, as in a few instances we get into the business of rewriting the
URI we will query ourselves because we predict we would see such a
redirect anyway. Our code has no bugs of course, hence no practical
difference. ;)
Gbp-Dch: Ignore
|
|
The failure handling frequently changes URI & Description of the failed
item to try a slightly different combination which might work, but the
logging of the failure happens only afterwards, as the same failure
handling decides whether this is a critical error or not, so we need a
backup of the old values here instead of the potentially new content.
A purely cosmetic issue, but it can still be confusing for humans.
|
|
Since its introduction in 2010 DirectoryExists was always marked with
this attribute, but for no real reason. Arguably a check for the
existence of a directory is not modifying global state, so theoretically
this shouldn't be a problem. It is wrong from a logical point of view
though, as between two calls the directory could be created, so the
promise we made to the compiler that it could remove the second call
would be wrong: API-wise it is incorrect.
It's a bit mysterious that this is only observable on ppc64el and can be
fixed by reordering code ever so slightly, but in the end it's more our
fault for adding this attribute than the compiler's fault for doing
something silly based on the attribute.
LP: 1473674
|
|
When checking if a file is empty, we forgot to check that
fstat() actually worked.
|
|
We use clock() as a very cheap way of getting a "random" value, but the
manpage warns that this could return -1, so we should deal with that.
Additionally, e.g. on hurd-i386 the value increases only slowly – too
slow for our fast-running tests for randomness, hence producing the same
range in both samples. We therefore introduce a simple busy-wait loop
(as clock() counts processor time used by the program) in the test,
which delays the second sample just enough to make our randomness a bit
more predictable.
|
|
Comparing floating point numbers is always fun, and in this instance a
9 < 9.0 is "somehow" true on hurd-i386, making the tests fail by
reporting that too much progress was achieved. A bit mysterious, but
with some rework we can use code which avoids dealing with the floats in
this way entirely and make our testcases happy.
|
|
|
|
|
|
With b4450f1dd6bca537e60406b2383ab154a3e1485f we dropped what we
calculated here later on, and now that we don't need it in the meantime
either we can just skip the busy work by default and expect dpkg to do
the right thing, also dropping our little "last explicit configures"
removal trick introduced in b4450f1dd6bca537e60406b2383ab154a3e1485f.
This enables the last of a bunch of previously experimental options;
some of them still exist, but they are very special and hence not really
worth documenting anymore (especially as the documentation would need to
be rewritten entirely now), which is why it is nearly completely
dropped.
The order of configuration stanzas in the simulation changes slightly as
the code no longer concerns itself with finding the 'right' order; any
order is valid anyhow as long as the entire set happens in the same
call.
|
|
If a planner leaves actions to be figured out by dpkg in pending calls,
these actions aren't mentioned in a simulation. While that might be a
good thing for debugging, it would be a change in behavior and,
especially if a planner avoids explicit removals, could be confusing for
users. As such we perform the same 'trick' as in the dpkg
implementation by performing explicitly what would be done by the
pending calls.
To save us some work and avoid desyncs we perform a layer violation by
using deb/ code in the generic simulation – and further we perform an
ugly dynamic_cast to avoid breaking the ABI for nothing; aptitude is the
only other user of the simulation class according to codesearch.d.n and
for that our little trick works. It just doesn't work if you happen to
extend pkgSimulate or otherwise manage to call the protected Go methods
directly – which isn't very realistic/practical.
|
|
The user has to approve the removal of a crossgraded package as it
might be necessary to remove it (temporarily) in the process, but in
most cases we can happily avoid it and let dpkg unpack over it, skipping
the remove. This has some effects on progress reporting and on how we
deal with selections though, which makes this a tiny bit complicated.
|
|
If the URI had no password, the username was ignored.
|
|
To prevent accidents like adding http sources while using tor+http it
can make sense to allow disabling methods. It might even make sense to
allow "redirections" and adding "symlinked" methods via configuration.
This could e.g. allow using different options for certain sources by
adding and configuring a "virtual" new method which picks up the config
based on the name it was called with, like e.g. http does if called as
tor+http.
|
|
Socks support is a requested feature insofar as the internet actually
believes Acquire::socks::Proxy exists. It doesn't, and this commit isn't
adding it as that isn't how our configuration works, but it allows
Acquire::http::Proxy="socks5h://…". The HTTPS method was changed
already to support socks proxies (all versions) via curl. This commit
implements only SOCKS5 (RFC1928) with no auth or pass&user auth
(RFC1929), but not GSSAPI which is required by the RFC. The 'h' in the
protocol name further indicates that DNS resolution is delegated to the
socks proxy rather than performed locally.
The implementation works and was tested with Tor as the socks proxy,
for which implementing only socks5h can actually be considered a
feature.
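For example, to route http through a local Tor instance (9050 is Tor's
default SocksPort; host and port here are only illustrative):
$ apt -o Acquire::http::Proxy=socks5h://127.0.0.1:9050 update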
Closes: 744934
|
|
Having the detection handled in specific (http) workers means that a
redirection loop over different hostnames isn't detected. It's also not
a good idea to have this implemented in each method independently, even
if it would work.
|
|
apt-transports not shipped in apt directly are usually named
apt-transport-% with % being what is in the name of the transport.
tor additionally introduced aliases via %+something, which isn't a bad
idea, so we strip the +something part from the method name before
suggesting the installation of an apt-transport-% package.
This saves us the maintenance of a list of existing transports, which
would create a two-class system of known and unknown transports that
would be quite arbitrary and is unfriendly to backports.
|
|
Same reason and implementation as for configure.
|
|
A planner might not explicitly configure all packages, but we need to
know all packages which will be configured for progress reporting and to
tell the hook scripts about them as they rely on this for their own
functionality.
|
|
If we want a package to be purged from the system, tell dpkg in the
ordering (if it has to touch it explicitly) only to remove it and cover
the purging of the config files at the end with a --purge --pending
call. That should help moving conffiles around between packages
correctly even if the user is purging packages directly in big actions
like dist-upgrades involving many packages.
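In dpkg terms the idea is roughly (a simplified sketch; the package
name is a placeholder, not the literal invocations apt builds):
$ dpkg --remove pkg-to-purge    # during the ordered run, if dpkg must touch it
$ dpkg --purge --pending        # at the very end, to drop the conffiles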
|
|
Delaying the execution of triggers was implemented a long while ago;
now, with relatively good progress reporting involving triggers, is a
good time to finally try delaying triggers across dpkg invocations by
default.
Note: The bug report also talks about 'smarter' configuration, which is
a much bigger topic and approached from multiple directions, but doesn't
really involve triggers per se, so considering it decoupled should help
in getting it done…
Closes: #626599
|
|
Telling dpkg early on that we are going to remove these packages later
helps it with auto-deconfiguration decisions, and it's another area
where a planner can ignore the nitty-gritty details and let dpkg decide
the course of action if there are no special requirements.
|
|
dpkg decides certain things on its own based on selections, and
especially if we want to call --pending on purge/remove actions, we need
to ensure a clean slate, or we surprise the user by removing packages
the user didn't allow us to remove in this run (the selection might be
an overarching plan for the not-yet "future").
Ideally dpkg would have some kind of temporal selection interface for
this case, but it doesn't, so we make the selections temporal ourselves,
with the risk of losing state if we don't manage to restore them.
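Done by hand, the 'temporal selection' dance looks roughly like this
(a sketch only; apt performs the equivalent internally, 'foo' is a
placeholder package):
$ dpkg --get-selections > /tmp/selections.bak     # remember the current plan
$ echo 'foo deinstall' | dpkg --set-selections    # allow acting on foo this run
$ dpkg --remove --pending
$ dpkg --set-selections < /tmp/selections.bak     # restore the overarching plan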
|
|
Having long commandlines split into two is a huge problem if it
happens, and additionally, if we want to introduce planners which
perform less micromanagement, it's a good idea to leave the details for
dpkg to decide. In practice this doesn't work unconditionally yet as a
bug is hiding in the ordering code of dpkg, but it works if apt imposes
its ordering, so for now this commit at least solves the first problem.
|
|
APT (usually) knows which packages are essential, so we can avoid
passing this force flag to dpkg unconditionally if the user hasn't
chosen a non-default essential handling that obscures the information.
|
|
|
|
Bye, bye, old friend.
|
|
This was dropped in autotools as I found no use of the HAVE_PTSNAME_R
macro. Turns out it was typoed as HAVE_PTS_NAME_R. Fix the #ifdef and
add checks to CMake for it.
Closes: #833674
|
|
If we receive an interrupt, set a flag instead of aborting immediately
without waiting for the child. Once the child has exited, exit with an
error if the interrupted flag is set.
Closes: #832593
|
|
Introduce an initial CMake build system. This build system can build a
fully working apt system without translation or documentation.
The FindBerkelyDB module is from kdelibs, with some small adjustments
to also look in db5 directories.
Initial work on this CMake build system started in 2009, and was
resumed in August 2016.
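Building with it follows the usual out-of-tree CMake pattern (the build
directory name is just an example):
$ mkdir build && cd build
$ cmake ..
$ make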
|
|
Create a temporary configuration file with a dump of our
configuration and pass that to apt-key.
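The effect is roughly equivalent to doing this by hand (illustrative
only; it assumes apt-key picks the configuration up via apt-config,
which honours the APT_CONFIG environment variable):
$ apt-config dump > /tmp/apt-key.conf
$ APT_CONFIG=/tmp/apt-key.conf apt-key list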
LP: #1607283
|
|
|
|
Create a local exiter object which cleans up files on exit.
|
|
Previously, when data could be created and sig not, we would unlink
sig, not data (and vice versa).
|
|
Follow-up of b58e2c7c56b1416a343e81f9f80cb1f02c128e25.
Still a regression of sorts of 8b79c94af7f7cf2e5e5342294bc6e5a908cacabf.
Closes: 832044
|
|
If a solver/planner exits before apt is done writing we will generate
write errors. Solvers like 'dump' can be pretty quick in failing but
produce a valid EDSP error report which apt should read, parse and
display instead of just discarding it, even though we had write errors.
|
|
There is no point in trying to perform Write/Read on a FileFd which
already failed as they aren't going to work as expected, so we should
make sure that they fail early on and hard.
|