Age | Commit message | Author |
|
A user has been able to specify multiple fingerprints for a while now, so it seems
counter-intuitive to support only one keyring, especially as this isn't
really checked or enforced, and while unlikely, mixtures of both should
work properly, too, instead of resulting in rather random behaviour.
|
|
If we limit a file to be signed by a certain key, it should usually
also accept being signed by any of this key's subkeys instead of
requiring each subkey to be listed explicitly. If the latter is really
wanted, we now support the same syntax as gpg: appending an
exclamation mark to the end of the fingerprint forces no mapping.
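A minimal sources.list sketch of the two behaviours, with a made-up repository and a placeholder fingerprint:
    # accept this key or any of its subkeys
    deb [signed-by=0123456789ABCDEF0123456789ABCDEF01234567] https://deb.example.org/debian stable main
    # a trailing '!' forces exactly this (sub)key, as in gpg
    deb [signed-by=0123456789ABCDEF0123456789ABCDEF01234567!] https://deb.example.org/debian stable main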
|
|
gpg's DETAILS documentation file declares that GOODSIG could report a keyid
or a fingerprint since gpg2, but for the time being it is still keyid
only. Who knows if that will ever change, as it feels like an interface
break with dangerous security implications, but let's be better safe than
sorry, especially as the code dealing with signed-by keyids is already
prepared for this. That code is still rewritten so that all of it uses
the same code for this type of problem.
|
|
Using the time of day for this is slightly wrong, just like it is for
progress, only less visible.
|
|
The Stats method isn't called anywhere and was partly commented out before,
but we kept updating the time for it – let's avoid this pointless busywork.
Gbp-Dch: Ignore
|
|
120s is an insanely high default timeout; lower it to 30s
to make things a bit nicer.
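Users on very slow links can restore the old patience through the existing Acquire::http::Timeout option; a one-line apt.conf sketch (the value is illustrative):
    Acquire::http::Timeout "120";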
|
|
Correctly register IP addresses from a timed-out select() call as
bad addresses so we do not try them again.
LP: #1766542
|
|
Closes: #898886
|
|
This shouldn't make a practical difference for most people, but for edge
cases it avoids DNS lookups and additionally prevents us from performing
unneeded SRV requests, too.
|
|
Prompted-by: Jakub Wilk <jwilk@debian.org>
|
|
Closes: #891644
|
|
LP: #1732030
Closes: #890489
Fixes meefik/linuxdeploy#869
|
|
It is sad that we can't wrap the cdrom method tighter at the moment, but
due to its ability to mount drives into arbitrary places via an external
suid binary we can't really do a lot better for now.
What we can do is pass the options through the configuration space, as
is standard in the other methods, instead of doing it in main(), which
is supposed to be mere boilerplate rather than actually doing something.
Gbp-Dch: Ignore
|
|
A mirror list we get from a non-local source like http shouldn't be
able to include e.g. file sources, and even with other online sources we
need to be careful: they also shouldn't include prefixed methods like
'tor+http'. So apply magic based on how the method is called:
mirror+file will be allowed to redirect to any source, while
tor+mirror+file allows all, but sends them to their tor+ variant.
|
|
The old implementation used to construct a query string including the
release(s) the mirrorlist should be for, but that is hard to deal with,
as it rules out including partial mirrors in the list, and it turns out
that nobody ended up implementing it on the server side anyway.
Controlling this on the client side allows partial mirrors to be
included and as a bonus prevents us from telling the mirrorlist server
this (rather generic) user information.
|
|
Allowing a method to request work from other methods is a powerful
capability which could be misused or exploited, so to limit the
surface slightly, let methods opt in to this capability on startup.
|
|
Embedding an entire acquire stack and HTTP logic in the mirror method
made it rather heavyweight and fragile. This reimplementation goes the
other way by doing only the bare minimum in the method itself and instead
redirecting the actual download of files to their proper methods.
The reimplementation drops the (in the real world) unused query-string
feature, as it isn't really implementable in the new architecture.
|
|
Commit 47c0bdc310c8cd62374ca6e6bb456dd183bdfc07 ("report transient
errors as transient error") accidentally changed some connection
failures to become non-transient, because the results of the error
checks were being ignored and a fatal error was then returned if any
error was pending - even if that error was trivial.
After the merge of pu/happy-eyeballs2a this became a lot clearer
and easy to fix.
Gbp-Dch: ignore
Regression-Of: 47c0bdc310c8cd62374ca6e6bb456dd183bdfc07
|
|
Try establishing connections in alternating address families at
rapid intervals of 250 ms, adding more connections to the wait
list until one succeeds (RFC 8305, Happy Eyeballs version 2).
It is important that WaitAndCheckErrors() waits until it has
a successful connection, a timeout, or all connections failed
- otherwise the timing between tries might be wrong, and the
final long wait might exit early because one connection failed
without the others having been tried. Timing-wise, this only works
correctly on Linux, as select() counts the timeout down there, but
we rely on that in some other places too, so this is not the time
to fix that.
Timeouts are only reported in the final long wait - the short
inner waits are expected to time out more often, and multiple
times, so we do not want to report them.
Closes: #668948
LP: #1308200
Gbp-Dch: paragraph
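A heavily simplified sketch of the two building blocks (not apt's actual code; names are illustrative): the caller starts one non-blocking attempt per 250 ms interval, and every wait polls all attempts started so far.
    #include <algorithm>
    #include <netdb.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <vector>

    // Start a non-blocking connect() for one resolved address.
    static int StartAttempt(struct addrinfo const *Addr)
    {
       int const Fd = socket(Addr->ai_family, Addr->ai_socktype | SOCK_NONBLOCK, Addr->ai_protocol);
       if (Fd != -1)
          connect(Fd, Addr->ai_addr, Addr->ai_addrlen); // expected: -1 with EINPROGRESS
       return Fd;
    }

    // Wait up to TimeoutMs for any pending attempt to finish; returns the
    // connected fd, or -1 if the interval elapsed (the caller then starts
    // the next attempt while keeping the earlier ones in the race).
    static int WaitForAny(std::vector<int> const &Pending, int const TimeoutMs)
    {
       fd_set Write;
       FD_ZERO(&Write);
       int MaxFd = -1;
       for (int const Fd : Pending)
       {
          FD_SET(Fd, &Write);
          MaxFd = std::max(MaxFd, Fd);
       }
       struct timeval Tv = { TimeoutMs / 1000, (TimeoutMs % 1000) * 1000 };
       if (select(MaxFd + 1, nullptr, &Write, nullptr, &Tv) <= 0)
          return -1;
       for (int const Fd : Pending)
       {
          int Err = 0;
          socklen_t Len = sizeof(Err);
          if (FD_ISSET(Fd, &Write) && getsockopt(Fd, SOL_SOCKET, SO_ERROR, &Err, &Len) == 0 && Err == 0)
             return Fd; // first successfully connected attempt wins
       }
       return -1; // only failed attempts became ready; a full version drops them
    }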
|
|
Extracting the error checking method allows us to reuse it
in different places, so we can move the waiting and checking
out of DoConnect() eventually.
Gbp-Dch: ignore
|
|
There's no real point in storing the IP address while resolving
it - failure messages include the IP address in any case. Do this
when picking the connection for actual use instead.
|
|
This struct holds information about a connection attempt, like
the addrinfo, the resolved address, the fd for the connection,
and so on.
Gbp-Dch: ignore
|
|
As a first step to implementing Happy Eyeballs version 2, we
need to order the list of hosts getaddrinfo() gave us so that it
alternates between the preferred and the other address family.
RFC: https://tools.ietf.org/html/rfc8305
Gbp-Dch: ignore
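A minimal sketch of such an interleaving over a resolved addrinfo chain (names are illustrative):
    #include <cstddef>
    #include <netdb.h>
    #include <vector>

    // Reorder a getaddrinfo() result so the list alternates between the
    // preferred family and the other one, keeping the resolver's relative
    // order within each family (RFC 8305, section 4).
    static std::vector<struct addrinfo *> InterleaveByFamily(struct addrinfo *const List, int const PreferredFamily)
    {
       std::vector<struct addrinfo *> Preferred, Other, Result;
       for (struct addrinfo *I = List; I != nullptr; I = I->ai_next)
          (I->ai_family == PreferredFamily ? Preferred : Other).push_back(I);
       for (std::size_t i = 0; i < Preferred.size() || i < Other.size(); ++i)
       {
          if (i < Preferred.size())
             Result.push_back(Preferred[i]);
          if (i < Other.size())
             Result.push_back(Other[i]);
       }
       return Result;
    }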
|
|
The Fail method for acquire methods has a boolean parameter indicating
the transient nature of a reported error. The problem with this is that
Fail is called very late, at a point where it is no longer easy to
identify whether an error is indeed transient, so some calls set the
flag and some didn't, and the acquire system would later mostly ignore
the transient flag anyway and guess by using the FailReason instead.
By introducing a tri-state enum we can pass the information about fatal or
transient errors through the call stack to generate the correct failures.
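A minimal sketch of what such a tri-state can look like (identifiers are illustrative):
    // Severity of a failure, threaded through the call stack so the
    // final Fail() call no longer has to guess from context.
    enum class ResultState
    {
       TRANSIENT_ERROR, // e.g. timeout, connection reset: worth retrying
       FATAL_ERROR,     // e.g. hash mismatch: retrying cannot help
       SUCCESSFUL
    };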
|
|
If retries are enabled, only transient errors are retried, which are very
few errors. At least for some HTTP codes it could be beneficial to retry
them too, so adding them seems like a good idea, if only to be more
consistent in what we report.
|
|
The casts are useless, but the reports show some places where we can
actually improve the code by replacing them with better alternatives,
like converting whatever int type into a string instead of casting to a
specific one which might in the future be too small.
Reported-By: gcc -Wuseless-cast
|
|
We accidentally regressed here in 1.5 when replacing the https
method.
|
|
qemu-user passes prctl()-based seccomp through to the kernel
unmodified. That's bad, as it blocks the wrong syscalls.
We ignored EFAULT, which fixed the problem for targets with pointer
sizes different from the host's, but that was a bad hack. In order to
identify qemu we can rely on the fact that qemu-user prints its version
and exits with 0 if QEMU_VERSION is set to an unsupported value. If we
run a command that should fail in such an environment and it exits
with 0, then we are running under qemu-user.
apt-helper is an obvious command to run: the tests ensure it exits
with 1, and it only prints usage information. We also could not use
/bin/false, because apt might just as well be from a foreign architecture
while /bin/false is not.
Closes: #881519
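A simplified sketch of the detection (the helper path and the QEMU_VERSION value are illustrative):
    #include <cstdlib>
    #include <sys/wait.h>
    #include <unistd.h>

    static bool RunningInQemuUser()
    {
       pid_t const Child = fork();
       if (Child == 0)
       {
          // qemu-user reacts to this variable itself and never runs the binary
          setenv("QEMU_VERSION", "unsupported", 1);
          execl("/usr/lib/apt/apt-helper", "apt-helper", static_cast<char *>(nullptr));
          _exit(1); // exec failed: same result as the failing helper
       }
       int Status = 0;
       waitpid(Child, &Status, 0);
       // the helper without arguments exits 1; an exit code of 0 means
       // qemu-user intercepted the run and merely printed its version
       return WIFEXITED(Status) && WEXITSTATUS(Status) == 0;
    }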
|
|
We sleep in http.cc, so we should allow the sleeping syscalls.
|
|
The store method replaced them all; the symlinks were mostly
for partial upgrades and the like, so they should not be needed
any longer.
|
|
Sorting apparently calls sysconf() which calls sysinfo() to get
free pages or whatever.
Closes: #879814, #879826
|
|
This should help debugging crashes. The signal handler is a C++11
lambda, yay! Special care has been taken to only use signal handler
-safe functions inside there.
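A minimal sketch of the idea: a capture-less lambda converts to a plain function pointer, and only async-signal-safe functions like write() and _exit() are used inside.
    #include <csignal>
    #include <unistd.h>

    static void InstallCrashHandler()
    {
       // A capture-less lambda converts to a function pointer, so it can
       // be installed directly as the handler.
       std::signal(SIGSEGV, [](int)
       {
          char const Msg[] = "E: Crash, please report\n";
          write(STDERR_FILENO, Msg, sizeof(Msg) - 1);
          _exit(100);
       });
    }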
|
|
If seccomp is disabled, we fall back to running without it. Qemu fails
in the seccomp() call, returning ENOSYS, and libseccomp then falls back
to prctl() without adjusting the pointer, causing the EFAULT. I hope
qemu gets fixed at some point to return EINVAL for seccomp via
prctl.
Bug-Qemu: https://bugs.launchpad.net/qemu/+bug/1726394
|
|
If FAKED_MODE is set, enable SYSV IPC so we don't crash when
running in fakeroot.
Closes: #879662
|
|
Use OBJECT libraries for http and connect stuff, and move the
seccomp link expression into a global link_libraries() call.
This also fixes a bug where only the http target pulled in
the gnutls header arguments despite gnutls being used in
connect.cc, and thus by mirror and ftp as well.
Adjust translation support to ignore TARGET_OBJECTS sources
and add the OBJECT libraries to the translated files.
|
|
statx was introduced in Linux 4.11, so the build fails on stretch if
we just use it unconditionally.
|
|
These are a few overlooked syscalls. Also add readv(), writev(),
renameat2(), and statx() in case libc uses them.
Gbp-Dch: ignore
|
|
This reduces the number of syscalls to about 140 from about
350 or so, significantly reducing security risks.
Also change prepare-release to ignore the architecture lists
in the build dependencies when generating the build-depends
package for travis.
We might want to clean up things a bit more and/or move it
somewhere else.
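A minimal sketch of building such an allowlist with libseccomp (link with -lseccomp; the listed syscalls are a tiny illustrative subset):
    #include <seccomp.h>

    static bool ApplySandbox()
    {
       // everything not explicitly allowed traps, so a blocked syscall
       // can be turned into a readable error instead of a silent kill
       scmp_filter_ctx Ctx = seccomp_init(SCMP_ACT_TRAP);
       if (Ctx == nullptr)
          return false;
       int const Allowed[] = {
          SCMP_SYS(read), SCMP_SYS(write), SCMP_SYS(close),
          SCMP_SYS(select), SCMP_SYS(exit_group),
       };
       bool Okay = true;
       for (int const Syscall : Allowed)
          Okay = Okay && seccomp_rule_add(Ctx, SCMP_ACT_ALLOW, Syscall, 0) == 0;
       Okay = Okay && seccomp_load(Ctx) == 0;
       seccomp_release(Ctx);
       return Okay;
    }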
|
|
This was a leftover from the autodetect move.
Gbp-Dch: ignore
|
|
Sandboxing was turned off because we called pkgAcqMethod's
Configuration() instead of aptMethod's.
|
|
This avoids running the Proxy-Auto-Detect script inside the
untrusted (well, less trusted for now) sandbox. This will allow
us to restrict the http method from fork()ing or exec()ing via
seccomp.
|
|
APT connects just fine to any .onion address given; only if the connect
somehow fails does it perform sanity checks, which in this case means
checking the length, as that is well defined for onion addresses. Since
the strings are otherwise arbitrary, a user typing them can easily
mistype, and apt can be slightly more helpful in figuring that out by
saying that the onion address doesn't have the required length.
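A minimal sketch of such a length check, assuming the well-known 16-character (v2) and 56-character (v3) service names:
    #include <string>

    static bool PlausibleOnionAddress(std::string const &Host)
    {
       // .onion service names have a fixed length in front of the
       // suffix: 16 characters for v2 services, 56 for v3
       std::string const Suffix = ".onion";
       if (Host.size() <= Suffix.size() ||
           Host.compare(Host.size() - Suffix.size(), Suffix.size(), Suffix) != 0)
          return false;
       auto const NameLen = Host.size() - Suffix.size();
       return NameLen == 16 || NameLen == 56;
    }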
|
|
This automatically removes any old apt-transport-https, as
apt now Breaks it unversioned.
|
|
Opening the file before we drop privileges in the methods allows us to
avoid chowning in the acquire main process, which can apply to the wrong
file (imagine Binary-scoped settings) and surprises users as their
permission setup is overridden.
There are no security benefits: as the file is open, an evil method
could read its contents just as before, but it isn't any worse than
before either, and we avoid permission problems in this setup.
|
|
For HTTP CONNECT we recently started looking into the auth.conf file for
login information, so we should really look in that file for all proxies,
as the argument is the same as for sources entries and it is easier to
document (especially as the manpage already mentions it as supported).
|
|
We have had support for a netrc-like auth.conf file since 0.7.25 (closing
518473), but it was never documented in apt that it even exists, and
netrc seems to have fallen out of usage, as a manpage for it no longer
exists, making the feature even more arcane.
On top of that the code was a bit of a mess (as it is written in C style),
and as a result the matching of machine tokens to URIs was also a bit
strange, checking for less specific matches (= without path) first.
We now do a single pass over the stanzas.
In practice early adopters of the undocumented implementation will not
really notice the differences, and the 'new' behaviour is simpler to
document and more usual for an apt user.
Closes: #811181
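For reference, a stanza in the netrc-like format looks like this (host and credentials are made up; a machine token may include a path to match more specifically):
    machine deb.example.org/debian
    login apt
    password hunter2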
|
|
Failing on too much data is good, but we can do better by checking for
exact filesizes: with hashsums we know how large a file should be, so
if we get a file whose size we do not expect we can drop it directly,
regardless of whether it is larger or smaller than expected. This
catches, a lot sooner, most cases which would otherwise end up as
hashsum errors later.
|
|
We tend to operate on rather large static files, which means we usually
get Content-Length information from the server. If we combine this
information with the filesize we are expecting (factoring in pipelining),
we can avoid reading a bunch of data we would end up rejecting anyhow,
by just closing the connection, saving bandwidth and time for both the
server and the client.
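The check itself is then trivial; a sketch with illustrative names:
    // Decide before transferring the body whether the announced size
    // can possibly be correct; ExpectedSize comes from hashsum metadata.
    static bool SizeIsAcceptable(unsigned long long const ContentLength,
                                 unsigned long long const ExpectedSize)
    {
       // 0 stands for "unknown" on either side, so we cannot reject early
       if (ExpectedSize == 0 || ContentLength == 0)
          return true;
       // a mismatch would surface later as a hashsum error anyhow, so
       // close the connection instead of reading the data
       return ContentLength == ExpectedSize;
    }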
|
|
It is highly unlikely to encounter fields which start with HTTP in
practice, but we should really be a bit more restrictive here.
|
|
This makes it easier to see which header includes what.
The changes were done by running
git grep -l '#\s*include' \
| grep -E '.(cc|h)$' \
| xargs sed -i -E 's/(^\s*)#(\s*)include/\1#\2 include/'
To modify all include lines by adding a space, and then running
./git-clang-format.sh.
|