For a long while now the https method has implemented a hardcoded
fallback to the same options as http, which, while it works, is rather
inflexible if we want to allow the methods to be used under another
name to change their behavior slightly, like apt-transport-tor does
with https – most of that diff being s#https#tor#g, which then fails to
do the full circle fallthrough tor -> https -> http for https sources.
With this config infrastructure this can now be implemented.
|
|
cURL, which backs our https implementation, can handle redirects on
its own, but by dealing with them ourselves we gain finer control over
which redirections will be performed (we don't like https → http) and
by whom, so that redirections to other hosts correctly spawn a new
https method to deal with them instead of letting the current one
handle everything.
|
|
Having the detection handled in specific (http) workers means that a
redirection loop over different hostnames isn't detected. It's also
not a good idea to implement this in each method independently, even
if it would work.
|
|
Closes: #623443
|
|
|
|
Bye, bye, old friend.
|
|
Introduce an initial CMake buildsystem. This build system can build a
fully working apt system without translations or documentation.
The FindBerkeleyDB module is from kdelibs, with some small adjustments
to also look in db5 directories.
Initial work on this CMake build system started in 2009 and was
resumed in August 2016.
|
|
Follow-up to b58e2c7c56b1416a343e81f9f80cb1f02c128e25.
Still a regression of sorts from 8b79c94af7f7cf2e5e5342294bc6e5a908cacabf.
Closes: 832044
|
|
If another file in the transaction fails and hence dooms the
transaction, we can end up in a situation in which a -patched file
(= rred writes the result of the patching to it) remains in the
partial/ directory.
The next apt call will perform the rred patching again and write its
result again to the -patched file, but instead of starting with an
empty file as intended it will overwrite the content previously in the
file. That has the same result if the new content happens to be longer
than the old content, but if it isn't, parts of the old content remain
in the file. The file will still pass verification, as the new content
written to it matches the hashes, and if the entire transaction passes
the file will be moved to the lists/ directory, where it might or
might not trigger errors depending on whether the old content which
remained forms a valid file together with the new content.
This has no real security implications as no untrusted data is
involved: The old content consists of a base file which passed
verification and a bunch of patches which all passed multiple
verifications as well, so the old content isn't controllable by an
attacker and the new one isn't either (as the new content alone passes
verification). So the best an attacker can do is let the user run into
the same issue as in the report.
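A minimal sketch of the difference in plain POSIX terms (path and fix
are illustrative only, not the actual apt code):

    #include <fcntl.h>
    #include <unistd.h>

    int main()
    {
       // Without O_TRUNC, old bytes beyond the newly written length
       // survive – exactly how stale content lingered in the -patched file:
       int stale = open("partial/example-patched", O_WRONLY | O_CREAT, 0644);
       close(stale);
       // With O_TRUNC the file holds only what this patching run writes:
       int fresh = open("partial/example-patched", O_WRONLY | O_CREAT | O_TRUNC, 0644);
       close(fresh);
    }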
Closes: #831762
|
|
The rewrite in 742f67eaede80d2f9b3631d8697ebd63b8f95427 is based on the
assumption that the pipeline will always be at least one item short each
time it is called, but the logs in #832113 suggest that this isn't
always the case. I fail to see how at the moment, but the old
implementation had this behavior, so restoring it can't really hurt, can
it?
|
|
Also fix the message itself to mention the correct option name, as
noticed in #832113.
|
|
We read the entire input file we want to patch anyhow, so we can also
calculate the hash for that file and compare it with what we expected
it to be.
Note that this isn't really a security improvement as a) the file we
patch is trusted & b) if the input is incorrect, the result will
hardly match, so this is just for failing slightly earlier with a more
relevant error message (although, in the case of rred, the error is
ignored and a complete download is attempted instead).
|
|
Instead of only trying the first host we get via SRV, we try them all
as we are supposed to, and if that isn't working we try to connect to
the host itself as if we hadn't seen any SRV records. This was already
the intent of the old code, but it failed to hide earlier problems
from the next call, which would then unconditionally fail, resulting
in an all-around failure to connect. With proper stacking we can also
keep the error messages of each call around (and in the order tried),
so if the entire connection fails we can report all the things we have
tried, while we discard the entire stack if something works out in the
end.
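The intended stacking, as a compilable sketch (TryConnect and all
names are made-up stand-ins for the real connect code):

    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical stand-in for the real per-target connect attempt.
    static bool TryConnect(std::string const &Target, std::string &Err)
    {
       Err = "Could not connect to " + Target;   // pretend the attempt failed
       return false;
    }

    // Try all SRV targets in order, then the host itself; keep the error
    // of every attempt and report them all only if nothing worked out.
    static bool ConnectToHost(std::vector<std::string> Targets, std::string const &Host)
    {
       Targets.push_back(Host);       // fallback as if no SRV records existed
       std::vector<std::string> Errors;
       for (auto const &T : Targets)
       {
          std::string Err;
          if (TryConnect(T, Err))
             return true;             // success: discard the error stack
          Errors.push_back(Err);      // keep errors in the order tried
       }
       for (auto const &Err : Errors) // total failure: report everything
          std::cerr << Err << '\n';
       return false;
    }

    int main()
    {
       return ConnectToHost({"srv1.example", "srv2.example"}, "example.org") ? 0 : 1;
    }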
|
|
If we don't give a specific error to report up, it is likely that all
errors currently in the error stack are equally important, so
reporting just one could turn out to be confusing, e.g. if name
resolution failed in a SRV record list.
|
|
If we have files in partial/ from a previous invocation or similar,
those could be symlinks created by file:// sources. The code was
expecting only real files though and happily changed owner,
modification times and permissions on the file the symlink points to,
which tend to be files we have no business touching in this way.
Permissions of symlinks shouldn't be changed, and changing the owner
is usually pointless too, but just to be sure we pick the easy way
out: use lchown and check for symlinks before chmod/utimes.
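A sketch of the described handling with plain POSIX calls (path and
ownership values are made up):

    #include <sys/stat.h>
    #include <sys/time.h>
    #include <unistd.h>

    int main()
    {
       char const *Path = "partial/example";  // hypothetical partial file
       struct stat St;
       if (lstat(Path, &St) != 0)             // lstat doesn't follow symlinks
          return 1;
       lchown(Path, 1000, 1000);              // acts on the link itself, not the target
       if (S_ISLNK(St.st_mode) == 0)          // only touch real files beyond that
       {
          chmod(Path, 0644);
          struct timeval Times[2] = {};       // dummy timestamps for the sketch
          utimes(Path, Times);
       }
       return 0;
    }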
Reported-By: Mattia Rizzolo on IRC
|
|
methods/http.cc:640:13: runtime error: reference binding to null pointer
of type 'struct FileFd'
This reference is never used in the cases in which it is a nullptr,
so the practical difference is non-existent, but it's a bug still.
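The bug class, reduced to a made-up example that -fsanitize=undefined
flags in the same way:

    struct FileFd {};

    static void Process(FileFd &File, bool UseFile)
    {
       if (UseFile == false)
          return;             // the reference is never used on this path …
       (void)File;
    }

    int main()
    {
       FileFd *File = nullptr;
       // … but binding *File to a reference is already undefined
       // behaviour, before Process() even decides not to use it:
       Process(*File, false);
    }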
Reported-By: gcc -fsanitize=undefined
|
|
All apt versions support numeric as well as 3-character timezones just
fine and it's actually hard to write code which doesn't "accidentally"
accept them. So why change? Documenting the Date/Valid-Until fields in
the Release file is easy to do in terms of referencing the datetime
format used e.g. in Debian changelogs (policy §4.4). This format
specifies only the numeric timezones though, not the nowadays obsolete
3-character ones, so in the interest of least surprise we should use
the same format, even though it carries a small risk of regression in
other clients (which encounter repositories created with
apt-ftparchive).
In case it really regresses in practice, the hidden option
-o APT::FTPArchive::Release::NumericTimezone=0
can be used to go back to good old UTC as timezone.
The EDSP and EIPP protocols use this 'new' format; the text interface
used to communicate with the acquire methods does not, for
compatibility reasons, even if none of our methods would be affected
and I doubt any other would (in these instances the timezone is 'GMT'
as that is what HTTP/1.1 requires). Note that this is only true for
apt talking to methods; (libapt-based) methods talking to apt will
respond with the 'new' format. It is therefore strongly advised to
support both in method input as well.
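For illustration, the same Release timestamp in both spellings (the
date itself is made up):

    Date: Sat, 23 Jul 2016 12:32:22 UTC      (old: 3-character timezone)
    Date: Sat, 23 Jul 2016 12:32:22 +0000    (new: numeric, as in Debian changelogs)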
|
|
Seen in #828011: if we fail to parse a header field like Last-Modified
we end up interpreting the data as response headers for subsequent
requests in case we don't rotate to a new server in DNS rotation.
|
|
wu-ftpd sends the response without parens, whereas we expect
them.
I did not test the patch, but it should work. I added another
return true if Pos is still npos after the second find to make
sure we don't add npos to the string.
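A compilable sketch of that guard (names are made up; the real parsing
lives in the ftp method):

    #include <string>

    // Extract the text between parens; replies without them (like
    // wu-ftpd's) are simply left for the paren-less parsing path.
    static bool ExtractParens(std::string const &Msg, std::string &Out)
    {
       std::string::size_type Pos = Msg.find('(');
       if (Pos == std::string::npos)
          return true;                 // no opening paren: nothing to slice
       std::string::size_type End = Msg.find(')', Pos);
       if (End == std::string::npos)
          return true;                 // second find failed: don't add npos
       Out = Msg.substr(Pos + 1, End - Pos - 1);
       return true;
    }

    int main()
    {
       std::string Out;
       return ExtractParens("227 Entering Passive Mode 192,0,2,1,20,21", Out) ? 0 : 1;
    }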
Thanks: Lukasz Stelmach for the initial patch
Closes: #420940
|
|
Most servers which close the connection do not send a Content-Length
as this is usually redundant information, but some might, and while
testing with our server and with 'aptwebserver::response-header::Connection'
set to 'close' I noticed that http hangs after a redirect in such
cases, so if we have the information, just use it instead of
discarding it.
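Roughly the kind of response in question – the server will close the
connection but still announces the size (values made up):

    HTTP/1.1 200 OK
    Content-Length: 1024
    Connection: close

    <1024 bytes of body>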
|
|
In 8b79c94af7f7cf2e5e5342294bc6e5a908cacabf changing to the C++ way of
setting the locale causes us to be terminated in case an ungenerated
locale is set as LC_ALL (or similar) – but we don't want to fail here;
we just want to carry on as before with setlocale, which we call in
that case just for good measure.
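A sketch of that fallback, independent of the surrounding apt code:

    #include <clocale>
    #include <locale>

    int main()
    {
       try
       {
          // Throws std::runtime_error if the configured locale
          // isn't generated on this system.
          std::locale::global(std::locale(""));
       }
       catch (...)
       {
          // Don't terminate; carry on with the C-level call as before.
          std::setlocale(LC_ALL, "");
       }
    }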
|
|
We use a wild mixture of C and C++ ways of generating output, so having
a consistent world-view in both styles sounds like a good idea and
should help in preventing regressions.
|
|
Setting the C++ locale via std::locale::global(std::locale("")); –
which would otherwise default to the default C locale (aka: unaffected
by setlocale) – affects the formatting of numeric types in IO streams,
which for output meant for humans is perfectly sensible, but it breaks
our many text interfaces used and parsed by us and others who do not
expect the numbers to be locale-formatted.
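A small demonstration of the effect (the exact output depends on the
environment's locale; one with thousands grouping is assumed in the
comment):

    #include <iostream>
    #include <locale>
    #include <sstream>

    int main()
    {
       std::locale::global(std::locale(""));  // what the offending commit did
       std::ostringstream ss;                 // new streams pick up the global locale
       ss << 1234567;
       std::cout << ss.str() << '\n';         // e.g. under en_US.UTF-8: 1,234,567
    }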
Closes: #825396
|
|
gpg doesn't give us a UID on NODATA, which we were "expecting" (but
not using for anything), but just an error number. Instead of
collecting these as badsigners, which would trigger an "invalid
signature" error with remarks like "NODATA 1", we instead adopt a
message similar to the NODATA error of a clearsigned file (which is
actually not reached anymore as we split them up, which fails with a
NOSPLIT error, which uses the same general error message).
In other words: Not a security relevant change, just a user experience
improvement, as we now point them to the most likely cause of the
problem instead of saying "invalid signature", which would point them
in the direction of the archive being broken (for everyone) instead.
Closes: 823746
|
|
A keyring file can include multiple keys, so it's only fair for
transitions and such to support multiple fingerprints as well.
|
|
We parse the messages we receive into two big categories: Most of the
messages have a keyid as well as a userid, and as they are errors we
want to show the userids as well. The other category consists of
errors, too, but those have no userid (like NO_PUBKEY). Explicitly
expressing this in code should make it a bit easier to look at and
also helps in dropping additional fields or just the newline at the
end consistently.
Git-Dch: Ignore
|
|
Daniel Kahn Gillmor highlights in the bug report that security isn't
improved by having the user import additional keys – especially as
importing keys securely is hard.
The bug report was initially about dropping the warning to a notice,
but given the previously mentioned observation and the fact that we
weren't printing a warning (or a notice) for expired or revoked keys
providing a signature, we drop it completely, as the code to display a
message if this was the only key is in another path – and is
considered critical.
Closes: 618445
|
|
Signatures on data can have an expiration date, too, which we
previously hadn't handled explicitly (no problem – gpg still has a
non-zero exit code, so apt notices the invalid signature), so the
error message wasn't as helpful as it could be (aka: mentioning the
key signing it).
|
|
The upstream documentation says about KEYEXPIRED:
"This status line is not very useful". Indeed, it doesn't mention which
key is expired, and suggests to use the other message which does.
|
|
When using the https transport mechanism, $no_proxy is ignored if apt
is getting its proxy information from $https_proxy (as opposed to
Acquire::https::Proxy somewhere in the apt config). If the source of
proxy information is Acquire::https::Proxy set in apt.conf (or
apt.conf.d), then $no_proxy is honored.
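The two configuration paths side by side (proxy and host names are
made up):

    via environment ($no_proxy used to be ignored on this path):
        https_proxy=http://proxy.example:3128 no_proxy=mirror.internal.example

    via apt configuration ($no_proxy was honored here all along):
        Acquire::https::Proxy "http://proxy.example:3128";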
|
|
We have this situation in cases where parts of the transaction are
refused (e.g. in a hashsum mismatch) and we rerun the update (e.g. in
the hope that we get a mirror which is synced this time).
Previously we would ask the server with an If-Range and in the best
case receive a 416 in response (a less featureful server might end up
giving us the entire file again, or we get the wrong file this time
giving us a hashsum mismatch…), which is a waste of time if we already
know by checking the hashsums that we got the complete and correct
file.
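The round trip we can now skip when the hashes already prove the file
complete (trace abbreviated, values made up):

    GET /dists/sid/main/binary-amd64/Packages.xz HTTP/1.1
    Range: bytes=1048576-
    If-Range: "some-validator"

    HTTP/1.1 416 Requested Range Not Satisfiable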
|
|
With the previous fix for file applied we can again hit repositories
which contain uncompressed empty files, which since the introduction
of the central store: method weren't accounted for anymore as we
forbid empty compressed files.
|
|
A silly off-by-one error broke the stripping of the extension used to
check for the uncompressed filename, introduced in an attempt to
support all compressions in commit
a09f6eb8fc67cd2d836019f448f18580396185e5.
Fixing this also highlights mistakes in the handling of the
Alt-Filename in libapt which would cause apt to remove the file from
the repository (if root has the needed rights – aka the disk isn't
read-only or similar).
|
|
Introduces APT::Hashes::<NAME> with entries Untrusted and Weak
which can be set to true to cause the hash to be treated as
untrusted and/or weak.
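In apt.conf syntax that looks like the following (the hash name is
just an example):

    APT::Hashes::SHA1::Untrusted "true";
    APT::Hashes::SHA1::Weak "true";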
|
|
Our own gpgv method can declare a digest algorithm as untrusted and
handles these as worthless signatures. If gpgv itself declares a
digest as untrusted inbuilt (which is called weak in official
terminology), which it e.g. does for MD5 in recent versions, we should
handle it in the same way.
To check this we use the most uncommon still fully trusted hash as a
configurable one via a hidden config option to toggle through all
three states a hash can be in.
|
|
Using erase(pos) is invalid in our case here as pos must be a valid
and dereferenceable iterator, which isn't the case for an end-iterator
(like if we had no good signature).
The problem runs deeper still though, as VALIDSIG is a keyid while
GOODSIG is just a longid, so comparing them will always fail.
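The iterator rule in isolation, as a compilable example:

    #include <algorithm>
    #include <string>
    #include <vector>

    int main()
    {
       std::vector<std::string> Good = {"AAAA", "BBBB"};
       auto Pos = std::find(Good.begin(), Good.end(), "CCCC");
       // erase(pos) requires a valid, dereferenceable iterator; if
       // nothing was found (no good signature) Pos is end() and
       // erasing it is undefined behaviour:
       if (Pos != Good.end())
          Good.erase(Pos);
    }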
Closes: 818910
|
|
There was a complaint that, in the previous message,
the key fingerprint could be mistaken for a SHA1 digest
due to the (SHA1) after it.
Gbp-Dch: ignore
|
|
This should be easy to extend in the future and allow us to simplify
the error handling cases somewhat.
Thanks: Ron Lee for wording suggestions
|
|
We will drop support for those in the future.
Also adjust the std::array to be a std::vector, as that's easier to
maintain.
|
|
This can be used by workers to send warnings to the main
program. The messages will be passed to _error->Warning()
by APT with the URI prepended.
We are not going to make that really public now, as the
interface might change a bit.
|
|
We added weak signatures to BadSigners, meaning that a Release file
signed by both a weak signature and a strong signature would be
rejected, preventing people from migrating from DSA to RSA keys in a
sane way.
Instead of using BadSigners, treat weak signatures like expired keys:
They are not good signatures, and they are worthless.
Gbp-Dch: ignore
|
|
This keeps a list of weak digest algorithms. For now, only MD5 is
disabled, as SHA1 breaks too many repos.
|
|
This reverts commit 76a71a1237d22c1990efbc19ce0e02aacf572576.
That commit broke the test suite.
Gbp-Dch: ignore
|
|
ERRSIG is created whenever a key uses an unknown/weak digest
algorithm, for example. This allows us to report a more useful
error than just "unknown apt-key error.":
The following signatures were invalid: ERRSIG 13B00F1FD2C19886 1 2 01 1457609403 5
While still not being the best reportable error message, it's
better than unknown apt-key error and hopefully redirects users
to complain to their repository owners.
|
|
We basically ignored errors from writing and flushing, let's
not do that.
|
|
Reported-By: cppcheck
Git-Dch: Ignore
|
|
Just enabling it for everyone sometimes breaks with HTTP/1.0 servers
and proxies.
Closes: #810796
|
|
There is no reason to enforce that the file we start the bootstrap
with is compressed with a compressor which is available online. This
allows us to change the on-disk format, as well as dealing with
repositories adding/removing support for a specific compressor.
|
|
Adding support for a new compressor meant adding a new method as well
– even if that boiled down to just linking to our generalized
decompressor under a new name. That is unneeded busywork if we can
instead just call the generalized decompressor and let it figure out
which compressor to use based on the filenames rather than the program
name.
For compatibility we still ship 'gzip', 'bzip2' and co, but they are
just links to our "new" 'store' method.
|
|
Remove the SingleInstance flag so we can use the new randomized queue
feature to run in parallel.
|