If a server closes a connection after sending us a file, that tends to
mean that it's the type of server which always closes the connection – it
is therefore relatively pointless to try pipelining with it even if that
isn't a problem by itself: apt is just restarting the pipeline each
time after it got served one file and the connection is closed.
The problem starts if one or more proxies are between the server and apt
and they disagree about how the connection should be handled, as in the
bug reporter's case where the responses apt gets contain both Keep-Alive
and Proxy-Connection headers (both of which apt ignores), indicating a
proxy is trying to keep a connection open, while the response also
contains "Connection: close" indicating the opposite, which apt
understands and respects as it is required to do.
We avoid stepping into this abyss by not performing pipelining anymore
if we got a response with an indication to close the connection while
the response was otherwise a success – error messages are sent by some
servers via this method as their pages tend to be created dynamically
and hence their size isn't known a priori to them.
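A minimal sketch of the heuristic, with made-up types and names rather
than apt's actual http method code:

    // illustrative sketch – ServerState and its members are assumptions
    struct ServerState
    {
       bool Close = false;     // response carried "Connection: close"
       bool Pipeline = true;   // are we still willing to pipeline?
    };

    static void CheckPipelineAfterResponse(ServerState &Srv, int const Status)
    {
       // a successful response which still asks us to close the connection
       // tells us the server never keeps connections alive – pipelining
       // would just be restarted after every single file
       if (Srv.Close == true && Status >= 200 && Status < 300)
          Srv.Pipeline = false;
    }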
Closes: #832113
(cherry picked from commit 9714d522056e5256f5a2de587d88eba7cb3291c2)
|
|
It seems completely pointless from a server POV to send empty header
fields, so most of them don't do it (simply proven by this limitation
existing since day one) – but it is technically allowed by the RFC as
the surrounding whitespace is optional, and GitHub seems to like sending
"X-Geo-Block-List:\r\n" since recently (bug reports in other http
clients indicate July), at least sometimes, as the reporter claims to
have seen it on https only even though it can happen with both.
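A minimal sketch of the tolerant parsing, assuming a simple tag/value
splitter (ParseHeaderLine is a made-up name):

    #include <string>

    // accepts "Tag: value" as well as an empty value like "X-Geo-Block-List:\r\n"
    static bool ParseHeaderLine(std::string const &Line,
                                std::string &Tag, std::string &Value)
    {
       std::string::size_type const Colon = Line.find(':');
       if (Colon == std::string::npos)
          return false;
       Tag = Line.substr(0, Colon);
       // the whitespace around the value is optional per the RFC
       std::string::size_type const Start = Line.find_first_not_of(" \t", Colon + 1);
       std::string::size_type const End = Line.find_last_not_of(" \t\r\n");
       if (Start == std::string::npos || End < Start)
          Value.clear();     // empty field – allowed, even if unusual
       else
          Value = Line.substr(Start, End - Start + 1);
       return true;
    }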
Closes: 834048
(cherry picked from commit 148c049150cc39f2e40894c1684dc2aefea1117e)
|
|
If another file in the transaction fails and hence dooms the transaction
we can end up in a situation in which a -patched file (= rred writes the
result of the patching to it) remains in the partial/ directory.
The next apt call will perform the rred patching again and write its
result again to the -patched file, but instead of starting with an empty
file as intended it will overwrite the content previously in the file.
That has the same result if the new content happens to be longer than
the old content, but if it isn't, parts of the old content remain in the
file. The file will still pass verification as the new content written
to it matches the hashes, and if the entire transaction passes, the file
will be moved to the lists/ directory where it might or might not
trigger errors depending on whether the old content which remained forms
a valid file together with the new content.
This has no real security implications as no untrusted data is involved:
the old content consists of a base file which passed verification and a
bunch of patches which all passed multiple verifications as well, so the
old content isn't controllable by an attacker and the new one isn't
either (as the new content alone passes verification). So the best an
attacker can do is let the user run into the same issue as in the
report.
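A minimal sketch of the idea behind the fix, assuming POSIX open(2)
(OpenPatchedOutput is a made-up helper): the -patched file has to be
truncated before the patch result is written, so stale bytes from a
doomed earlier transaction can't survive.

    #include <fcntl.h>

    static int OpenPatchedOutput(char const *Path)
    {
       // O_TRUNC discards whatever a previous (failed) run left behind
       return open(Path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    }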
Closes: #831762
(cherry picked from commit 0e071dfe205ad21d8b929b4bb8164b008dc7c474)
|
|
Also fixes the message itself to mention the correct option name, as
noticed in #832113.
(cherry picked from commit b9c20219dc17db1d29eaf297263a4b008bd1b90b)
|
|
We read the entire input file we want to patch anyhow, so we can also
calculate the hash for that file and compare it with what we had
expected it to be.
Note that this isn't really a security improvement as a) the file we
patch is trusted & b) if the input is incorrect, the result will hardly be
matching, so this is just for failing slightly earlier with a more
relevant error message (although, in terms of rred, the error is ignored
and a complete download is attempted instead).
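A sketch under assumed names (ReadAndCheck and the hash callback are
illustrative, not rred's real code): hash the bytes we read anyhow and
compare against the expectation before patching.

    #include <fstream>
    #include <sstream>
    #include <string>

    static bool ReadAndCheck(std::string const &Path, std::string const &Expected,
                             std::string (*HashBytes)(std::string const &),
                             std::string &Content)
    {
       std::ifstream In(Path, std::ios::binary);
       std::ostringstream Buf;
       Buf << In.rdbuf();                       // we read the whole file anyhow
       Content = Buf.str();
       return HashBytes(Content) == Expected;   // fail early with a clearer error
    }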
(cherry picked from commit 6e71ec6fcdcaa926c98fa58cd4af38e42556df15)
|
|
Instead of only trying the first host we get via SRV, we try them all as
we are supposed to, and if that isn't working we try to connect to the
host itself as if we hadn't seen any SRV records. This was already the
intent of the old code, but it failed to hide earlier problems from the
next call, which would then unconditionally fail, resulting in an all
around failure to connect. With proper stacking we can also keep the
error messages of each call around (and in the order tried), so if the
entire connection fails we can report all the things we have tried,
while we discard the entire stack if something works out in the end.
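An illustrative sketch of the stacking (not connect.cc's actual
interface): try every SRV target, then the host itself, keeping each
attempt's error around in order.

    #include <string>
    #include <vector>

    static bool ConnectWithSrv(std::vector<std::string> Targets,
                               std::string const &Host,
                               bool (*ConnectTo)(std::string const &),
                               std::vector<std::string> &Errors)
    {
       Targets.push_back(Host);          // plain host as the last resort
       for (auto const &T : Targets)
       {
          if (ConnectTo(T) == true)
          {
             Errors.clear();             // success: discard the stack
             return true;
          }
          Errors.push_back("Could not connect to " + T);
       }
       return false;                     // caller reports everything we tried
    }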
(cherry picked from commit 3af3ac2f5ec007badeded46a94be2bd06b9917a2)
|
|
If we don't give a specific error to report up, it is likely that all
errors currently in the error stack are equally important, so reporting
just one could turn out to be confusing, e.g. if name resolution failed
in a SRV record list.
(cherry picked from commit b50dfa6b2dd2d459e0c2746ac9367982b96ffac0)
|
|
If we have files in partial/ from a previous invocation or similar,
those could be symlinks created by file:// sources. The code is
expecting only real files though, and happily changes owner,
modification times and permissions on the file the symlink points to,
which tend to be files we have no business touching in this way.
Permissions of symlinks shouldn't be changed; changing the owner is
usually pointless too. But just to be sure we pick the easy way out:
use lchown and check for symlinks before chmod/utimes.
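A minimal POSIX sketch of the easy way out described above (FixupFile is
a made-up name): lchown always, and skip chmod/utimes for symlinks.

    #include <sys/stat.h>
    #include <sys/time.h>
    #include <unistd.h>

    static bool FixupFile(char const *Path, uid_t Uid, gid_t Gid, mode_t Mode)
    {
       if (lchown(Path, Uid, Gid) != 0)   // never follows a symlink
          return false;
       struct stat Buf;
       if (lstat(Path, &Buf) != 0)
          return false;
       if (S_ISLNK(Buf.st_mode))
          return true;    // permissions/times of symlinks shouldn't change
       return chmod(Path, Mode) == 0 && utimes(Path, nullptr) == 0;
    }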
Reported-By: Mattia Rizzolo on IRC
(cherry picked from commit 3465138575e1fd0d5892d9b6be1ae232eb873460)
|
|
methods/http.cc:640:13: runtime error: reference binding to null pointer
of type 'struct FileFd'
This reference is never used in the cases where it is a nullptr, so the
practical difference is non-existent, but it's a bug still.
Reported-By: gcc -fsanitize=undefined
(cherry picked from commit 4460551841d909d3ee9c1de00156ed3cdf8b1665)
|
|
Seen in #828011: if we fail to parse a header field like Last-Modified we
end up interpreting the data as response headers for coming requests in
case we don't rotate to a new server in DNS rotation.
(cherry picked from commit cc0a4c82b3c132abba9b9ec35fd61bc8b45a1b80)
|
|
wu-ftpd sends the response without parens, whereas we expect them.
I did not test the patch, but it should work. I added another
"return true" if Pos is still npos after the second find to make
sure we don't add npos to the string.
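A hedged illustration of the guard (names and offsets are made up, this
isn't the actual ftp method code): every find() result must be checked
before it is used as an offset into the string.

    #include <string>

    static bool ExtractPasvReply(std::string const &Msg, std::string &Out)
    {
       std::string::size_type Pos = Msg.find('(');
       if (Pos == std::string::npos)
          Pos = Msg.find(' ', 4);         // wu-ftpd: reply without parens
       if (Pos == std::string::npos)      // the extra guard: never use npos + 1
          return true;
       Out = Msg.substr(Pos + 1);
       return true;
    }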
Thanks: Lukasz Stelmach for the initial patch
Closes: #420940
(cherry picked from commit 25a694165ae46c159e0d91bf0b27717f00dbc66e)
|
|
Setting the C++ locale via std::locale::global(std::locale("")); – which
would otherwise default to the default C locale (aka: unaffected by
setlocale) – affects the formatting of numeric types in IO streams,
which for output meant for humans is perfectly sensible, but it breaks
our many text interfaces used and parsed by us and others who do not
expect the numbers to be formatted.
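A small self-contained demonstration of the effect (the exact output
depends on which locales the system provides):

    #include <iostream>
    #include <locale>
    #include <sstream>

    int main()
    {
       std::locale::global(std::locale(""));     // e.g. de_DE.UTF-8
       std::ostringstream Human;
       Human << 1234567;                         // may yield "1.234.567"
       std::ostringstream Machine;
       Machine.imbue(std::locale::classic());    // always "1234567"
       Machine << 1234567;
       std::cout << Human.str() << ' ' << Machine.str() << '\n';
       return 0;
    }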
Closes: #825396
(cherry picked from commit b58e2c7c56b1416a343e81f9f80cb1f02c128e25)
|
|
Daniel Kahn Gillmor highlights in the bug report that security isn't
improved by having the user import additional keys – especially as
importing keys securely is hard.
The bug report was initially about dropping the warning to a notice, but
given the previously mentioned observation and the fact that we
weren't printing a warning (or a notice) for expired or revoked keys
providing a signature, we drop it completely, as the code to display a
message if this was the only key is in another path – and is considered
critical.
Closes: 618445
(Backported from commit fb7b11ebb852fa255053ecab605bc9cfe9de0603)
|
|
When using the https transport mechanism, $no_proxy is ignored if apt is
getting its proxy information from $https_proxy (as opposed to
Acquire::https::Proxy somewhere in the apt config). If the source of
proxy information is Acquire::https::Proxy set in apt.conf (or
apt.conf.d), then $no_proxy is honored.
(cherry picked from commit 8707edd9e4684ed68856cd8eeff15ebd1e8c88ea)
|
|
With the previous fix for file applied we can again hit repositories
which contain uncompressed empty files, which since the introduction of
the central store: method weren't accounted for anymore as we forbid
empty compressed files.
|
|
A silly off-by-one error broke the stripping of the extension used to
check for the uncompressed filename, introduced in an attempt to support
all compressions in commit a09f6eb8fc67cd2d836019f448f18580396185e5.
Fixing this also highlights mistakes in the handling of the Alt-Filename
in libapt which would cause apt to remove the file from the repository
(if root has the needed rights – aka the disk isn't read-only or similar).
|
|
Introduces APT::Hashes::<NAME> with entries Untrusted and Weak
which can be set to true to cause the hash to be treated as
untrusted and/or weak.
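A hedged apt.conf illustration – the hash name here is only an example:

    APT::Hashes::SHA1::Weak "true";        // treat SHA1 as weak
    APT::Hashes::SHA1::Untrusted "true";   // treat SHA1 as untrusted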
|
|
Our own gpgv method can declare a digest algorithm as untrusted and
handles these as worthless signatures. If gpgv itself has a digest
marked as untrusted built in (which is called weak in official
terminology), as it e.g. does for MD5 in recent versions, we should
handle it in the same way.
To check this we make the most uncommon but still fully trusted hash
configurable via a hidden config option, so we can toggle through all
three states a hash can be in.
|
|
Using erase(pos) is invalid in our case here as pos must be a valid and
dereferenceable iterator, which isn't the case for an end-iterator (like
if we had no good signature).
The problem runs deeper still though, as VALIDSIG is a keyid while
GOODSIG is just a longid, so comparing them will always fail.
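A minimal sketch of the iterator pitfall (the container is illustrative):
erase(pos) requires a dereferenceable iterator, so an end() result has
to be guarded.

    #include <algorithm>
    #include <string>
    #include <vector>

    static void DropSignature(std::vector<std::string> &Sigs, std::string const &Key)
    {
       auto const Pos = std::find(Sigs.begin(), Sigs.end(), Key);
       if (Pos != Sigs.end())     // erase(end()) would be undefined behaviour
          Sigs.erase(Pos);
    }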
Closes: 818910
|
|
There was a complaint that, in the previous message,
the key fingerprint could be mistaken for a SHA1 digest
due to the (SHA1) after it.
Gbp-Dch: ignore
|
|
This should be easy to extend in the future and allow us to simplify
the error handling cases somewhat.
Thanks: Ron Lee for wording suggestions
|
|
We will drop support for those in the future.
Also adjust the std::array to be a std::vector, as that's easier to
maintain.
|
|
This can be used by workers to send warnings to the main
program. The messages will be passed to _error->Warning()
by APT with the URI prepended.
We are not going to make that really public now, as the
interface might change a bit.
|
|
We added weak signatures to BadSigners, meaning that a Release file
signed by both a weak signature and a strong signature would be
rejected, preventing people from migrating from DSA to RSA keys
in a sane way.
Instead of using BadSigners, treat weak signatures like expired
keys: they are not good signatures, and they are worthless.
Gbp-Dch: ignore
|
|
This keeps a list of weak digest algorithms. For now, only MD5
is disabled, as SHA1 breaks too many repos.
|
|
This reverts commit 76a71a1237d22c1990efbc19ce0e02aacf572576.
That commit broke the test suite.
Gbp-Dch: ignore
|
|
ERRSIG is created whenever a key uses an unknown/weak digest
algorithm, for example. This allows us to report a more useful
error than just "unknown apt-key error.":
The following signatures were invalid: ERRSIG 13B00F1FD2C19886 1 2 01 1457609403 5
While still not being the best reportable error message, it's
better than an unknown apt-key error and hopefully redirects users
to complain to their repository owners.
|
|
We basically ignored errors from writing and flushing; let's
not do that.
|
|
Reported-By: cppcheck
Git-Dch: Ignore
|
|
Just enabling it for anyone breaks with HTTP/1.0 servers and
proxies sometimes.
Closes: #810796
|
|
There is no reason to enforce that the file we start the bootstrap with
is compressed with a compressor which is available online. This allows
us to change the on-disk format as well as deals with repositories
adding/removing support for a specific compressor.
|
|
Adding a new compressor method meant adding a new method as well – even
if that boiled down to just linking to our generalized decompressor
under a new name. That is unneeded busywork if we can instead just call
the generalized decompressor and let it figure out which compressor to
use based on the filenames rather than by program name.
For compatibility we still ship 'gzip', 'bzip2' and co, but they are
just links to our "new" 'store' method.
|
|
Remove the SingleInstance flag so we can use the new randomized
queue feature to run in parallel.
|
|
The PopFromSrvRecs() already removed the entry from the active
list, so the extra SrvRecords.erase() was incorrect.
Git-Dch: ignore
|
|
Buffered writes improve performance a lot, given that we spent
about 78% of the time in _write.
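An illustrative sketch (not apt's FileFd) of why buffering helps:
collect small writes and hand them to write(2) in one go.

    #include <string>
    #include <unistd.h>

    class BufferedWriter
    {
       int Fd;
       std::string Buf;
    public:
       explicit BufferedWriter(int const fd) : Fd(fd) {}
       bool Write(std::string const &Data)
       {
          Buf.append(Data);
          return Buf.size() < 4096 || Flush();
       }
       bool Flush()
       {
          // simplification: a real implementation has to retry partial writes
          ssize_t const Done = write(Fd, Buf.data(), Buf.size());
          if (Done < 0 || static_cast<size_t>(Done) != Buf.size())
             return false;
          Buf.clear();
          return true;
       }
       ~BufferedWriter() { Flush(); }     // and must not ignore this result
    };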
|
|
This accidentally slipped in in a previous commit, but it should
be used only for testing mode.
Reported-By: David Kalnischkies <david@kalnischkies.de>
|
|
This converts all callers that read machine-generated data,
callers that might work with user input are not converted.
|
|
This introduces a -t mode in which the first argument is input,
the second is output and the remaining are diffs.
This allows us to test patching compressed files, which are
detected using their file extension.
|
|
ssh expects various configuration bits to be usable, like known hosts,
possibly keys and co. Setting this up needs some user work for probably
not a whole lot of benefit, so instead of forcing it upon users on
upgrade we disable privilege dropping for it by default.
Closes: 806511
|
|
Regression introduced in 23e64f6d0facf9610c1042326ad9850e071e8349
|
|
In ce1f3a2c we started warning about failed unlinking, which we
consistently do for directories. That isn't a problem as directories
usually aren't in the places we do want to clean up – with the potential
exception of "lost+found", so let's ignore it like we ignore our own
partial/ subdirectory.
Closes: 805424
|
|
AI_IDN is a glibc extension, but we can worry about this once anyone
actually tries apt seriously on non-glibc systems.
Closes: 763437
|
|
Reported-By: cppcheck
Git-Dch: Ignore
|
|
Allows users who know what they are getting themselves into with this
trick to e.g. disable privilege dropping for file:// until they can
fix up the permissions on those repositories. It also helps the test
framework and people with a similar setup (= me) to run in less modified
environments.
|
|
Continuing on the track of dropping privileges in all methods, let's
drop it in copy, too, as the reasoning for it is very similar to file,
and the interaction between the two is quite interesting as copy kinda
served as a fallback for file not being able to read the file. Both now
show a better error message as well: previously copy was claiming to
have a hashsum mismatch when in fact it couldn't read the file.
Git-Dch: Ignore
|
|
This flag is generally handy to avoid having to deal with ipv6 results on an
ipv4-only system, but it prevents e.g. the testcases from working if the
test system has no configured address at the moment (except loopback), so
allow it to be sidestepped and let the testcases sidestep it.
Git-Dch: Ignore
|
|
Unlinking /dev/null is bad; we shouldn't do that. Also, we should print
at least a warning if we tried to unlink a file but didn't manage to
pull it off (ignoring the case where the file is /dev/null or doesn't
exist in the first place).
This got triggered by a problem, relatively unlikely to cause harm, in
pkgAcquire::Worker::PrepareFiles, which would – while handling temporary
uncompressed files (which are set to be kept compressed) – figure out
that two files are the same and prepare for sharing by deleting them.
Bad move. That also shows why not printing a warning is a bad idea, as
this hid the error in non-root test runs.
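A hedged sketch of a removal helper with the safeguards described
(RemoveFileSafely is a made-up name):

    #include <cerrno>
    #include <cstdio>
    #include <cstring>
    #include <string>
    #include <unistd.h>

    static bool RemoveFileSafely(std::string const &Path)
    {
       if (Path == "/dev/null")
          return true;                    // unlinking it would be bad
       if (unlink(Path.c_str()) != 0 && errno != ENOENT)
       {
          std::fprintf(stderr, "W: could not remove %s: %s\n",
                       Path.c_str(), std::strerror(errno));
          return false;
       }
       return true;
    }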
Git-Dch: Ignore
|
|
Detecting network errors has some benefits in the acquire system, as if
we can't connect to a host, trying it for a million files is pointless.
http and co, which use connect.cc, deal with this, but https, which uses
curl, had connection failures as "normal" errors which could potentially
be worked around (like trying Release instead of the failed InRelease).
Git-Dch: Ignore
|
|
Commit 653ef26c70dc9c0e2cbfdd4e79117876bb63e87d broke the camel's back
insofar as everything works in terms of our internal use of copy:/, but
external use is completely destroyed. This is kinda the reverse of what
happened in "parallel" in the sid branch, where external use was mostly
fine, but internal and external exploded on the GzipIndexes option.
We fix this now by rewriting our internal use, letting copy:/ only do
what the name suggests it does: copy files and not uncompress them
on-the-fly. Then we teach copy and the uncompressors how to deal with
/dev/null and use it as the destination file in case we don't want to
store the uncompressed files on disk.
Closes: 799158
|
|
We drop it in decompressors, which are the natural next step, so if an
archive is used which isn't world-readable (= not accessible by _apt) it
doesn't work anyway; we just fail a bit earlier now and avoid all the
bad things which can happen over file (which could very well still be a
network resource via NFS mounts or similar stuff, so hardly as safe as
the name might suggest at first).
|