Age | Commit message | Author |
|
apt tools do not really support these other variables, but tools apt
calls might, so let's play it safe and clean those up as needed.
Reported-By: Paul Wise (pabs) on IRC
|
|
We can't clean up the environment like e.g. sudo would do, as you usually
want the environment to "leak" into these helpers, but some variables
like HOME really should not still hold the value of the root user – it
could confuse the helpers (USER) and HOME isn't accessible anyhow.
Closes: 842877
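The gist of that cleanup, as a minimal sketch: the function name and the
passwd lookup are illustrative, not apt's actual code, and assume the
helper runs as an unprivileged user like "_apt".

    #include <pwd.h>
    #include <stdlib.h>
    #include <sys/types.h>

    // Illustrative only: reset the few user-related variables for a helper
    // instead of wiping the whole environment like sudo would.
    static void SanitizeHelperEnvironment(struct passwd const *pw)
    {
       // pw describes the user the helper runs as (e.g. "_apt")
       setenv("HOME", pw->pw_dir, 1);
       setenv("USER", pw->pw_name, 1);
       setenv("LOGNAME", pw->pw_name, 1);
       // everything else is left alone so the environment still "leaks"
       // into the helper as intended
    }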
|
|
This fixes a regression introduced in
commit 8f858d560e3b7b475c623c4e242d1edce246025a
don't leak FD in AutoProxyDetect command return parsing
which accidentally made the proxy autodetection code read the script's
output on stderr as well as stdout when it switched the code from
popen() to Popen().
Reported-By: Tim Small <tim@seoss.co.uk>
|
|
When checking if a file is empty, we forgot to check that
fstat() actually worked.
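A sketch of the corrected check, reduced to a plain file descriptor; the
surrounding FileFd plumbing is left out.

    #include <sys/stat.h>

    // Report "empty" only if fstat() itself succeeded.
    static bool IsEmptyFile(int const fd)
    {
       struct stat Buf;
       if (fstat(fd, &Buf) != 0)
          return false;   // fstat failed, so we simply don't know
       return Buf.st_size == 0;
    }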
|
|
SOCKS support is a requested feature insofar as the internet actually
believes Acquire::socks::Proxy exists. It doesn't, and this commit isn't
adding it as that isn't how our configuration works, but it allows
Acquire::http::Proxy="socks5h://…". The HTTPS method was already changed
to support socks proxies (all versions) via curl. This commit implements
only SOCKS5 (RFC 1928) with no authentication or user&password
authentication (RFC 1929), but not GSSAPI which the RFC requires. The 'h'
in the protocol name further indicates that DNS resolution is delegated
to the socks proxy rather than performed locally.
The implementation works and was tested with Tor as the socks proxy, for
which implementing only socks5h can actually be considered a feature.
Closes: 744934
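For reference, a minimal sketch of the RFC 1928 handshake as a client
performs it with the no-authentication method; names are made up, error
handling and the RFC 1929 user&password path are left out, and the reply
is read naively.

    #include <string>
    #include <vector>
    #include <cstdint>
    #include <unistd.h>

    // Illustrative SOCKS5 CONNECT over an already connected proxy socket.
    static bool Socks5Connect(int const proxyfd, std::string const &host, uint16_t const port)
    {
       // greeting: version 5, one offered method: 0x00 (no authentication)
       uint8_t const greeting[3] = { 0x05, 0x01, 0x00 };
       if (write(proxyfd, greeting, 3) != 3)
          return false;
       uint8_t choice[2];
       if (read(proxyfd, choice, 2) != 2 || choice[1] != 0x00)
          return false;

       // CONNECT request with ATYP 0x03 (domain name): the proxy resolves
       // the hostname for us – that is what the 'h' in socks5h stands for.
       // (Hostnames longer than 255 bytes are not handled here.)
       std::vector<uint8_t> req = { 0x05, 0x01, 0x00, 0x03,
                                    static_cast<uint8_t>(host.size()) };
       req.insert(req.end(), host.begin(), host.end());
       req.push_back(static_cast<uint8_t>(port >> 8));   // port, network byte order
       req.push_back(static_cast<uint8_t>(port & 0xFF));
       if (write(proxyfd, req.data(), req.size()) != static_cast<ssize_t>(req.size()))
          return false;

       // a real client parses the full, variable-length reply; we only care
       // about the REP field: 0x00 means the proxy established the connection
       uint8_t reply[10];
       return read(proxyfd, reply, sizeof(reply)) >= 2 && reply[1] == 0x00;
    }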
|
|
There is no point in trying to perform Write/Read on a FileFd which has
already failed, as they aren't going to work as expected, so we should
make sure that they fail early and hard.
|
|
Reported-By: cppcheck
Gbp-Dch: Ignore
|
|
The flush call is a no-op in most FileFd implementations, so this isn't
as critical as it might sound, as the only non-trivial implementation is
in the buffered writer, which tends not to be used to buffer another
buffer…
|
|
Very unlikely, but if the parent is /dev/null, the child is empty and
the grandchild has a value, we returned /dev/null/value which doesn't
exist – hardly a problem, but for best operability we should be
consistent and always return /dev/null.
|
|
If we have files in partial/ from a previous invocation or the like,
those could be symlinks created by file:// sources. The code expects
only real files though and happily changes owner, modification times and
permissions on the file the symlink points to, which tends to be a file
we have no business touching in this way.
Permissions of symlinks shouldn't be changed and changing the owner is
usually pointless too, so just to be sure we pick the easy way out: use
lchown and check for symlinks before chmod/utimes.
Reported-By: Mattia Rizzolo on IRC
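Roughly, the safe pattern described here looks like the following; the
function and its parameters are illustrative, the real code lives in the
acquire worker.

    #include <string>
    #include <sys/stat.h>
    #include <sys/time.h>
    #include <sys/types.h>
    #include <unistd.h>

    // Illustrative: adjust owner/permissions/times of a file in partial/
    // without following a symlink created by a file:// source.
    static void FixupPartialFile(std::string const &path, uid_t const uid,
                                 gid_t const gid, mode_t const mode,
                                 struct timeval const times[2])
    {
       // lchown never follows symlinks, so it is always safe to call
       lchown(path.c_str(), uid, gid);

       struct stat Buf;
       if (lstat(path.c_str(), &Buf) == 0 && S_ISLNK(Buf.st_mode))
          return;   // leave permissions and times of the link target alone

       chmod(path.c_str(), mode);
       utimes(path.c_str(), times);
    }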
|
|
If libapt has builtin support for a compression type it will create a
dummy compressor struct with the Binary set to 'false', as it will catch
these before using the generic pipe implementation which uses the
Binary. The catching happens based on the configured Names though, so you
can actually force apt to use the external binaries even if it would
usually use the builtin support. That logic fails though if you don't
happen to have these external binaries installed, as it will fall back to
calling 'false', which ends in confusing 'Write error' messages.
So, this is again something you only encounter in constructed testing.
Gbp-Dch: Ignore
|
|
We deploy atomic renames for some files, but these renames also happen
if something about the file failed, which isn't really the point of the
exercise…
Closes: 828908
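What "atomic rename" boils down to, as a hypothetical stand-alone helper:
the temporary file only replaces the target if everything about writing
it succeeded.

    #include <cstddef>
    #include <cstdio>
    #include <fcntl.h>
    #include <string>
    #include <unistd.h>

    // Illustrative: write to "<to>.tmp" and only move it into place on success.
    static bool WriteAtomically(std::string const &to, char const *data, size_t const size)
    {
       std::string const tmp = to + ".tmp";
       int const fd = open(tmp.c_str(), O_WRONLY | O_CREAT | O_TRUNC, 0644);
       if (fd == -1)
          return false;
       bool okay = write(fd, data, size) == static_cast<ssize_t>(size);
       okay = (close(fd) == 0) && okay;   // close unconditionally, keep the error
       if (okay == false)
       {
          unlink(tmp.c_str());            // never rename a broken file into place
          return false;
       }
       return rename(tmp.c_str(), to.c_str()) == 0;
    }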
|
|
Seen first in #826783, but as that buglog also shows leaked uncompressed
files we don't close it just yet.
|
|
This affects only compressors configured on the fly (rather than the
built-in ones, as those use a library).
|
|
If the file is in a failed state there is no point in trying to flush
out the buffers, as the file is to be discarded anyhow and it's likely
that all this flushing produces is additional error messages.
Git-Dch: Ignore
|
|
Some methods had it missing, some used the keyword directly, which isn't
a problem as it is a .cc file, but for consistency let's stick to our
macro for now.
Git-Dch: Ignore
|
|
The liblzma-based write code needs the same tweaks that the read code
already has to cope with the situation where lzma_code returns zero the
first time through because avail_out is zero, but will do more work if
called again.
This ports the read tweaks to the write code as closely as possible
(including matching comments etc.).
Closes: #751688
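The gist of the tweak, sketched against the liblzma API; buffer handling
is simplified compared to the real FileFd code, but the loop condition is
the point: a pass that fills avail_out completely is not an error, it
just means lzma_code() has to be called again.

    #include <lzma.h>
    #include <unistd.h>
    #include <cstdint>

    // Illustrative write loop around lzma_code().
    static bool LzmaWrite(lzma_stream *strm, int const outfd,
                          uint8_t const *data, size_t const size,
                          uint8_t *outbuf, size_t const outbufsize)
    {
       strm->next_in = data;
       strm->avail_in = size;
       do
       {
          strm->next_out = outbuf;
          strm->avail_out = outbufsize;
          lzma_ret const ret = lzma_code(strm, LZMA_RUN);
          if (ret != LZMA_OK && ret != LZMA_STREAM_END)
             return false;
          size_t const produced = outbufsize - strm->avail_out;
          if (produced != 0 && write(outfd, outbuf, produced) != static_cast<ssize_t>(produced))
             return false;
          // if avail_out ended up 0 we loop again even with avail_in == 0,
          // as liblzma may still be holding pending output
       } while (strm->avail_in != 0 || strm->avail_out == 0);
       return true;
    }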
|
|
If we just reopened the file, we also need to reset the current
seek position when we reset the buffer, otherwise the code will
not try to seek to the position given to Skip (from 0), but will
try to seek to the old offset + the position given to Skip.
Closes: #812994, #813000
|
|
When writing into the buffer, write up to free() bytes starting
at getend(), instead of buffersize_max bytes at get() –
get() is a read pointer.
This makes no difference in practice though, as we reset
the buffer before the call, so start = end = 0.
Gbp-Dch: ignore
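For orientation, the buffer geometry being talked about, as a simplified
stand-alone model (the real class sits in FileFd's private implementation
and has more to it):

    #include <cstddef>
    #include <cstring>

    // Data lives in [start, end): get() is where readers consume from,
    // getend() is where writers append.
    struct simple_buffer
    {
       static constexpr size_t buffersize_max = 4096;
       char buffer[buffersize_max];
       size_t start = 0;   // read position
       size_t end = 0;     // write position

       char *get() { return buffer + start; }                 // read pointer
       char *getend() { return buffer + end; }                // write pointer
       size_t size() const { return end - start; }            // readable bytes
       size_t free() const { return buffersize_max - end; }   // writable bytes

       size_t bufferin(char const *data, size_t const length)
       {
          // append at getend(), bounded by free() – not buffersize_max at get()
          size_t const copy = length < free() ? length : free();
          memcpy(getend(), data, copy);
          end += copy;
          return copy;
       }
    };

The same free() vs. size() distinction is what the free-space fix a few
entries below is about.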
|
|
We cannot just return false without setting an error,
as InternalWrite does not set one itself.
|
|
It makes no sense to split a large block into multiple small
blocks, so when we have the chance to write them unbuffered,
do so.
|
|
We do not need the loop, FileFd::Private() handles this for us.
Gbp-Dch: ignore
|
|
We want to check whether the amount of free space is smaller
than the requested write size. Checking maxsize - size() is
incorrect for bufferstart > 0, as size() = end - start.
Gbp-Dch: ignore
|
|
|
|
This is a multiple of the page size and thus results in fewer
page faults, speeding up copying.
Also, while we're at it, unify all uses of that size into a
constant APT_BUFFER_SIZE.
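Something along these lines; the concrete value is whatever the commit
settled on, 64 KiB is shown purely as an example of a page-size multiple.

    #include <cstddef>

    // One shared size for copy/read buffers, a multiple of the (typically
    // 4 KiB) page size; the value here is illustrative.
    static constexpr size_t APT_BUFFER_SIZE = 64 * 1024;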
|
|
Implement native support for LZ4 compression, using the official
lz4 library.
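A minimal one-shot example with the lz4 frame API; apt's actual
integration is a streaming read/write backend inside FileFd, so this only
shows the kind of library calls involved.

    #include <lz4frame.h>
    #include <cstddef>
    #include <vector>

    // Compress a whole memory block into an .lz4 frame in one call.
    static std::vector<char> CompressLZ4(char const *data, size_t const size)
    {
       size_t const bound = LZ4F_compressFrameBound(size, nullptr);
       std::vector<char> out(bound);
       size_t const written = LZ4F_compressFrame(out.data(), out.size(),
                                                 data, size, nullptr);
       if (LZ4F_isError(written))
          return {};
       out.resize(written);
       return out;
    }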
|
|
This makes the code easier to read, and somewhat more correct.
Gbp-Dch: ignore
|
|
Gbp-Dch: ignore
|
|
Previously, if flush errored inside the loop, data could have
already been written to the wrapped descriptor without having
been removed from the buffer.
Also try to work around EINTR here. A better solution might be
to have the individual privates detect an interrupt and return
0 in such a case, instead of relying on errno being untouched
in between the syscall and the return from InternalWrite.
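The shape of the fixed flush, as a stand-alone sketch on a flat buffer;
the real code works on the buffered writer's internal buffer and its
wrapped FileFd.

    #include <cerrno>
    #include <cstddef>
    #include <cstring>
    #include <unistd.h>

    // Bytes that made it out are removed from the buffer immediately, so a
    // later error can't cause them to be written twice; EINTR just retries.
    static bool FlushBuffer(int const fd, char *buffer, size_t &length)
    {
       size_t done = 0;
       while (done < length)
       {
          ssize_t const written = write(fd, buffer + done, length - done);
          if (written < 0 && errno == EINTR)
             continue;
          if (written <= 0)
             break;
          done += written;
       }
       // drop what was written even if we stopped early because of an error
       memmove(buffer, buffer + done, length - done);
       length -= done;
       return length == 0;
    }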
|
|
This avoids some issues with InternalWrite returning 0 because
it just cannot write stuff at the moment.
|
|
This is somewhat experimental right now, and might not work
for everyone, so it is on an opt-in basis.
|
|
The flush function can be used for buffered writers.
|
|
We will soon implement a buffered writing decorator and we will
need to forward attribute changes to it.
|
|
These can be used to implement write buffering
Gbp-Dch: ignore
|
|
Suggested by David.
Gbp-Dch: ignore
|
|
Gbp-Dch: ignore
|
|
There is not much point and this is more readable.
Gbp-Dch: ignore
|
|
This is mostly a documentation issue, as the size we want to
read is always less than or equal to the size of the buffer,
so the return value will be the same as the size argument.
Nonetheless, people wondered about it, and it seems clearer
to just always use the return value.
|
|
This further improves our performance; rred on uncompressed
files now spends 78% of its time writing, which means that
we should really look at buffering those writes.
|
|
The code uses memmove() to move parts of the buffer to the
front when the buffer is only partially read. By simply
reading one page at a time, the maximum size of bytes that
must be moved has a hard limit, and performance improves:
In one test case, consisting of a 430 MB Contents file,
and a 75K PDiff, applying the PDiff previously took about
48 seconds and now completes in 2 seconds.
Further speed-ups can be achieved by buffering writes; they
account for about 60% of the run-time now.
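Roughly the pattern described, with made-up names; rred's actual input
handling is more involved, but the bounded read is what caps the memmove
cost.

    #include <cstddef>
    #include <cstring>
    #include <unistd.h>

    // Keep the unconsumed tail at the front of the buffer and refill at most
    // one page per call, so the memmove never shifts more than a page.
    static size_t RefillBuffer(int const fd, char *buffer, size_t const capacity,
                               size_t &start, size_t &end)
    {
       size_t const pagesize = 4096;
       memmove(buffer, buffer + start, end - start);
       end -= start;
       start = 0;
       size_t want = capacity - end;
       if (want > pagesize)
          want = pagesize;                 // the bounded read keeps the move small
       ssize_t const got = read(fd, buffer + end, want);
       if (got <= 0)
          return 0;
       end += got;
       return static_cast<size_t>(got);
    }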
|
|
Gbp-Dch: ignore
|
|
And while we are at it, let's fix the 'style' issue I introduced with
the filefd changes as well.
Reported-By: gcc -fsanitize's & cppcheck
Git-Dch: Ignore
|
|
We don't need the buffer that often – only in ReadLine – and it is only
occasionally used, so it is actually more efficient to allocate it on
demand instead of statically by default. This also allows the caller to
influence the buffer size instead of hardcoding it.
Git-Dch: Ignore
|
|
The default implementation of ReadLine was very naive, just reading
each character one-by-one. That is kinda okay for libraries implementing
compression as they have internal buffers (but still not great), but not
when working with files directly or via a pipe, as there is no buffer
there and all those reads are in fact system calls.
This commit introduces an internal buffer in the FileFd implementation
which is only used by ReadLine. The more low-level Read and all other
actions remain unbuffered – they were just changed to deal with potential
"left-overs" in the buffer correctly.
Closes: 808579
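A simplified version of the idea; the real implementation lives inside
FileFd and has to cooperate with the unbuffered Read(), but the buffering
itself looks roughly like this.

    #include <cstddef>
    #include <cstring>
    #include <string>
    #include <unistd.h>

    // Refill a small buffer with one read() per chunk instead of one syscall
    // per character; anything after the newline stays buffered as "left-overs".
    static bool BufferedReadLine(int const fd, std::string &line,
                                 char *buf, size_t const bufsize,
                                 size_t &start, size_t &end)
    {
       line.clear();
       while (true)
       {
          // look for a newline in what is already buffered
          char const *nl = static_cast<char const *>(memchr(buf + start, '\n', end - start));
          if (nl != nullptr)
          {
             line.append(buf + start, (nl - (buf + start)) + 1);
             start = (nl - buf) + 1;       // keep the rest for the next caller
             return true;
          }
          // no newline yet: consume the buffer and refill it
          line.append(buf + start, end - start);
          start = end = 0;
          ssize_t const got = read(fd, buf, bufsize);
          if (got <= 0)
             return line.empty() == false; // EOF (or error) ends the last line
          end = got;
       }
    }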
|
|
If we use the library to compress xz, still try to understand and pick
up the arguments we would have used to call the xz binary, to figure out
which level the user wants us to use instead of defaulting to level 6
(which is xz's default level).
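"Picking up the arguments" amounts to something like the following; the
parsing shown is a rough illustration, not apt's actual handling of the
configured compressor arguments.

    #include <lzma.h>
    #include <cstdint>
    #include <string>
    #include <vector>

    // Derive the xz preset from the arguments we would have passed to the
    // xz binary (e.g. "-9" or "-6e"), falling back to xz's default of 6.
    static uint32_t PresetFromArgs(std::vector<std::string> const &args)
    {
       uint32_t preset = 6;                      // xz's own default level
       for (auto const &arg : args)
          if (arg.size() >= 2 && arg[0] == '-' && arg[1] >= '0' && arg[1] <= '9')
          {
             preset = arg[1] - '0';
             if (arg.find('e') != std::string::npos)
                preset |= LZMA_PRESET_EXTREME;   // "-9e" style arguments
          }
       return preset;
    }
    // later handed to the encoder, e.g.:
    //    lzma_easy_encoder(&strm, PresetFromArgs(args), LZMA_CHECK_CRC64);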
|
|
dpkg switched from CRC32 to CRC64 in
777915108d9d36d022dc4fc4151a615fc95e5032 with the message:
| This is the default CRC used by the xz command-line tool, align with
| it and switch from CRC32 to CRC64. It should provide slightly better
| detection against damaged data, at a negligible speed difference.
|
|
This isn't implementing any new features, it is "just" moving code
around: from FileFd methods which decided on each call how to handle the
request, with all the logic for all possible compressor backends in the
method body, to a model in which backend specifics are implemented in a
FileFdPrivate subclass. This avoids a big chunk of #ifdefs and should
make it a tiny bit more obvious which backend uses which code.
The execution of the idea is slightly uglified by the need to preserve
ABI and API, which causes liberal befriending.
Git-Dch: Ignore
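In outline, the new model looks something like this; class and method
names here are approximations, not the exact ABI-preserving layout.

    #include <sys/types.h>

    // Backend specifics move out of FileFd's method bodies into virtual
    // methods of a private base class – one subclass per compressor backend
    // instead of an #ifdef forest in every method.
    class FileFdPrivate                              // owned by FileFd
    {
    public:
       virtual ssize_t InternalRead(void *To, unsigned long long Size) = 0;
       virtual ssize_t InternalWrite(void const *From, unsigned long long Size) = 0;
       virtual bool InternalClose() = 0;
       virtual ~FileFdPrivate() {}
    };
    class GzipFileFdPrivate : public FileFdPrivate { /* zlib-specific code */ };
    class LzmaFileFdPrivate : public FileFdPrivate { /* liblzma-specific code */ };
    class PipedFileFdPrivate : public FileFdPrivate { /* external binaries */ };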
|
|
There's no point trying to read 0 bytes, so let's just not
do this and switch to a while loop like in Write().
Gbp-Dch: ignore
|
|
Turn the do-while loop into while loops, so it simply does nothing
if the Size is already 0.
This reverts commit c0b271edc2f6d9e5dea5ac82fbc911f0e3adfa7a which
introduced a fix for a specific instance of the issue in the
CopyFile() function.
Closes: #808381
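Sketched as a stand-alone copy helper (covering both this change and the
CopyFile-specific fix it reverts): with a plain while loop the iteration
where Size has reached 0 never happens, so write() is never called with a
zero byte count.

    #include <cstddef>
    #include <unistd.h>

    // Illustrative: copy exactly Size bytes; a while loop means we never
    // call write() with a byte count of 0 (which e.g. the Hurd rejects).
    static bool CopyFd(int const from, int const to, unsigned long long Size,
                       char *buf, size_t const bufsize)
    {
       while (Size != 0)
       {
          unsigned long long const ToRead = Size < bufsize ? Size : bufsize;
          if (read(from, buf, ToRead) != static_cast<ssize_t>(ToRead))
             return false;
          if (write(to, buf, ToRead) != static_cast<ssize_t>(ToRead))
             return false;
          Size -= ToRead;
       }
       return true;
    }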
|
|
On EOF, ToRead will be 0, which on some systems (e.g. the Hurd) might
trigger an error due to the invalid byte count passed to write().
The whole loop already checks for ToRead != 0, so perform the writing
step only when there was actual data read.
Closes: #808381
|