path: root/apt-pkg/acquire.cc
2017-08-04: don't hang if multiple sources use unavailable method (David Kalnischkies)
APT clients always notice if a method isn't supported and nowadays generate a message of the form:
| E: The method driver …/foobar could not be found.
| N: Is the package apt-transport-foobar installed?
This only worked if a single source was using such an unavailable method though, as we registered the failed config in the first round and the second round would then try to send requests to the not-started method, which wouldn't work and hang instead (additionally hiding the error messages as they would only be shown at the end of the execution). Closes: 870675
2017-07-26: allow the auth.conf to be root:root owned (David Kalnischkies)
Opening the file before we drop privileges in the methods allows us to avoid chowning in the acquire main process, which can apply to the wrong file (imagine Binary-scoped settings) and surprises users as their permission setup is overridden. This brings no security drawbacks as the file is open anyhow, so an evil method could read the contents of the file just as before, but it isn't worse than before either and we avoid the permission problems in this setup.
2017-07-26: send weak-only hashes to methods (David Kalnischkies)
Weak hashes like filesize can be used by methods for basic checks and early refusals even if we can't use them for hard security purposes. Normal apt operations are not affected by this as they fail if no strong hash is available, but if apt is forced to work with weak-only files, or e.g. in an apt-helper context, it can have benefits, as a weak hash is better than no hash at all for the methods.
2017-07-12: Reformat and sort all includes with clang-format (Julian Andres Klode)
This makes it easier to see which headers include what. The changes were done by running

    git grep -l '#\s*include' \
      | grep -E '.(cc|h)$' \
      | xargs sed -i -E 's/(^\s*)#(\s*)include/\1#\2 include/'

to modify all include lines by adding a space, and then running ./git-clang-format.sh.
2017-06-28: allow frontends to override releaseinfo change behaviour (David Kalnischkies)
Having messages printed on the error stack and confirming them via commandline flags is an okayish first step, but some frontends will probably want a more interactive feeling here, with a proper question the user can just answer yes/no to, as for some frontends a commandline flag makes no sense…
2017-06-26: Avoid chdir in acquire clean with unlinkat (David Kalnischkies)
POSIX.1-2008 gives us a range of *at calls to deal with files, including unlinkat, so we can remove a file from a directory based on a path to the file relative to that directory (in our case here the path we have is just the filename). This way we avoid changing directories, which e.g. fails if the directory we started in no longer exists or is otherwise inaccessible. Closes: 860738
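For illustration, a minimal sketch of the unlinkat() approach described above (this is not the actual pkgAcquire::Clean code; directory and file names are made up): open the directory once and remove entries by name relative to that descriptor, without ever calling chdir().

    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    // Remove 'filename' from 'dirpath' without changing the working directory.
    static bool RemoveFileIn(char const *dirpath, char const *filename)
    {
       int const dirfd = open(dirpath, O_RDONLY | O_DIRECTORY | O_CLOEXEC);
       if (dirfd == -1)
          return false;
       // flags == 0 removes a file; AT_REMOVEDIR would be needed for a directory
       bool const ok = (unlinkat(dirfd, filename, 0) == 0);
       close(dirfd);
       return ok;
    }

    int main()
    {
       if (!RemoveFileIn("/var/cache/apt/archives/partial", "example.deb"))
          perror("unlinkat");
    }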
2017-01-28: Only merge acquire items with the same meta key (Julian Andres Klode)
Since the introduction of by-hash, two differently named files might have the same real URL. In our case, the files icons-64x64.tar.gz and icons-128x128.tar.gz of empty tarballs. APT would try to merge them and end up with weird errors because it completed the first download and entered the second stage for decompressing and verifying. After that it would queue a new item to copy the original file to its location, but that copy item would be in the wrong stage, causing it to use the hashes of the decompressed item. Closes: #838441
2017-01-19: remove 'old' FAILED files in the next acquire call (David Kalnischkies)
If apt renames a file to .FAILED it leaves its namespace and is never touched again – except since 1.1~exp4, in which "apt clean" will remove those files. The usefulness of these files rapidly degrades if you don't keep the update log itself (together with debug output in the best case) though, and on 99% of all systems they will be kept around forever just to collect dust over time and eat up space. With this commit an update call will remove all FAILED files of previous runs, so that the FAILED files you have on disk are always only the ones related to the last apt run, stopping apt from hoarding files. Closes: 846476
2016-12-16: reword "Can't drop priv" warning message (David Kalnischkies)
Note: This is a warning about disabling a security feature. It is supposed to be scary as we are disabling a security feature and we can't just be silent about it! Downloads really shouldn't happen any longer as root to decrease the attack surface – but if a warning causes that much uproar, consider what an error would do… The old WARNING message:
| W: Can't drop privileges for downloading as file 'foobar' couldn't be
| accessed by user '_apt'. - pkgAcquire::Run (13: Permission denied)
is frequently (incorrectly) considered to be an error message indicating that the download didn't happen, which isn't the case: it was performed, just without all the security features we could have used if run from some other place… The word "unsandboxed" is chosen as the term 'sandbox(ed)' is a common encounter in feature lists/changelogs and more people are hopefully able to make the connection to 'security' than is the case for 'privilege dropping', which is more correct, but far less known. Closes: #813786 LP: #1522675
2016-11-25: get pdiff files from the same mirror as the index (David Kalnischkies)
In ad9416611ab83f7799f2dcb4bf7f3ef30e9fe6f8 we fall back to asking the original mirror (e.g. a redirector) if we do not get the expected result. This works for the indexes, but patches are a different beast and much simpler. Adding this fallback code here seems like overkill as they usually sit right alongside their Index file, so we instead forward the relevant settings to the patch items, which fixes pdiff support combined with a redirector and partial mirrors, as in such a situation the pdiff patches would be 404 and the complete index would be downloaded.
2016-09-02: acquire: Use priority queues and a 3 stage pipeline design (Julian Andres Klode)
Employ a priority queue instead of a normal queue to hold the items, and only add items to the running pipeline if their priority is the same or higher than the priority of items in the queue. The priorities are designed for a 3 stage pipeline system:

In stage 1, all Release files and .diff/Index files are fetched. This allows us to determine what files remain to be fetched, and thus ensures usable progress reporting. In stage 2, all pdiff patches are fetched, so we can apply them in parallel with fetching other files in stage 3. In stage 3, all other files are fetched (complete index files such as Contents, Packages).

Performance improvements come mainly from fetching the pdiff patches before complete files, so they can be applied in parallel: for the 01 Sep 2016 03:35:23 UTC -> 02 Sep 2016 09:25:37 update of Debian unstable and testing with Contents and appstream for amd64 and i386, update time was reduced from 37 seconds to 24-28 seconds. Previously, apt would first download new DEP11 icon tarballs and metadata files, leaving the CPU idle. By fetching the diffs in stage 2, we can now patch our Contents and Packages files while we are downloading the DEP11 stuff.
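As a rough sketch of the idea (item and priority names here are invented, not APT's real classes), a std::priority_queue with a three-level priority can be used so that metadata is always dequeued before patches, and patches before complete indexes:

    #include <iostream>
    #include <queue>
    #include <string>
    #include <vector>

    struct FetchItem
    {
       std::string Name;
       int Priority; // 1000: Release/.diff/Index, 500: pdiff patch, 0: everything else
    };

    struct ByPriority
    {
       bool operator()(FetchItem const &a, FetchItem const &b) const
       {
          return a.Priority < b.Priority; // max-heap: highest priority comes out first
       }
    };

    int main()
    {
       std::priority_queue<FetchItem, std::vector<FetchItem>, ByPriority> Items;
       Items.push({"Packages.xz", 0});
       Items.push({"InRelease", 1000});
       Items.push({"Packages.diff/Index", 1000});
       Items.push({"Packages pdiff patch", 500});
       while (!Items.empty())
       {
          std::cout << Items.top().Name << '\n'; // stage 1 items, then stage 2, then stage 3
          Items.pop();
       }
    }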
2016-08-27: Merge branch 'portability/freebsd' (Julian Andres Klode)
2016-08-26: Make root group configurable via ROOT_GROUP (Julian Andres Klode)
This is needed on BSD where root's default group is wheel, not root.
2016-08-26: Use C locale instead of C.UTF-8 for protocol strings (Julian Andres Klode)
The C.UTF-8 locale is not portable, so we need to use C, otherwise we crash on other systems. We can use std::locale::classic() for that, which might also be a bit cheaper than using locale("C").
2016-08-24: improve code & doc for acquire weak/loop failing (David Kalnischkies)
Improve-Upon: 2e2865ae53a65c00dd55a892d5b48458f3110366
Reported-By: Julian Andres Klode
Gbp-Dch: Ignore
2016-08-24: do fail on weakhash/loop earlier in acquire (David Kalnischkies)
The bugreport shows a segfault caused by the code not doing the correct magical dance to remove an item from inside a queue in all cases. We could try hard to fix this, but it is actually better and also easier to perform these checks (which cause instant failure) earlier, so that the items haven't entered the queue(s) yet, which in return makes cleanup trivial. The result is that we actually end up failing "too early", as, if we weren't careful, download errors would be logged before that process was even started. Not a problem for the acquire system, but likely to confuse users and programs alike if they see the download process producing errors before apt was technically allowed to do an acquire (it didn't, so no violation, but it looks like it to the untrained eye). Closes: 835195
2016-08-23: prevent C++ locale number formatting in text APIs (try 3) (David Kalnischkies)
This time it is the formatting of floating point numbers in progress reporting, with a radix character potentially not being dot. Followup of 7303e11ff28f920a6277c159aa46f80c007350bb. Regression of b58e2c7c56b1416a343e81f9f80cb1f02c128e25 in so far as it exchanged very affected with slightly less affected code. LP: 1611010
2016-05-27: prevent C++ locale number formatting in text APIs (David Kalnischkies)
Setting the C++ locale via std::locale::global(std::locale("")); – which would otherwise default to the default C locale (aka: unaffected by setlocale) – affects the formatting of numeric types in IO streams, which for output for humans is perfectly sensible, but breaks our many text interfaces used and parsed by us and others without expecting the numbers to be formatted. Closes: #825396
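A small demonstration of the effect and the usual fix, assuming a locale such as de_DE.UTF-8 is installed (the locale name is only an example): a stream imbued with the global user locale may insert grouping separators, while a stream imbued with the classic "C" locale keeps the plain format that text protocols expect.

    #include <iostream>
    #include <locale>
    #include <sstream>
    #include <stdexcept>

    int main()
    {
       try { std::locale::global(std::locale("de_DE.UTF-8")); }
       catch (std::runtime_error const &) { /* locale not installed: both outputs stay plain */ }

       std::ostringstream human, protocol;
       human.imbue(std::locale());              // global (user) locale: may print "1.234.567"
       protocol.imbue(std::locale::classic());  // always plain "C" formatting: "1234567"
       human << 1234567;
       protocol << 1234567;
       std::cout << "human:    " << human.str() << "\n"
                 << "protocol: " << protocol.str() << "\n";
    }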
2016-05-27: fix and document on the fly compressor config (David Kalnischkies)
libapt allows configuring the compressors to be used by its system via configuration implemented in 03bef78461c6f443187b60799402624326843396, but that was never really documented and also only partly working, which also explains why the tests weren't using it…
2016-05-07: delay progress until Release files are downloaded (David Kalnischkies)
Progress reporting used an "upper bound" on files we might get, except that this wasn't correct in case pdiff entered the picture. So instead of calculating a value which is perhaps incorrect, we just accept that we can't tell how many files we are going to download and simply stay at 0% until we know. Additionally, if we have pdiffs we wait until we got these (sub)index files, too. That could all be done better by downloading all Release files first and planning with them in hand accordingly, but one step at a time.
2016-04-25: make random acquire queues work less random (David Kalnischkies)
Queues feeding workers like rred are created in a random pattern to get a few of them to run in parallel – but if we already have an idle queue we don't need to assign the work to a (potentially new) random queue: reusing the idle queue saves us the (arguably small) overhead of starting up a new queue, avoids adding jobs to an already busy queue while others idle, and as a bonus reduces the size of debug logs a bit. We also keep starting new queues now until we reach our limit before we assign work at random to them, which should give us a more effective utilisation overall compared to potentially adding work to busy queues while we haven't reached our queue limit yet.
2016-03-16: Get accurate progress reporting in apt update again (Michael Vogt)
For the non-pdiff case, we can have accurate progress reporting because after fetching the {,In}Release files we know how many IndexFiles will be fetched and what size they have. Therefore init the filesize early (in pkgAcqIndex::Init) and ensure that Acquire::Pulse() looks at already downloaded bits when calculating the progress. Also improve the debug output of Debug::acquire::progress.
2016-02-11: always download changelogs into /tmp first (David Kalnischkies)
pkgAcqChangelog has the default behaviour of downloading a changelog to a temporary directory (inside /tmp, not /tmp directly), which is cleaned up on shutdown, but this can be overridden to store the changelog more permanently – that carries a permission problem though. For changelogs we can 'easily' solve this by always downloading to a temporary directory and only moving the file out of there once done.
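A hedged sketch of the download-to-/tmp-first idea (names and paths are illustrative, not pkgAcqChangelog's actual ones): mkdtemp() creates a private directory under /tmp, the changelog is fetched into it, and only after a successful download is the file moved to its final destination.

    #include <stdlib.h>   // mkdtemp (POSIX)
    #include <cstdio>     // std::fopen, std::rename, perror
    #include <string>

    int main()
    {
       char Template[] = "/tmp/apt-changelog-XXXXXX";
       char *TempDir = mkdtemp(Template);           // e.g. /tmp/apt-changelog-a1B2c3
       if (TempDir == nullptr)
       {
          perror("mkdtemp");
          return 1;
       }
       std::string const Partial = std::string(TempDir) + "/changelog";
       if (FILE *f = std::fopen(Partial.c_str(), "w")) // stands in for the real download
       {
          std::fputs("example changelog entry\n", f);
          std::fclose(f);
       }
       // move it out of /tmp only once the download finished successfully
       // (a real implementation has to copy instead if /tmp is a different filesystem)
       if (std::rename(Partial.c_str(), "./changelog") != 0)
          perror("rename");
       return 0;
    }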
2016-01-15: revert file-hash based action-merging in acquire (David Kalnischkies)
Introduced in 9d2a8a7388cf3b0bbbe92f6b0b30a533e1167f40, apt tries to merge actions like downloading the same (as judged by hashes) file into doing it once. The implementation was very simple in that it doesn't do any planning at all. Turns out that it works 90% of the time just fine, but has issues in more complicated situations in which items can be in different stages downloading different files, emitting potentially the "wrong" hash – like while pdiffs are worked on we might end up copying the patch instead of the result file, giving us very strange errors in return. Reverting the change until we can implement a better planning solution seems to be the best course of action, even if it's sad. Closes: 810046
2016-01-07: acquire: Allow parallelizing methods without hosts (Julian Andres Klode)
The maximum parallelization soft limit is the number of CPU cores * 2 on systems defining _SC_NPROCESSORS_ONLN. The hard limit in all cases is Acquire::QueueHost::Limit.
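A sketch of how such a soft limit can be derived (the fallback value here is made up for illustration; the hard limit remains whatever Acquire::QueueHost::Limit is configured to):

    #include <unistd.h>
    #include <iostream>

    int main()
    {
       long SoftLimit = 10; // illustrative fallback if the core count is unavailable
    #ifdef _SC_NPROCESSORS_ONLN
       long const Cores = sysconf(_SC_NPROCESSORS_ONLN);
       if (Cores > 0)
          SoftLimit = Cores * 2;
    #endif
       std::cout << "parallelization soft limit: " << SoftLimit << '\n';
    }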
2015-12-07: Use 0llu instead of 0ull in one place too (Julian Andres Klode)
Gbp-Dch: ignore
2015-12-07: Avoid overflow when summing up file sizes (Julian Andres Klode)
We need to pass 0llu instead of 0 as the init value, otherwise std::accumulate will calculate with ints. Reported-by: Raphaël Hertzog
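The distinction matters because std::accumulate deduces its accumulator type from the init argument, so a plain 0 makes it sum in int even when the range holds 64-bit file sizes. A small illustration (the values are arbitrary):

    #include <iostream>
    #include <numeric>
    #include <vector>

    int main()
    {
       std::vector<unsigned long long> const FileSizes = {3000000000ULL, 3000000000ULL};
       // accumulator is int: each partial sum gets truncated back into an int
       auto const Broken = std::accumulate(FileSizes.begin(), FileSizes.end(), 0);
       // accumulator is unsigned long long: the full sum fits
       auto const Correct = std::accumulate(FileSizes.begin(), FileSizes.end(), 0llu);
       std::cout << Broken << " vs " << Correct << '\n'; // only the second prints 6000000000
    }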
2015-11-27: Check if the Apt::Sandbox::User exists in CheckDropPrivsMustBeDisabled() (Michael Vogt)
If it does not exist, disable priv dropping as there is nothing we can drop to. This will unblock people with special chroots or systems that deleted the "_apt" user. Closes: #806406
2015-11-27: Deal with killed acquire methods properly instead of hanging (Michael Vogt)
This fixes a regression caused by commit 95278287f4e1eeaf5d96749d6fc9bfc53fb400d0 that moved the error detection of RunFds() later into the loop. However this broke detecting issues like dead acquire methods. Instead of relying on the global error state (which is bad) we now pass a boolean value back from RunFds() and break on false. Closes: #806406
2015-11-19: ignore lost+found in private directory cleanup (David Kalnischkies)
In ce1f3a2c we started warning about failing to unlink, which consistently happens for directories. That isn't a problem as directories usually aren't in the places we want to clean up – with the potential exception of "lost+found", so let's ignore it like we ignore our own partial/ subdirectory. Closes: 805424
2015-11-19: do not use _apt for file/copy sources if it isn't world-accessible (David Kalnischkies)
In 0940230d we started dropping privileges for file (and a bit later for copy, too) with the intent of uniforming this for all methods. The commit message says that the source will likely fail based on the compressors already – and there isn't much secret in the repository content. After all, after apt has run the update everyone can access the content via apt anyway… There are sources though which worked before, mostly single-deb ones (and those with the uncompressed files available). The first one especially may be surprising for users, so instead of failing, we make it so that apt detects that it can't access a source as _apt and if so doesn't drop privileges (for all sources!) – but we limit this to file/copy, so the uncompress which might be needed will still fail – but that failed before this regression, too. We display a notice about this, mostly so that if it still fails (e.g. compressed) the user has some idea what is wrong. Closes: 805069
2015-11-04: wrap every unlink call to check for != /dev/null (David Kalnischkies)
Unlinking /dev/null is bad, we shouldn't do that. Also, we should print at least a warning if we tried to unlink a file but didn't manage to pull it off (ignoring the case where the file is /dev/null or doesn't exist in the first place). This got triggered by a relatively unlikely-to-trigger problem in pkgAcquire::Worker::PrepareFiles, which would, while handling temporary uncompressed files (which are set to keep compressed), figure out that two files are the same and prepare for sharing by deleting them. Bad move. That also shows why not printing a warning is a bad idea, as this hid the error in non-root test runs. Git-Dch: Ignore
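A minimal sketch of such a checked wrapper (this is not APT's actual helper; the name is invented): skip /dev/null entirely, treat a missing file as success, and warn about everything else.

    #include <unistd.h>
    #include <cerrno>
    #include <cstring>
    #include <iostream>
    #include <string>

    static bool CheckedUnlink(std::string const &Path)
    {
       if (Path.empty() || Path == "/dev/null")
          return true; // never try to remove /dev/null
       if (unlink(Path.c_str()) == 0 || errno == ENOENT)
          return true; // removed, or it was already gone
       std::cerr << "W: could not unlink " << Path << ": " << std::strerror(errno) << '\n';
       return false;
    }

    int main()
    {
       CheckedUnlink("/dev/null");            // silently skipped
       CheckedUnlink("/tmp/does-not-exist");  // ENOENT is not treated as an error either
    }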
2015-09-14: use std-algorithms instead of manual loops to avoid overflow warning (David Kalnischkies)
Reported-By: gcc
Understandable: no
Git-Dch: Ignore
2015-09-14: fix 'Dead assignment' by dropping unneeded boolean (David Kalnischkies)
Reported-By: scan-build
Git-Dch: Ignore
2015-09-14: avoid using global PendingError to avoid failing too often too soon (David Kalnischkies)
Our error reporting has historically grown into some kind of mess. A while ago I implemented stacking for the global error, which is used in this commit now to wrap calls to functions which do not report (all) errors via return, so that only failures in those calls cause a failure to propagate down the chain, rather than failing if anything (potentially totally unrelated) has failed at some point in the past. This way we can avoid stopping the entire acquire process just because a single source produced an error, for example. It also means that after the acquire process the cache is generated – even if the acquire process had failures – as we still have the old good data around we can and should generate a cache for (again). There are probably more instances of this hiding, but all these looked like the easiest to work with and fix with reasonable (aka net-positive) effects.
2015-09-01: improve CheckDropPrivsMustBeDisabled further (David Kalnischkies)
Various smaller improvements so that the check deals better with already downloaded files, relative paths and other things. Git-Dch: Ignore
2015-08-31: if file is inaccessible for _apt, disable privilege drop in acquire (David Kalnischkies)
We had a very similar method previously for our own private usage, but with some generalisation we can move this check into the acquire system proper so that all frontends profit from this compatibility change. As we are disabling a security feature here a warning is issued and frontends are advised to consider reworking their download logic if possible. Note that this is implemented as an all or nothing situation: We can't just (not) drop privileges for a subset of the files in a fetcher, so in case you have to download some files with and some without you need to use two fetchers.
2015-08-17: Fix all the wrong removals of includes that iwyu got wrong (Michael Vogt)
Git-Dch: ignore
2015-08-17: Cleanup includes after running iwyu (Michael Vogt)
2015-08-10: make all d-pointer * const pointers (David Kalnischkies)
Doing this disables the implicit copy assignment operator (among others), which would cause havoc if used on these classes as it would just copy the pointer, not the data the d-pointer points to. For most of the classes we don't need a copy assignment operator anyway, and in many classes it was broken before as many contain a pointer of some sort. Only for our Cacheset Container interfaces do we define an explicit copy assignment operator, which could later be implemented to copy the data from one d-pointer to the other if we need it. Git-Dch: Ignore
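Illustrated with an invented class (not one of APT's): declaring the d-pointer as '* const' makes the compiler drop the implicit copy assignment operator, so the shallow pointer copy described above simply cannot compile anymore.

    struct Example
    {
       struct Private { int Value; };
       Private * const d;                  // const pointer: copy assignment is implicitly deleted
       Example() : d(new Private{42}) {}
       Example(Example const &) = delete;  // forbid shallow copy construction as well
       virtual ~Example() { delete d; }    // cleans up the private data
    };

    int main()
    {
       Example a, b;
       // a = b;  // error: use of deleted function 'Example& operator=(const Example&)'
       return (a.d->Value == b.d->Value) ? 0 : 1;
    }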
2015-08-10: apply various style suggestions by cppcheck (David Kalnischkies)
Some of them modify the ABI, but given that we prepare a big one already, these few hardly count for much. Git-Dch: Ignore
2015-06-16: add d-pointer, virtual destructors and de-inline de/constructors (David Kalnischkies)
To have a chance to keep the ABI for a while we need all three to team up. One of them missing and we might lose, so ensuring that they are available is a very tedious but needed task once in a while. Git-Dch: Ignore
2015-06-15: allow ratelimiting progress reporting for testcases (David Kalnischkies)
Progress is reported once in a while, which is a bit too unpredictable for testcases, so we enforce a steady progress for them in the hope that this makes the tests (mostly test-apt-progress-fd) a bit more stable. Git-Dch: Ignore
2015-06-15: condense parallel requests with the same hashes to one (David Kalnischkies)
It shouldn't be too common, but sometimes people have multiple mirrors in the sources or otherwise repositories with the same content. Now that we can gracefully handle multiple requests to the same URI, we can also fold multiple requests with the same expected hashes into one. Note that this isn't trying to find opportunities for merging, but just merges if it happens to encounter the opportunity for it. This is most obvious in the new testcase actually, as it needs to delay the action to give the acquire system enough time to figure out that they can be merged.
2015-06-15: deal better with acquiring the same URI multiple times (David Kalnischkies)
This is an unlikely event for indexes and co, but it can happen quite easily e.g. for changelogs where you want to get the changelogs for multiple binary package(version)s which happen to all be built from a single source. The interesting part is that the Acquire system actually detected this already and set the item requesting the URI again to StatDone – except that this is hardly sufficient: an Item must be Complete=true as well to be considered truly done and that is only the tip of the ::Done handling iceberg. So instead of this StatDone hack we allow QItems to be owned by multiple items and notify all owners about everything now, so that from the point of view of each item they got it downloaded just for them.
2014-11-18: create our cache and lib directory always with mode 755 (David Kalnischkies)
We have been auto-creating for a while now the last two directories in /var/lib/apt/lists (similar for /var/cache/apt/archives), which is very nice for systems having any of those on tmpfs or other non-persistent storage. This also means though that this creation is affected by the default umask, so for people with aggressive umasks like 027 the directories will be created with 750, which means all non-root users are locked out. That is usually exactly what we want when this umask is set, but the cache and lib directories contain public knowledge: there isn't any need to protect them from viewers and they render apt completely useless if not readable.
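A sketch of the technique under an assumed helper name (the path is illustrative): create the directory and then chmod it to 0755 explicitly, because the mode passed to mkdir() is filtered through the process umask.

    #include <sys/stat.h>
    #include <sys/types.h>
    #include <cerrno>
    #include <cstdio>

    static bool CreatePublicDirectory(char const *Path)
    {
       if (mkdir(Path, 0755) != 0 && errno != EEXIST)
          return false;
       // mkdir's mode is masked by the umask (e.g. 027 -> 750), so enforce 0755 afterwards
       return chmod(Path, 0755) == 0;
    }

    int main()
    {
       if (!CreatePublicDirectory("/tmp/apt-lists-example"))
          perror("CreatePublicDirectory");
    }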
2014-11-09: use pkgAcquire::GetLock instead of own code (David Kalnischkies)
Do the same with less code in apt-get. This especially ensures that the lock file (and the parent directories) exist before we are trying to lock. It also means that clean now creates the directories if they are missing, so we return to a proper clean state now. Git-Dch: Ignore
2014-10-23: chown finished partial files earlier (David Kalnischkies)
Partial files are chowned by the Item baseclass to let the methods work with them. Now, this baseclass is also responsible for chowning the files back to root instead of having various deeper levels do this. The consequence is that all overloaded Failed() methods now call the Item::Failed base as their first step. The same is done for Done(). The effect is that files even in partial/ usually don't belong to _apt anymore, helping sneakernets and reducing the possibilities of a bad method modifying files not belonging to it. The change is supported by the framework not only supporting being run as root, but with proper permission management, too, so that privilege dropping can be tested with it.
2014-10-22: check that auth.conf exists before chowning it (David Kalnischkies)
Git-Dch: Ignore
2014-10-21: Ensure /etc/apt/auth.conf has _apt:root owner (Michael Vogt)
Ensure in SetupAPTPartialDirectory() that the /etc/apt/auth.conf file can be read by the priv sep apt methods.