path: root/test/integration/test-apt-update-not-modified
Age  Commit message  Author
2017-07-26  fail early in http if server answer is too small as well  (David Kalnischkies)
Failing on too much data is good, but we can do better by checking for exact filesizes, as we know from the hashsums how large a file should be. If we get a file whose size we do not expect, we can drop it directly, regardless of whether it is larger or smaller than expected. That catches, a lot sooner, most cases which would otherwise end up as hashsum errors later.
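The filesize check described here boils down to comparing the size a server announces (or delivers) against the size recorded alongside the hashsums in the repository metadata. A minimal standalone sketch of that idea, with an invented name (CheckAnnouncedSize) rather than apt's actual http method code:

    #include <cstdint>
    #include <iostream>
    #include <optional>

    // Hypothetical check: reject a transfer as soon as the announced size
    // disagrees with the size we already know from the repository metadata.
    bool CheckAnnouncedSize(std::optional<uint64_t> announced, uint64_t expected) {
        if (!announced.has_value())
            return true;              // no Content-Length: fall back to later checks
        if (*announced != expected) {
            std::cerr << "Bad filesize: got " << *announced
                      << ", expected " << expected << '\n';
            return false;             // drop the transfer early, no need to download
        }
        return true;
    }

    int main() {
        // Example: metadata says the index is 1024 bytes, server announces 2048.
        std::cout << std::boolalpha << CheckAnnouncedSize(2048, 1024) << '\n'; // false
        std::cout << std::boolalpha << CheckAnnouncedSize(1024, 1024) << '\n'; // true
    }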
2016-11-09  rename Checksum-FileSize to Filesize in hashsum mismatch  (David Kalnischkies)
Some people do not recognize the field value with such an arcane name and/or expect it to refer to something different (e.g. #839257). We can't just rename it internally, as the arcane name is an avoidance strategy: such a fieldname existed previously with less clear semantics. But we can spare the general public from this implementation detail.
2016-06-22  add [weak] tag to hash errors to indicate insufficiency  (David Kalnischkies)
For "Hash Sum mismatch" that info doesn't make a whole lot of difference, but for the new insufficient-info message it is an indicator that, while these hashes are there and even match, they aren't enough from a security standpoint.
2016-05-04  tests: disable generation of Release.gpg by default  (David Kalnischkies)
Most tests just need a signed repository and don't care if it is signed by an InRelease file or a Release.gpg file, so we can save some time by generating only one of them by default. Sounds like not much, but it quickly adds up to a few seconds with the amount of tests we have accumulated by now. Git-Dch: Ignore
2016-04-25  show more details for "Hash Sum mismatch" errors  (David Kalnischkies)
Users tend to report these errors with just this error message… not very actionable, and hard to figure out whether this is a temporary or 'permanent' mirror-sync issue or even the occasional apt bug. Showing the involved hashsums and modification times should help in triaging these kinds of bugs – and eventually we will have fewer of them via by-hash. The subheaders aren't marked for translation for now as they are technical gibberish and probably easier to deal with if not translated. After all, our iconic "Hash Sum mismatch" is translated at least. These additions were proposed in #817240 by Peter Palfrader.
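A rough idea of what such a detailed report amounts to: printing the expected versus the received hashes and modification times below the familiar error line. The structure and names below (MismatchReport-style output, HashLine, PrintMismatch) are purely illustrative, not apt's actual output format or code:

    #include <iostream>
    #include <string>
    #include <vector>

    // Illustrative record of one hash comparison for a fetched file.
    struct HashLine { std::string type, expected, actual; };

    // Print a "Hash Sum mismatch" style report with the details that make
    // the error actionable: which hash differs and how old each side is.
    void PrintMismatch(const std::string &uri, const std::vector<HashLine> &hashes,
                       const std::string &expectedMTime, const std::string &actualMTime) {
        std::cerr << "Hash Sum mismatch for " << uri << '\n';
        std::cerr << "   Hashes of expected file:\n";
        for (const auto &h : hashes)
            std::cerr << "    - " << h.type << ":" << h.expected << '\n';
        std::cerr << "   Hashes of received file:\n";
        for (const auto &h : hashes)
            std::cerr << "    - " << h.type << ":" << h.actual << '\n';
        std::cerr << "   Last modification reported: " << actualMTime
                  << " (expected " << expectedMTime << ")\n";
    }

    int main() {
        // Hypothetical URI and hash values, for illustration only.
        PrintMismatch("http://example.org/dists/stable/main/binary-amd64/Packages.xz",
                      {{"SHA256", "aaaa1111", "bbbb2222"}},
                      "Tue, 01 Mar 2016 10:00:00 UTC", "Tue, 01 Mar 2016 12:00:00 UTC");
    }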
2016-03-16  Report non-transient errors as errors, not as warnings  (Julian Andres Klode)
This makes it easier to understand what is really an error and what is not.
2016-03-16  Get accurate progress reporting in apt update again  (Michael Vogt)
For the non-pdiff case we can have accurate progress reporting, because after fetching the {,In}Release files we know how many index files will be fetched and what size they have. Therefore init the filesize early (in pkgAcqIndex::Init) and ensure that Acquire::Pulse() looks at already-downloaded bits when calculating the progress. Also improve the debug output of Debug::acquire::progress.
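The progress formula alluded to here is essentially bytes-already-fetched (including partially downloaded files) over total expected bytes, which is only known once the Release files have revealed every index size. A small sketch under those assumptions, not apt's actual Pulse() code:

    #include <algorithm>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    // One file in the fetch queue: how large it will be and how much has arrived.
    struct QueuedFile { uint64_t expectedSize; uint64_t fetchedSoFar; };

    // Hypothetical progress calculation: counts partially downloaded bytes too,
    // so the bar does not jump when a large index finally completes.
    double Progress(const std::vector<QueuedFile> &queue) {
        uint64_t total = 0, done = 0;
        for (const auto &f : queue) {
            total += f.expectedSize;
            done  += std::min(f.fetchedSoFar, f.expectedSize);
        }
        return total == 0 ? 100.0
                          : 100.0 * static_cast<double>(done) / static_cast<double>(total);
    }

    int main() {
        std::vector<QueuedFile> queue = {{1000, 1000}, {4000, 1000}, {5000, 0}};
        std::cout << Progress(queue) << "%\n";   // 20%
    }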
2015-12-19  tests: support spaces in path and TMPDIR  (David Kalnischkies)
This doesn't allow all tests to run cleanly, but it at least allows writing tests which could run successfully in such environments. Git-Dch: Ignore
2015-11-04  support arch:all data e.g. in separate Packages file  (David Kalnischkies)
Based on a discussion with Niels Thykier, who asked for Contents-all, this implements apt trying, for every architecture-dependent file, to also get a file for the architecture 'all', which is treated internally now as an official architecture that is always around (like native). This way arch:all data can be shared instead of duplicated for each architecture, which would require the user to download the same information again and again.

There is one problem however: In Debian there is already a binary-all/ Packages file, but the binary-any files still include arch:all packages, so downloading this file now would be a waste of time, bandwidth and diskspace. We therefore need a way to decide whether it makes sense to download the all file for Packages in Debian or not. The obvious answer would be a special flag in the Release file indicating this, which would need to default to 'no' and which every reasonable repository would override to 'yes' in a few years time, but the flag would be there "forever".

Looking closer at a Release file we see the field "Architectures", which doesn't include 'all' at the moment. With the idea outlined above that 'all' is a "proper" architecture now, we interpret this field as authoritative in declaring which architectures are supported by this repository. If it says 'all', apt will try to get it; if not, it will be skipped (see the sketch after this entry).

This gives us another interesting feature: If I configure a source to download armel and mips, but it declares it supports only armel, apt will now print a notice saying as much. Previously this was a very cryptic failure. If on the other hand the repository supports mips, too, but for some reason doesn't ship mips packages at the moment, this 'missing' file is silently ignored (= the same as the repository including an empty file).

The Architectures field isn't mandatory though, so if it isn't there we assume that every architecture is supported by this repository; arch:all is only skipped if the field is present but does not list it.
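The decision rule above condenses to: fetch the arch:all variant unless the Release file's Architectures field is present and does not list 'all'. A compact sketch of that rule with a made-up name (ShouldFetchAll), not apt's real sources:

    #include <algorithm>
    #include <iostream>
    #include <optional>
    #include <string>
    #include <vector>

    // Hypothetical decision: fetch the arch:all index only if the repository
    // declares support for it. An absent Architectures field is read as
    // "supports everything", so arch:all is fetched in that case too.
    bool ShouldFetchAll(const std::optional<std::vector<std::string>> &architectures) {
        if (!architectures.has_value())
            return true;   // field missing: assume every architecture is supported
        return std::find(architectures->begin(), architectures->end(), "all")
               != architectures->end();
    }

    int main() {
        std::vector<std::string> withAll = { "amd64", "armel", "all" };
        std::vector<std::string> without = { "amd64", "armel" };
        std::cout << std::boolalpha
                  << ShouldFetchAll(withAll)      << '\n'   // true: download arch:all
                  << ShouldFetchAll(without)      << '\n'   // false: skip it
                  << ShouldFetchAll(std::nullopt) << '\n';  // true: field not present
    }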
2015-09-15  tests: don't use hardcoded port for http and https  (David Kalnischkies)
This allows running tests in parallel. Git-Dch: Ignore
2015-09-14  avoid using global PendingError to avoid failing too often too soon  (David Kalnischkies)
Our error reporting has historically grown into some kind of mess. A while ago I implemented stacking for the global error, which is used in this commit to wrap calls to functions which do not report (all) errors via their return value, so that only failures in those calls cause a failure to propagate down the chain, rather than failing if anything (potentially totally unrelated) has failed at some point in the past. This way we can avoid stopping the entire acquire process just because a single source produced an error, for example. It also means that after the acquire process the cache is generated – even if the acquire process had failures – as we still have the old good data around which we can and should generate a cache for (again). There are probably more instances of this hiding, but these looked like the easiest to work with and fix with reasonable (aka net-positive) effects.
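Error stacking of the kind described here means pushing a fresh error scope before the wrapped call and only reacting to errors raised inside that scope. apt's real GlobalError class differs in detail; this is a generic illustration of the pattern with an invented ErrorStack type:

    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    // A toy error collector that supports stacking: errors raised while a
    // scope is pushed are kept separate from older, unrelated errors.
    class ErrorStack {
        std::vector<std::vector<std::string>> frames{ {} };
    public:
        void Push()               { frames.emplace_back(); }
        void Error(std::string m) { frames.back().push_back(std::move(m)); }
        bool PendingError() const { return !frames.back().empty(); }
        // Merge the topmost frame back into its parent and pop it.
        void Merge() {
            auto top = std::move(frames.back());
            frames.pop_back();
            for (auto &m : top) frames.back().push_back(std::move(m));
        }
    };

    int main() {
        ErrorStack err;
        err.Error("old, unrelated failure");       // happened earlier in the run

        err.Push();                                 // isolate the next call
        bool sourceFailed = false;                  // pretend the fetch succeeded
        if (sourceFailed) err.Error("fetch failed");
        bool thisCallFailed = err.PendingError();   // only looks at the new frame
        err.Merge();

        std::cout << std::boolalpha << thisCallFailed << '\n';  // false: carry on
    }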
2015-08-10  drop extra newline in 'Failed to fetch' and 'GPG error' message  (David Kalnischkies)
I never understood why there is an extra newline in those messages, so now is as good a time as any to drop them. Let's see if someone complains with a good reason to keep it…
2015-06-15  show item ID in Hit, Ign and Err lines as well  (David Kalnischkies)
Again, consistency is the main selling point here, but this way it is now also easier to explain that some files move through different stages and that lines are hence printed for them multiple times: that is a bit hard to believe if the number changes all the time, but easy now that it stays consistent.
2015-06-09  do not request files if we expect an IMS hit  (David Kalnischkies)
If we have a file on disk and its hashes are the same in the new Release file and in the old one we have on disk, we know that if we ask the server for the file we will at best get an IMS hit – at worst the server doesn't support this and sends us the (unchanged) file, and we have to run all our checks on it again for nothing. So, we can save ourselves (and the servers) some unneeded requests if we figure this out on our own.
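In effect the request is skipped whenever the new Release file records exactly the same hashes for an index as the previous Release file did, and the file is still present locally. A schematic decision helper with a made-up name (ExpectImsHit), not apt's code:

    #include <iostream>
    #include <optional>
    #include <string>

    // Hypothetical helper: if the old and new Release files record identical
    // hashes for an index we already have on disk, asking the server can at
    // best yield "304 Not Modified", so the request can be skipped entirely.
    bool ExpectImsHit(const std::optional<std::string> &oldReleaseHash,
                      const std::string &newReleaseHash,
                      bool fileExistsLocally) {
        return fileExistsLocally
            && oldReleaseHash.has_value()
            && *oldReleaseHash == newReleaseHash;
    }

    int main() {
        std::cout << std::boolalpha
                  << ExpectImsHit(std::string("sha256:abc"), "sha256:abc", true) << '\n'  // skip request
                  << ExpectImsHit(std::string("sha256:abc"), "sha256:def", true) << '\n'  // fetch
                  << ExpectImsHit(std::nullopt,              "sha256:abc", true) << '\n'; // fetch
    }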
2015-06-09  rework hashsum verification in the acquire system  (David Kalnischkies)
Having every item carry its own code to verify the file(s) it handles is an error-prone process and easy to break, especially if items move through various stages (download, uncompress, patching, …). With a giant rework we centralize (most of) the verification to get a better enforcement rate and (hopefully) less chance for bugs, but it breaks the ABI big time in exchange – and as we break it anyway, it is broken even harder. It shouldn't affect most frontends as they don't deal with the acquire system at all or implement their own items, but some do and will need to be patched (might be an opportunity to use apt on-board material). The theory is simple: Items implement methods to decide if hashes need to be checked (in this stage) and to return the expected hashes for this item (in this stage). The verification itself is done in worker message passing, which has the benefit that a hashsum error is now a proper error for the acquire system rather than a Done() which is later revised to a Failed().
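The "theory is simple" part translates roughly into a small interface on each item: one hook saying whether this stage must be hash-checked, one returning the hashes to check against, and a central place that does the comparison. The class and method names below are invented for illustration; apt's actual acquire items differ:

    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    // Invented stand-ins for an acquire item and its expected hashes.
    struct HashList { std::vector<std::pair<std::string, std::string>> entries; };

    class Item {
    public:
        virtual ~Item() = default;
        virtual bool HashesRequired() const { return true; }   // check in this stage?
        virtual HashList ExpectedHashes() const = 0;            // what to check against
    };

    // Centralized verification: items no longer verify themselves; the acquire
    // system compares the hashes the worker computed with what the item expects.
    bool Verify(const Item &item, const HashList &computed) {
        if (!item.HashesRequired())
            return true;
        return item.ExpectedHashes().entries == computed.entries;
    }

    class PackagesIndex : public Item {
    public:
        HashList ExpectedHashes() const override {
            return {{{"SHA256", "abc123"}}};   // hypothetical expected hash
        }
    };

    int main() {
        PackagesIndex idx;
        HashList good{{{"SHA256", "abc123"}}}, bad{{{"SHA256", "000000"}}};
        std::cout << std::boolalpha << Verify(idx, good) << ' ' << Verify(idx, bad) << '\n';
    }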
2015-06-07  don't try other compressions on hashsum mismatch  (David Kalnischkies)
If we fail hash verification for e.g. Packages.xz, it is highly unlikely that it will be any better with Packages.gz, so we just waste download bandwidth and time. It also causes us to always fall back to the uncompressed Packages file, for which the error is finally reported, which in turn confuses users, as the file usually doesn't exist on the mirrors, so a bug in apt is suspected for even trying it…
2015-05-18  treat older Release files than we already have as an IMSHit  (David Kalnischkies)
Valid-Until protects us from long-living downgrade attacks, but not all repositories have it, and an attacker could still use older but still valid files to downgrade us. While this makes it sound like a security improvement, it is at best a bit theoretical, as an attacker with the capabilities to pull this off could just as well always keep us days behind (but within the valid period) and always knows which state we have, as we tell him with the If-Modified-Since header. This is also why this is 'silently' ignored and treated as an IMSHit rather than screamed at the user, as it can at best be an annoyance for attackers. An error here would 'regularly' be encountered by users with out-of-sync mirrors serving a single run (e.g. behind a load balancer) or two consecutive runs, so it would just teach people to ignore it. That said, most of the code churn is caused by enforcing this additional requirement. Crisscross from InRelease to Release.gpg is e.g. very unlikely in practice, but if we ignored it an attacker could sidestep the check that way.
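Concretely, when the freshly downloaded Release file is dated older than the one already on disk, it is handled like a "304 Not Modified" answer instead of replacing the local copy. A schematic version of that comparison, using made-up names and plain time_t values rather than apt's Release parsing:

    #include <ctime>
    #include <iostream>

    // Made-up enum standing in for how the acquire system reacts to a
    // freshly fetched Release file.
    enum class ReleaseAction { UseNewFile, TreatAsIMSHit };

    // If the server hands us a Release file that is older than the one we
    // already trust, silently keep what we have instead of downgrading.
    ReleaseAction JudgeRelease(std::time_t newDate, std::time_t currentDate) {
        if (newDate < currentDate)
            return ReleaseAction::TreatAsIMSHit;   // pretend "304 Not Modified"
        return ReleaseAction::UseNewFile;
    }

    int main() {
        std::time_t now = std::time(nullptr);
        std::cout << (JudgeRelease(now - 86400, now) == ReleaseAction::TreatAsIMSHit) << '\n'; // 1
        std::cout << (JudgeRelease(now, now - 86400) == ReleaseAction::UseNewFile)    << '\n'; // 1
    }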
2015-05-13  detect Releasefile IMS hits even if the server doesn't  (David Kalnischkies)
Not all servers we are talking to support If-Modified-Since, and some are not even sending Last-Modified for us, so in an effort to detect such hits we run a hashsum check comparing the 'old' with the 'new' file. We got the hashes for the 'new' file already for "free" from the methods anyway, and hence just need to calculate the old ones. This allows us to detect hits even with unsupported servers, which in turn means we benefit from all the new hit behavior here as well.
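Detecting the hit without server help just means hashing the copy already on disk and comparing it with the hashes computed for the download. A toy version that uses std::hash over the file contents purely as a stand-in for apt's real cryptographic hashsums, with hypothetical file paths:

    #include <fstream>
    #include <functional>
    #include <iostream>
    #include <sstream>
    #include <string>

    // Toy stand-in for a real hashsum: read a file and hash its contents.
    // (apt uses proper cryptographic hashes, of course.)
    std::size_t HashFile(const std::string &path) {
        std::ifstream in(path, std::ios::binary);
        std::ostringstream buf;
        buf << in.rdbuf();
        return std::hash<std::string>{}(buf.str());
    }

    int main() {
        // Hypothetical paths: the Release file we already have and the one we
        // just downloaded from a server that sent neither 304 nor Last-Modified.
        std::size_t oldHash = HashFile("/var/lib/apt/lists/example_Release");
        std::size_t newHash = HashFile("/tmp/partial/example_Release.new");
        if (oldHash == newHash)
            std::cout << "treat as IMS hit: file unchanged\n";
        else
            std::cout << "file changed: continue with the new copy\n";
    }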
2015-04-19  a hit on Release files means the indexes will be hits too  (David Kalnischkies)
If we get an IMSHit for the transaction manager (= the InRelease file, or its still-supported fallback, the Release + Release.gpg combo), we can assume that every file we would queue based on this manager which we already have locally is current and hence would get an IMSHit too. We therefore save ourselves and the server the trouble and skip the queuing in this case. Besides speeding up repetitive executions of 'apt-get update' this way, we also avoid hitting hashsum errors if the indexes are in fact already updated but the Release file isn't yet, as is the case on well-behaving mirrors, since the Release file is updated last. The implementation is a bit harder than the theory makes it sound, as we still have to keep reverifying the Release files (e.g. to detect now-expired ones, to avoid an attacker being able to silently stale us) and have to handle cases in which the Release file hits but some indexes aren't present (e.g. the user added a new foreign architecture).
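The skip decision itself is small once the two relevant facts are known: did the transaction manager come back unchanged, and do we already hold the index locally. A sketch with invented names (IndexState, SkipQueuing), not apt's actual acquire code:

    #include <iostream>

    // Invented flags summarising the state this commit describes.
    struct IndexState {
        bool releaseWasIMSHit;   // InRelease (or Release+Release.gpg) came back unchanged
        bool haveLocalCopy;      // the index file already exists on disk
    };

    // If the Release file is unchanged, any index we already hold must be the
    // one it describes, so there is no point in queuing a download for it.
    // Missing indexes (e.g. a freshly added foreign architecture) still get queued.
    bool SkipQueuing(const IndexState &s) {
        return s.releaseWasIMSHit && s.haveLocalCopy;
    }

    int main() {
        std::cout << std::boolalpha
                  << SkipQueuing({true,  true})  << '\n'   // skip: nothing can have changed
                  << SkipQueuing({true,  false}) << '\n'   // fetch: new architecture enabled
                  << SkipQueuing({false, true})  << '\n';  // fetch: Release changed
    }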