path: root/test/integration
Age  Commit message  Author
2015-08-10  handle site-changing redirects as mirror changes  (David Kalnischkies)
Redirectors like httpredir.debian.org orchestrate the download from multiple (hopefully close) mirrors while having only a single central sources.list entry, by using redirects. This has the effect that the progress report always shows the source it started with, not the mirror it ends up fetching from. That is especially problematic for error reporting: a report of a "Hashsum mismatch" for the redirector URI is next to useless, as nobody can tell from this output alone which URI it was really fetched from (regardless of whether the report comes from a user or via the report script). You would need to enable debug output and hope for the same situation to arise again…

We hence reuse the UsedMirror field of the mirror:// method: we detect redirects which change the site and declare this new site as the UsedMirror (and adapt the description). The disadvantage is that there is no obvious mapping anymore from progress lines to sources.list lines (it is relatively easy to guess with some experience), so error messages need to take care to use the Target description (rather than the current Item description) if they want to refer to the sources.list entry.
2015-08-10  implement Signed-By without using gpg for verification  (David Kalnischkies)
The previous commit returns to the possibility of using just gpgv for verification purposes. There is one problem though: We can't enforce a specific keyid without using gpg, but our acquire method can, as it parses gpgv output anyway, so it can deal with good signatures from unexpected keys and treat them as unknown keys instead. Git-Dch: Ignore
2015-08-10  merge keyrings with cat instead of gpg in apt-key  (David Kalnischkies)
If all keyrings are simple keyrings we can merge them with cat rather than taking a detour over gpg --export | --import (see #790665), which means 'apt-key verify' can do without gpg and just use gpgv as it did before the merging change. We now declare this gpgv usage explicitly in the dependencies. This isn't a new dependency, as gnupg as well as debian-archive-keyring depend on it and we used it unconditionally before, we just didn't declare it.

The handling of the merged keyring needs to be slightly different, as our merged keyring can end up containing the same key multiple times and, at least currently, gpg removes only the first occurrence with --delete-keys, so we move the handling to "if one is gone, all are gone" rather than an (implicit) quid pro quo or even no effect.

Thanks: Daniel Kahn Gillmor for the suggestion
2015-08-10  support gpg 2.1.x in apt-key  (David Kalnischkies)
The output of gpg changes slightly in 2.1, which breaks the testcase, but the real problem is that this branch introduces a new default keyring format (called keybox), and mixing it with simple keyrings (the previous default format) has various problems, like failing keybox-to-keyring imports (#790665) or [older] gpgv versions not being able to deal with keyboxes (and newer versions as well currently: https://bugs.gnupg.org/gnupg/issue2025). We fix this by being a bit more careful about who creates keyrings (aka: we do it, or we take a simple keyring as base) to ensure we always have a keyring instead of a keybox. This way we support any version combination of gpgv/gpgv2 and gnupg/gnupg2 without explicit version checks and use the same code for all of them. Closes: 781042
2015-08-10  allow individual targets to be kept compressed  (David Kalnischkies)
There has been an option to keep all targets (Packages, Sources, …) compressed for a while now, but the all-or-nothing approach is a bit limited for our purposes with additional targets: some of them are very big (Contents) and rarely used in comparison, so keeping them compressed by default can make sense, while others are still unpacked.

The most interesting change is maybe the one to copy: Copy is used by the acquire system as an uncompressor and is hence expected to return the hashes of the "output", not the input. Now, in the case of keeping a file compressed, the output is never written to disk but generated in memory, and we should still validate it, so for compressed files copy is expected to return the hashes of the uncompressed file. We used to use the config option to enable on-the-fly decompression in the method, but in reality copy is never used in a way where it shouldn't decompress a compressed file to get its hashes, so we can save ourselves the trouble of sending this information to the method and just do it always.
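A hedged sketch of what such a per-target switch could look like in apt.conf style, next to the existing all-or-nothing Acquire::GzipIndexes option (the target name and option placement here are illustrative assumptions, not the exact scheme this commit uses):

    // illustrative: keep only the huge, rarely used Contents target compressed
    Acquire::IndexTargets::deb::Contents-deb::KeepCompressed "true";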
2015-08-10  implement Signed-By option for sources.list  (David Kalnischkies)
Limits which key(s) can be used to sign a repository. Not immensely useful from a security perspective all by itself, but if the user has additional measures in place to confine a repository (like pinning), an attacker who gets hold of the key for such a repository is limited to its potential and can't use the key to sign attacks for another (maybe less limited) repository… (yes, this is as weak as it sounds, but having the capability might come in handy for implementing other stuff later).
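As a sketch, such an option would be expressed on a sources.list line roughly like this (URI and key fingerprint are placeholders):

    deb [signed-by=0123456789ABCDEF0123456789ABCDEF01234567] https://deb.example.org/debian stable main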
2015-08-10  add sources.list Check-Valid-Until and Valid-Until-{Max,Min} options  (David Kalnischkies)
These options could be set via configuration before, but their connection to the actual sources is so strong that they should really be set in the sources.list instead – especially as this can be done a lot more specifically than e.g. disabling Valid-Until for all sources at once. The Valid-Until-* names are chosen instead of Min/Max-ValidTime as this seems like a better name, and their use in the wild is probably low enough that having two names for the same thing in different areas isn't going to confuse anyone. In the long run the config options should be removed, but for now documentation hinting at the new options is good enough, as these are the kind of options you set once across many systems with different apt versions, so the new way should work everywhere first before we deprecate the old way.
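A sketch of the new options in use (URIs and values are illustrative); snapshot-style archives are the classic reason to disable the check:

    deb [check-valid-until=no] http://snapshot.debian.org/archive/debian/20150810T000000Z/ unstable main
    deb [valid-until-max=604800] http://deb.example.org/debian stable main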
2015-08-10  merge indexRecords into metaIndex  (David Kalnischkies)
indexRecords was used to parse the Release file – mostly the hashes – while metaIndex deals with downloading the Release file, storing all indexes coming from this release and … parsing the Release file, but this time mostly for the other fields. That wasn't a problem in metaIndex, as this was done in the type-specific subclass, but indexRecords, while allowing the parsing method to be overridden, expected a specific format by default. APT isn't really supporting different types at the moment, but this is a violation of the abstraction we have everywhere else. And here is the actual reason for this merge: options e.g. coming from the sources.list arrive at metaIndex naturally, which needs to wrap them up and bring them into indexRecords so the acquire system is told about them (as it doesn't get to see the metaIndex), but they don't really belong in indexRecords, as that is just for storing data loaded from the Release file… the result is a complete mess.

I am not saying it is a lot prettier after the merge, but at least adding new options is now slightly easier and there is just one place responsible for parsing the Release file. That can't hurt.
2015-08-10  detect and error out on conflicting Trusted settings  (David Kalnischkies)
A specific trust state can be enforced via a sources.list option, but it affects all entries handled by the same Release file, not just the entry it was set on, so we enforce acknowledgement of this by requiring the same value to be (not) set on all such entries.
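For illustration, a pair of entries which would now be rejected, as both are covered by the same Release file but disagree on the option (URIs are placeholders):

    deb [trusted=yes] http://deb.example.org/debian stable main
    deb http://deb.example.org/debian stable contrib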
2015-08-10  bring back deb822 sources.list entries as .sources  (David Kalnischkies)
Having two different formats in the same file is very dirty and causes external tools to fail hard trying to parse them. It is probably not a good idea for them to parse these files in the first place, but they do, and we shouldn't break them if there is a better way. So we solve this issue for now by giving our deb822 format a new filename extension ".sources", which unsupporting applications are likely to ignore, and can begin gradually moving forward rather than waiting for the unknown applications to catch up. Currently, and for the foreseeable future, apt is going to support both with the same feature set as documented in the manpage, with the long-term plan of adopting the 'new' format as default, but that is a long way to go and adoption might come more from having an easier time setting options than from us pushing it explicitly.
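A minimal stanza in the deb822 style that now lives in a .sources file (values are illustrative):

    Types: deb
    URIs: http://deb.example.org/debian
    Suites: stable
    Components: main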
2015-08-10  support lang= and target= sources.list options  (David Kalnischkies)
We have supported arch= for a while; now we finally add lang= as well and, as a first simple way of controlling which targets to acquire, also target=. This asked for a redesign of the internal API for parsing and storing information about 'deb' and 'deb-src' lines. As this API isn't visible to the outside, no damage is done. Beside being a nice cleanup (= it actually does more in fewer lines) it also provides us with a predictable order of architectures, as provided in the configuration rather than based on string sorting order, so that the native architecture is now parsed/displayed first. Observable e.g. in apt-get output.
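A sketch of a one-line entry using the new options (URI and target names are illustrative; which targets exist depends on configuration):

    deb [arch=amd64,i386 lang=en,de target=Packages] http://deb.example.org/debian stable main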
2015-08-10  fix memory leaks reported by -fsanitize  (David Kalnischkies)
Various small leaks here and there. Nothing particularly big, but still good to fix. Found by the sanitizers while running our testcases. Reported-By: gcc -fsanitize Git-Dch: Ignore
2015-08-10  Fix test case breakage from the new policy implementation  (Julian Andres Klode)
Everything's working now.
2015-06-15  allow ratelimiting progress reporting for testcases  (David Kalnischkies)
Progress is reported only once in a while, which is a bit too unpredictable for testcases, so we enforce a steady progress rate for them in the hope that this makes the tests (mostly test-apt-progress-fd) a bit more stable. Git-Dch: Ignore
2015-06-15  condense parallel requests with the same hashes to one  (David Kalnischkies)
It shouldn't be too common, but sometimes people have multiple mirrors in the sources or otherwise repositories with the same content. Now that we can gracefully handle multiple requests to the same URI, we can also fold multiple requests with the same expected hashes into one. Note that this isn't trying to find opportunities for merging; it just merges if it happens to encounter the opportunity. This is most obvious in the new testcase, actually, as it needs to delay the action to give the acquire system enough time to figure out that the requests can be merged.
2015-06-15  show item ID in Hit, Ign and Err lines as well  (David Kalnischkies)
Again, consistency is the main selling point here, but this way it is now also easier to explain that some files move through different stages and lines are hence printed for them multiple times: that is a bit hard to believe if the number is changing all the time, but easier now that it stays consistent.
2015-06-15  call URIStart in cdrom and file method  (David Kalnischkies)
All other methods call it, so they should follow along, even if the work they do afterwards is hardly breathtaking and usually results in a URIDone pretty soon. The acquire system tells the individual item about this via a virtual method call, so even though none of our existing items contains any critical code in these, maybe one day they might. Consistency at least once…

This also has a good side effect: file: and cdrom: requests now appear in the 'apt-get update' output. Finally - it never made sense to me to hide them. Okay, I guess it made sense before the new hit behavior, but now that you can actually see the difference in an update it makes sense to see if a file: repository changed or not as well.
2015-06-15  deal better with acquiring the same URI multiple times  (David Kalnischkies)
This is an unlikely event for indexes and co, but it can happen quite easily e.g. for changelogs, where you want the changelogs for multiple binary package(version)s which happen to all be built from a single source. The interesting part is that the Acquire system actually detected this already and set the item requesting the URI again to StatDone – except that this is hardly sufficient: an Item must be Complete=true as well to be considered truly done, and that is only the tip of the ::Done handling iceberg. So instead of this StatDone hack we allow QItems to be owned by multiple items and notify all owners about everything now, so that from the point of view of each item it was downloaded just for them.
2015-06-15  provide a public interface for acquiring changelogs  (David Kalnischkies)
Provided is a specialized acquire item which, given a version, can figure out the correct URI to try by itself, and if it can't, provides an error message, alongside static methods to get just the URI it would try to download, for cases where it should just be displayed or similar.

The URI is constructed as follows: Release files can provide a URI template in the "Changelogs" field; otherwise we look up a configuration item based on the "Label" or "Origin" of the Release file to get a (hopefully known) default value for now. This template should contain the string CHANGEPATH, which is replaced with the information about the version we want the changelog for (e.g. main/a/apt/apt_1.1). This middle way was chosen as this path part was consistent over the three known implementations (+1 defunct), while the rest of the URI varies widely between them.

The benefit of this construct is that it is now easy to get changelogs for Debian packages on Ubuntu and vice versa – even at the moment, where the Changelogs field is present nowhere. Strictly better than what apt-get had before, as it would even fail to get changelogs from security… Now it will notice that security identifies as Origin: Debian and pick this setting (assuming again that no Changelogs field exists). If on the other hand security shipped its changelogs in a different location, we could set it via the Label option, overruling Origin. Closes: 687147, 739854, 784027, 787190
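A sketch of the two lookup sources, first as a Release file field and alternatively as a configuration item keyed on Origin (the exact option name and URIs are illustrative assumptions; only the CHANGEPATH placeholder is fixed by this commit):

    Changelogs: http://changelogs.example.org/CHANGEPATH_changelog

    Acquire::Changelogs::URI::Origin::Debian "http://changelogs.example.org/CHANGEPATH_changelog";

For apt 1.1 in main, CHANGEPATH would expand to main/a/apt/apt_1.1, yielding http://changelogs.example.org/main/a/apt/apt_1.1_changelog.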
2015-06-15  implement default apt-get file --release-info mode  (David Kalnischkies)
Selecting targets based on the Release they belong to isn't too unrealistic. In fact, it is assumed to be the most used case, so it is made the default, especially as this allows us to bundle another thing we have to be careful with: filenames, and only showing targets we have acquired. Closes: 752702
2015-06-12  store Release files data in the Cache  (David Kalnischkies)
We used to read the Release file for each Packages file and store the data in the PackageFile struct, even though potentially many Packages (and Translation-*) files could use the same data. The point of the exercise isn't the duplicated data, though: having the Release files as first-class citizens in the Cache allows us to properly track their state, and also lets us use the information for files which aren't in the cache but where we know which Release file they belong to (Sources are an example of this).

This modifies the pkgCache structs, especially the PackageFile struct, which depending on how libapt users access the data in these structs can mean huge breakage or no visible change. As a single data point: aptitude seems to be fine with this. Even if there is breakage, it is trivial to fix in a backportable way, while avoiding breakage for everyone would be a huge pain for us.

Note that not all PackageFile structs have a corresponding ReleaseFile. In particular the dpkg/status file as well as *.deb files have none. As these only need an Archive property, the Component property takes over this duty and the ReleaseFile remains zero. This is also the reason why it isn't needed, nor particularly recommended, to change from PackageFile to ReleaseFile blindly: sticking with the former is usually the better option.
2015-06-11  implement 'apt-get files' to access index targets  (David Kalnischkies)
Downloading additional files is only half the job. We still need a way for external tools to learn where the files they requested for download ended up, given that we don't want them to choose their own location. 'apt-get files' is our answer to this, showing by default, in a deb822 format, information about each IndexTarget, with the potential to filter the records based on lines and an option to change the output format. The command also serves as an example of how to get to this information via libapt.
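A hedged sketch of what one record of the deb822 output might look like (field names, URI, and path are illustrative assumptions, not the exact output):

    MetaKey: main/binary-amd64/Packages
    Description: http://deb.example.org/debian stable/main amd64 Packages
    URI: http://deb.example.org/debian/dists/stable/main/binary-amd64/Packages
    Filename: /var/lib/apt/lists/deb.example.org_debian_dists_stable_main_binary-amd64_Packages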
2015-06-11  show URI.Path in all acquire item descriptions  (David Kalnischkies)
It is a rather strange sight that index items use SiteOnly, which strips the Path, while e.g. deb files are downloaded with NoUserPassword, which does not. Important to note here is that for the file transport the Path is pretty important, as there is no Host which would be displayed by Site, which always resulted in "interesting" unspecific errors for "file:". Adding a 'middle ground' between the two which does show the Path, but potentially modifies it (it strips a trailing / if present), solves this "file:" issue, syncs the output, and in the end helps to identify which file is meant exactly in progress output and co, as a single site can have multiple repositories in different paths.
2015-06-10  store all targets data in IndexTarget struct  (David Kalnischkies)
We still need an API for the targets, so slowly prepare the IndexTargets to let them take this job. Git-Dch: Ignore
2015-06-10  abstract the code to iterate over all targets a bit  (David Kalnischkies)
We have two places in the code which need to iterate over targets and do certain things with them. The first one actually creates these targets for download and the second one prepares certain targets for reading. Git-Dch: Ignore
2015-06-09  configurable acquire targets to download additional files  (David Kalnischkies)
First pass at making the acquire system capable of downloading files based on configuration rather than hardcoded entries. It is now possible to instruct 'deb' and 'deb-src' sources.list lines to download more than just Packages/Translation-* and Sources files. Details on how to do that can be found in the included documentation file.
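A hedged sketch of such a configured extra target, loosely following the Acquire::IndexTargets layout apt settled on later (all names here are illustrative assumptions; the included documentation file is authoritative for this first pass):

    Acquire::IndexTargets::deb::Contents {
        MetaKey "$(COMPONENT)/Contents-$(ARCHITECTURE)";
        ShortDescription "Contents";
        Description "$(RELEASE)/$(COMPONENT) $(ARCHITECTURE) Contents";
    };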
2015-06-09  do not request files if we expect an IMS hit  (David Kalnischkies)
If we have a file on disk and the hashes in the new Release file are the same as in the old one we have on disk, we know that if we ask the server for the file we will at best get an IMS hit – at worst the server doesn't support this and sends us the (unchanged) file, and we have to run all our checks on it again for nothing. So we can save ourselves (and the servers) some unneeded requests if we figure this out on our own.
2015-06-09  support hashes for compressed pdiff files  (David Kalnischkies)
At the moment we only have hashes for the uncompressed pdiff files, but via the new '$HASH-Download' field in the .diff/Index file hashes can be provided for the .gz compressed pdiff files, which apt will now pick up and use to verify the download. Now we "just" need buy-in from the creators of repositories…
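A sketch of the new field in a .diff/Index file next to an existing field for the uncompressed patches (hash values, sizes and patch names are placeholders):

    SHA256-Patches:
     0123456789abcdef… 4242 2015-06-08-2013.41
    SHA256-Download:
     fedcba9876543210… 1337 2015-06-08-2013.41.gz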
2015-06-09  fix download-file using testcases to run as root  (David Kalnischkies)
Git-Dch: Ignore
2015-06-09  add more parsing error checking for rred  (David Kalnischkies)
The rred parser is very accepting regarding 'invalid' files. Given that we can't trust the input it might be a bit too relaxed. In any case, checking for more errors can't hurt given that we support only a very specific subset of ed commands.
2015-06-09  check patch hashes in rred worker instead of in the handler  (David Kalnischkies)
rred is responsible for unpacking and reading the patch files in one go, but we currently only have hashes for the uncompressed patch files, so the handler read the entire patch file before dispatching it to the worker, which would read it again – both with an implicit uncompress. Worse, while the workers operate in parallel the handler is the central orchestration unit, so having it busy with work means the workers (potentially) do nothing.

This means rred is working with 'untrusted' data, which is bad. Yet, having the unpack in the handler meant that the untrusted uncompress was done as root, which isn't better either. Now we have it at least contained in a binary which we can harden a bit better. In the long run we want hashes for the compressed patch files too, to be safe.
2015-06-09  rework hashsum verification in the acquire system  (David Kalnischkies)
Having every item carry its own code to verify the file(s) it handles is an error-prone process and easy to break, especially if items move through various stages (download, uncompress, patching, …). With a giant rework we centralize (most of) the verification to get a better enforcement rate and (hopefully) fewer chances for bugs, but it breaks the ABI big time in exchange – and as we break it anyway, it is broken even harder. It shouldn't affect most frontends, as they don't deal with the acquire system at all or implement their own items, but some do and will need to be patched (might be an opportunity to use apt on-board material).

The theory is simple: items implement methods to decide if hashes need to be checked (in this stage) and to return the expected hashes for this item (in this stage). The verification itself is done in worker message passing, which has the benefit that a hashsum error is now a proper error for the acquire system rather than a Done() which is later revised to a Failed(). See the toy model below.
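A minimal self-contained C++ toy model of the centralized idea (these are not apt's real classes, just an illustration of the shape of the contract):

    #include <iostream>
    #include <map>
    #include <string>

    // Toy item: declares whether hashes must be checked in its current
    // stage and what the expected hashes are; verification then happens
    // in exactly one central place instead of in every item.
    struct Item {
       std::string Name;
       std::map<std::string, std::string> Expected; // e.g. "SHA256" -> digest
       bool HashesRequired() const { return !Expected.empty(); }
    };

    // Central verifier: a mismatch is a proper failure right away,
    // not a Done() later revised to a Failed().
    bool VerifyItem(Item const &I, std::map<std::string, std::string> const &Calculated)
    {
       if (!I.HashesRequired())
          return true; // nothing to enforce in this stage
       for (auto const &H : I.Expected)
       {
          auto const C = Calculated.find(H.first);
          if (C == Calculated.end() || C->second != H.second)
             return false;
       }
       return true;
    }

    int main()
    {
       Item pkg{"Packages.xz", {{"SHA256", "abc123"}}};
       std::cout << (VerifyItem(pkg, {{"SHA256", "abc123"}}) ? "ok" : "hashsum mismatch") << '\n';
    }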
2015-06-07  don't try other compressions on hashsum mismatch  (David Kalnischkies)
If we e.g. fail hash verification for Packages.xz, it's highly unlikely that it will be any better with Packages.gz, so we just waste download bandwidth and time. It also causes us to always fall back to the uncompressed Packages file, for which the error is finally reported, which in turn confuses users, as that file usually doesn't exist on the mirrors, so a bug in apt is suspected for even trying it…
2015-05-22  Merge branch 'debian/sid' into debian/experimental  (Michael Vogt)
Conflicts:
    apt-pkg/pkgcache.h
    debian/changelog
    methods/https.cc
    methods/server.cc
    test/integration/test-apt-download-progress
2015-05-22  Merge remote-tracking branch 'upstream/debian/jessie' into debian/sid  (Michael Vogt)
Conflicts:
    apt-pkg/deb/dpkgpm.cc
2015-05-22  Add regression test for LP: #1445239  (Michael Vogt)
Add a regression test that reproduces the hang of apt when a partial file is present. Git-Dch: ignore
2015-05-18  treat older Release files than we already have as an IMSHit  (David Kalnischkies)
Valid-Until protects us from long-living downgrade attacks, but not all repositories have it, and an attacker could still use older but still valid files to downgrade us. While this makes it sound like a security improvement, it is a bit theoretical at best, as an attacker with the capabilities to pull this off could just as well always keep us days (but within the valid period) behind and always knows which state we have, as we tell him with the If-Modified-Since header. This is also why this is 'silently' ignored and treated as an IMSHit rather than screamed at the user, as this can at best be an annoyance for attackers. An error here, on the other hand, would 'regularly' be encountered by users through out-of-sync mirrors serving a single run (e.g. load balancer) or in two consecutive runs, so it would just teach people to ignore it.

That said, most of the code churn is caused by enforcing this additional requirement. Crisscross from InRelease to Release.gpg is e.g. very unlikely in practice, but if we ignored it an attacker could sidestep the check this way.
2015-05-13  detect Releasefile IMS hits even if the server doesn't  (David Kalnischkies)
Not all servers we are talking to support If-Modified-Since, and some do not even send Last-Modified, so in an effort to detect such hits we run a hashsum check comparing the 'old' and the 'new' file. We get the hashes for the 'new' file for "free" from the methods anyway and hence just need to calculate the old ones. This allows us to detect hits even with unsupporting servers, which in turn means we benefit from all the new hit behavior here as well.
2015-05-12  detect 416 complete file in partial by expected hash  (David Kalnischkies)
If we have the expected hashes, we can use them to check whether the file in partial/ for which we got a 416 is the expected complete file. We detected this via matching sizes before, but not every server sends a good Content-Range header with a 416 response.
2015-05-11  rewrite all TFRewrite instances to use the new pkgTagSection::Write  (David Kalnischkies)
While it is mostly busywork to rewrite all instances, it actually fixes bugs, as the data storage used by the new method is std::string rather than a char*, the latter mostly created by c_str() from a std::string which the caller has to keep in scope – something apt-ftparchive actually didn't ensure and instead relied on copy-on-write behavior, which c++11 forbids and hence the new default gcc abi doesn't use.
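A minimal sketch of the rewritten pattern, assuming the interface introduced here looks roughly like the pkgTagSection::Tag/Write API apt exposes later (treat the exact names and signatures as assumptions):

    #include <apt-pkg/tagfile.h>
    #include <apt-pkg/fileutl.h>
    #include <vector>

    // std::string-backed Tag entries replace the old char* TFRewrite array,
    // so there are no c_str() lifetime games anymore
    bool WriteStanza(pkgTagSection const &Section, FileFd &Out)
    {
       std::vector<pkgTagSection::Tag> Changes;
       Changes.push_back(pkgTagSection::Tag::Rewrite("Maintainer", "APT Team <apt@example.org>"));
       Changes.push_back(pkgTagSection::Tag::Remove("Obsolete-Field"));
       return Section.Write(Out, TFRewritePackageOrder, Changes);
    }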
2015-05-11  sync TFRewrite*Order arrays with dpkg and dak  (David Kalnischkies)
dpkg and dak know various field names and order them in their output, while we have yet another order and have to play catch-up with them, as we are sitting between chairs here and neither order is ideal for us either. A little testcase is from now on supposed to help ensure that we do not deviate too far from the fields dpkg knows and orders.
2015-05-11  remove available file to have same dpkg -l behavior  (David Kalnischkies)
dpkg -l < 1.16.2 loads the available file and hence sees a package which later versions do not see, leading to failures on travis-ci. The different versions also have slightly different messages. Git-Dch: Ignore
2015-05-11  remove unused and strange default-value for pins  (David Kalnischkies)
If the pin for a generic pin is 0, it gets a value by strange-looking rules; if the pin is specific, the rules are at least not strange, but the value 989 is a magic number without any direct meaning… Neither ever happens in practice though, as the parsing skips such entries with a warning, so there is always a priority != 0 and the code is therefore never used.
2015-05-11  a pin of 1000 always means downgrade allowed  (David Kalnischkies)
The documentation says this, but the code only agreed while evaluating specific packages, not generics: those needed a pin above 1000 to have the same effect. The code causing this makes references to a 'second pseudo status file', but nowhere is it explained what this might stand for and/or what it was, so we do the only reasonable thing: remove all references and do as documented.
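For illustration, a preferences stanza which, per the documentation (and now the code, for generics too), allows downgrades:

    Package: *
    Pin: release a=stable
    Pin-Priority: 1000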
2015-05-11  improve partial/ cleanup in abort and failure cases  (David Kalnischkies)
Especially pdiff-enhanced downloads have a tendency to fail for various reasons from which we can recover, and even a successful download used to leave the old unpatched index in partial/. By adding a new method responsible for making the transaction of an individual file happen, we can add specialisations, especially for abort cases, to deal with the cleanup. This also helps in keeping the compressed indexes around if another index failed, instead of keeping the decompressed files, which we wouldn't pick up in the next call.
2015-04-22  remove "first package seen is native package" assumption  (David Kalnischkies)
The fix for #777760 causes packages of foreign (and the native) architectures to be created correctly, but invalidates (like the previously existing, but policy-forbidden, architecture-less packages we had to support for some upgrade scenarios) the assumption that the first (and only) package in the cache for a single-architecture system must be the package for the native architecture (as, where should the other architectures come from, right? Wrong.). Depending on the order of parsing sources, more or fewer packages can be affected by this. The effects are strange (for apt it mostly affects simulation/debug output, but also apt-mark on these specific packages), which complicates debugging, but they are relatively harmless if understood, as most actions do not need direct named access to packages.

The problem is fixed by removing the single-arch special-casing in the paths which had it (Cache.FindPkg), so they use the same code as multi-arch systems, which use it as a wrapper for Grp.FindPkg. Note that single-arch system code was using Grp.FindPkg before as well if a Grp structure was handily available, so we don't introduce new untested code here: we remove more brittle special cases which are less tested instead (this was planned to be done for Stretch anyhow).

Note further that the method with the assumption itself isn't fixed. As it is a private method I opted for declaring it deprecated instead and removing all its call positions. As it is private, no one can call this method legally (thanks to how c++ works by default it is still an exported symbol though), and fixing it basically means reimplementing code we already have in Grp.FindPkg. Removing rather than fixing hence seems like a good solution.

Closes: 782777
Thanks: Axel Beckert for testing
2015-04-19  Merge branch 'debian/jessie' into debian/experimental  (David Kalnischkies)
Conflicts:
    apt-pkg/acquire-item.cc
    cmdline/apt-key.in
    methods/https.cc
    test/integration/test-apt-key
    test/integration/test-multiarch-foreign
2015-04-19  a hit on Release files means the indexes will be hits too  (David Kalnischkies)
If we get an IMSHit for the Transaction-Manager (= the InRelease file, or as its still-supported fallback the Release + Release.gpg combo), we can assume that every file we would queue based on this manager, but already have locally, is current and hence would get an IMSHit too. We therefore save ourselves and the server the trouble and skip the queuing in this case. Beside speeding up repetitive executions of 'apt-get update' this way, we also avoid hitting hashsum errors if the indexes are in fact already updated but the Release file isn't yet, as is the case on well-behaving mirrors, since the Release file is updated last.

The implementation is a bit harder than the theory makes it sound, as we still have to keep reverifying the Release files (e.g. to detect now-expired ones, to avoid an attacker being able to silently keep us stale) and have to handle cases in which the Release file hits but some indexes aren't present (e.g. the user added a new foreign architecture).
2015-04-19  ensure lists/ files have correct permissions after apt-cdrom add  (David Kalnischkies)
It's a bit unpredictable which permissions and owners we will encounter on a CD-ROM (or a USB stick, as apt-cdrom is responsible for those too), so we have to ensure in this codepath as well that everything is set up nicely without waiting for an 'apt-get update' to fix up the (potential) mess.
2015-04-19  calculate hashes while downloading in https  (David Kalnischkies)
We do this in HTTP already to give the CPU some exercise while the disk is heavily spinning (or flashing?) to store the data, avoiding the need to reread the entire file again later on to calculate the hashes – which happened outside of the eyes of progress reporting, so you might have ended up with a bunch of https workers 'stuck' at 100% while they were busy calculating hashes. This is a bummer for everyone using apt as a connection speedtest, as the https method works slower now (not really, it just isn't reporting done too early anymore).