path: root/test
2021-01-13  Adjust apt-mark test for dpkg 1.20.7  (Julian Andres Klode)
2021-01-08  Implement update --error-on=any  (Julian Andres Klode)
People have been asking for a feature to error out on transient network errors for a while; this gives them one, while keeping the door open for other modes, such as --error-on=no-success, which we need to determine when to retry the daily update job.
Closes: #594813 (and a whole bunch of duplicates...)
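A minimal invocation sketch of the new mode:

  # fail with a non-zero exit status on any download error,
  # including transient network errors:
  apt-get update --error-on=any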
2021-01-08  Add support for Phased-Update-Percentage  (Julian Andres Klode)
This adds support for Phased-Update-Percentage by pinning upgrades that are not to be installed down to 1. The output of policy has been changed to show the level of phasing, and documentation has been improved to explain how phased updates work.

The patch detects if it is running in a chroot, and if so, always includes phased updates, restoring classic apt behavior to avoid behavioral changes on buildd chroots.

Various options are added to control all of this:
* APT::Get::{Always,Never}-Include-Phased-Updates and their legacy update-manager equivalents to always or never include phased updates
* APT::Machine-ID can be set to a UUID string to have all machines in a fleet phase the same
* Dir::Etc::Machine-ID is weird in that its default is sort of like ../machine-id, but not really, as ../machine-id would look up $PWD/../machine-id and not relative to Dir::Etc; but it allows you to override the path to machine-id (as opposed to the value)
* Dir::Bin::ischroot is the path to the ischroot(1) binary which is used to detect whether we are running in a chroot.
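A rough usage sketch of these knobs (the machine id and the conf.d file name are placeholders):

  # always (or never) include phased updates on this machine:
  apt-get upgrade -o APT::Get::Always-Include-Phased-Updates=true
  apt-get upgrade -o APT::Get::Never-Include-Phased-Updates=true

  # let a whole fleet phase identically by sharing one machine id:
  echo 'APT::Machine-ID "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";' > /etc/apt/apt.conf.d/90phasing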
2021-01-05  Merge branch 'bash-compat' into 'master'  (Julian Andres Klode)
Be compatible with Bash

See merge request apt-team/apt!142
2021-01-04  Determine autoremovable kernels at run-time  (Julian Andres Klode)
Our kernel autoremoval helper script protects the currently booted kernel, but it only runs whenever we install or remove a kernel, so it protects the kernel that was booted at that point in time, which is not necessarily the same kernel as the one that is running right now.

Reimplement the logic in C++ so that we can calculate it at run-time: provide a function that produces a regular expression matching all kernels that need protecting, and change the default root set function in the DepCache to make use of that expression. Note that the code groups the kernels by version as before, and then marks all kernel packages with the same version.

This optimized version inserts a virtual package $kernel into the cache when building it, to avoid having to iterate over all packages in the cache to find the installed ones, significantly improving performance at a minor cost when building the cache.

LP: #1615381
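A rough way to observe the effect from the command line (a sketch; package names depend on the distribution) is a simulated autoremove, which now reflects the kernel that is booted right now rather than the one booted at the last kernel install:

  # simulate only; the booted and newest kernels must not show up here
  apt-get -s autoremove | grep linux-image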
2020-12-28  Be compatible with Bash  (Demi M. Obenour)
On many distributions, /bin/sh is Bash. Bash’s `echo` builtin doesn’t interpret escape sequences, so most tests fail. Fix this by removing the escape sequence.
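A small illustration of the difference (dash's echo interprets the escape, bash's does not; printf behaves the same in both):

  dash -c 'echo "a\tb"'     # prints: a<TAB>b
  bash -c 'echo "a\tb"'     # prints: a\tb
  printf 'a\tb\n'           # portable: always a<TAB>b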
2020-12-18  Don't re-encode encoded URIs in pkgAcqFile  (David Kalnischkies)
This commit potentially breaks code feeding apt an encoded URI using a method which does not get URIs sent encoded. The webserverconfig requests in our tests are an example of this – but they only worked before if the server was expecting a double encoding, as that was what was happening to an encoded URI: so they were unlikely to work as expected in practice.

Now with the new methods we can drop this double encoding and rely on the URI being passed properly (and without modification) between the layers, so that passing in encoded URIs should now work correctly.
2020-12-18  Implement encoded URI handling in all methods  (David Kalnischkies)
Every method opts in to getting the encoded URI passed along while keeping compat in case we are operated by an older acquire system. Effectively this is just a change for the http-based methods as the others just decode the URI as they work with files directly.
2020-12-18  Keep URIs encoded in the acquire system  (David Kalnischkies)
We do not deal a lot with URIs which need encoding, but when we do it is a pain that we store them decoded in the acquire system, as it means we have to decode and reencode URIs eventually, which potentially gives us slightly different URIs. We see that in our own testing framework while setting up redirects, as the config options are effectively double-encoded and decoded to pass them around successfully, because otherwise %2f and / in an URI are treated the same.

This commit adds the infrastructure for methods to opt into getting URIs sent in encoded form (and returning them to us in encoded form, too) so that we eventually do not have to touch the URIs at all, which is how it should be. This means though that we have to deal with methods which do not support this yet (aka: all at the moment), for which we decode and encode while communicating with them.
2020-12-18  Proper URI encoding for config requests to our test webserver  (David Kalnischkies)
Our http method encodes the URI again, which results in the double encoding we have to unwrap in the webserver (we did that already, but we now skip the filename handling which does the first decode).
2020-12-15  test: fixup for hash table size increase (changed output order)  (Julian Andres Klode)
2020-12-09  CVE-2020-27350: tarfile: integer overflow: Limit tar items to 128 GiB  (Julian Andres Klode)
The integer overflow was detected by DonKult who added a check like this:

  (std::numeric_limits<decltype(Itm.Size)>::max() - (2 * sizeof(Block)))

which deals with the code as is, but is still a fairly big limit, and could become fragile if we change the code. Let's limit our file sizes to 128 GiB, which should be sufficient for everyone.

Original comment by DonKult:
The code assumes that it can add sizeof(Block)-1 to the size of the item later on, but if we are close to a 64bit overflow this is not possible. Fixing this seems too complex compared to just ensuring there is enough room left, given that we will have a lot more problems the moment we will be acting on files that large, as if the item is that large, the (valid) tar including it probably doesn't fit in 64bit either.
2020-12-09  CVE-2020-27350: debfile: integer overflow: Limit control size to 64 MiB  (Julian Andres Klode)
Like the code in arfile.cc, MemControlExtract also has buffer overflows, in code allocating memory for parsing control files. Specify an upper limit of 64 MiB for control files to both protect against the Size overflowing (we allocate Size + 2 bytes), and protect a bit against control files consisting only of zeroes.
2020-12-09  tarfile: OOM hardening: Limit size of long names/links to 1 MiB  (Julian Andres Klode)
Tarballs have long names and long link targets structured by a special tar header with a GNU extension followed by the actual content (padded to 512 bytes). Essentially, think of a name as a special kind of file.

The file size field in a header is 12 bytes, aka 10**12 or 1 TB. While this works OK-ish for file content that we stream to extractors, we need to copy file names into memory, and this opens us up to an OOM DoS attack. Limit the file name size to 1 MiB, as libarchive does, to make things safer.
2020-12-09  CVE-2020-27350: arfile: Integer overflow in parsing  (Julian Andres Klode)
GHSL-2020-169: This first hunk adds a check that we have more files left to read in the file than the size of the member, ensuring that (a) the number is not negative, which caused the crash here, and (b) we similarly avoid other issues with trying to read too much data.

GHSL-2020-168: Long file names are encoded by a special marker in the filename and then the real filename is part of what is normally the data. We did not check that the length of the file name is within the length of the member, which means that we got an overflow later when subtracting the length from the member size to get the remaining member size.

The file createdeb-lp1899193.cc was provided by GitHub Security Lab and reformatted using apt coding style for inclusion in the test case; both of these issues have an automated test case in test/integration/test-ubuntu-bug-1899193-security-issues.

LP: #1899193
2020-12-07  patterns: Terminate short pattern by ~ and !  (Julian Andres Klode)
This allows patterns like ~nalpha~nbeta and ~nalpha!~nbeta to work like they do in aptitude. Also add a comment to remind readers that everything in START should be in short too.

Cc: stable >= 2.0
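For example (a sketch; package names are only illustrative and require an apt with pattern support):

  apt list '~napt~ndev'      # names matching both 'apt' and 'dev', e.g. libapt-pkg-dev
  apt list '~napt!~ndev'     # names matching 'apt' but not 'dev'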
2020-12-02  test-method-rred: Use apthelper instead of apt-helper  (Julian Andres Klode)
Fixes lookup in as-installed testing.

Gbp-Dch: ignore
2020-11-25  Merge branch 'feature/rred' into 'master'  (Julian Andres Klode)
Enhance rred for possible external usage

See merge request apt-team/apt!136
2020-11-07  Support compressed output from rred similar to apt-helper cat-file  (David Kalnischkies)
2020-11-07  Support reading compressed patches in rred direct call modes  (David Kalnischkies)
The acquire system mode has done this for a long time already, and as it is easy to implement and handy for manual testing as well, we can support it in the other modes, too.
2020-11-07  Prepare rred binary for external usage  (David Kalnischkies)
Merging patches is a bit of non-trivial code we have for client-side work, but as we also support server-side merging we can export this functionality so that server software can reuse it.

Note that this just cleans up and makes rred behave a bit more like all our other binaries by supporting setting configuration at runtime and supporting --help and --version. If you can make do without this, the now advertised functionality is provided already in earlier versions.
2020-11-06  Do not immediately configure m-a: same packages in lockstep  (Julian Andres Klode)
In LP#835625, it was reported that apt did not unpack multi-arch packages in the correct order, and dpkg did not like that. The fix also made apt configure packages together, which is not strictly necessary.

This turned out to cause issues now, because dependencies on libc6:i386 caused immediate configuration of it to not work. Work around the issue by not configuring multi-arch: same packages in lockstep if they have the immediate flag set. This will be the pseudo-essential set, and given how essential works, we mostly need the native arch to work correctly anyway.

LP: #1871268
Regression-Of: 30426f4822516bdd26528aa2e6d8d69c1291c8d3
2020-10-26  pkgnames: Do not exclude virtual packages with --all-names  (Julian Andres Klode)
We accidentally excluded virtual packages by excluding every group that had a package, but where the package had no versions. Rewrite the code so the lookup consistently uses VersionList() instead of FirstVersion and FindPkg("any") - those are all the same, and this is easier to read.
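A quick way to see the difference on a system (output depends on the configured sources):

  apt-cache pkgnames | wc -l              # names of real packages only
  apt-cache pkgnames --all-names | wc -l  # also includes purely virtual names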
2020-10-26  pkgnames: Correctly set the default for AllNames to false  (Julian Andres Klode)
We passed "false" instead of false, and that apparently got cast to bool, because it's a non-null pointer.

LP: #1876495
2020-08-10  Default Acquire::AllowReleaseInfoChange::Suite to "true"  (Julian Andres Klode)
Closes: #931566
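For illustration, the stricter behaviour can still be requested per invocation:

  # fail on a changed Suite value in the Release file again:
  apt-get update -o Acquire::AllowReleaseInfoChange::Suite=false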
2020-08-04  Replace whitelist/blacklist with allowlist/denylist  (Julian Andres Klode)
2020-08-04  aptwebserver: Rename slaves to workers  (Julian Andres Klode)
Apologies.
2020-07-07  Detect pkg-config-dpkghook failure in tests to avoid fallback  (David Kalnischkies)
dpkg (>= 1.20.3) has better support for its own DPKG_ROOT, resulting in the architectures for the root being reported rather than those of the host system. Sadly the hook script from pkg-config is not prepared for this, resulting in our `dpkg --add-architecture` calls failing in the hook after dpkg has successfully added the architecture internally. The failure triggered fallback handling in the tests to work with an older version of dpkg with a different multi-arch implementation.

So instead of doing the fallback, we ignore the failure if it seems like pkg-config-dpkghook is involved, only producing a bunch of warnings to hint at this problem, but otherwise making the tests work again, as it is a post-invoke script.

References: #824774
2020-07-07  Fix test due to display change in ls (coreutils 8.32)  (David Kalnischkies)
The test runs ls on the opened fds and greps the result for 'root root', which is how ls (<= 8.30) used to report user and group for these. Now that Debian contains 8.32 it reports user and group of the process owning them (supposedly). Grepping for both unbreaks the test.

  lr-x------ 1 root root 64 Jul 7 19:07 0 -> 'pipe:[10458045]'
  lrwx------ 1 root root 64 Jul 7 19:07 1 -> /dev/pts/12
  lrwx------ 1 root root 64 Jul 7 19:07 2 -> /dev/pts/12
  lr-x------ 1 root root 64 Jul 7 19:07 3 -> /proc/1266484/fd

vs (assuming user:group is david:david)

  lr-x------ 1 david david 64 Jul 7 19:07 0 -> 'pipe:[10458045]'
  lrwx------ 1 david david 64 Jul 7 19:07 1 -> /dev/pts/12
  lrwx------ 1 david david 64 Jul 7 19:07 2 -> /dev/pts/12
  lr-x------ 1 david david 64 Jul 7 19:07 3 -> /proc/1266484/fd
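A sketch of the adjusted check, accepting either the old fixed 'root root' owner or the actual owner of the current process (illustrative only; the real test helper differs):

  ls -l /proc/self/fd | grep -E "root root|$(id -un) $(id -gn)"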
2020-07-02  Add dependency points in the resolver also to providers  (David Kalnischkies)
We were traditionally adding points for some dependency types to the real package, but we should also do it for providers of that package to help the resolver, especially if the real package is for some reason not tagged for removal yet/anymore.

While at it we ensure that the points are only attributed once to each package, as especially with versioned provides a package can nowadays provide another package many times and would hence acquire a lot of points.
2020-07-02  Filter out impossible solutions for protected propagation  (David Kalnischkies)
If the package providing the given solution is already tagged for removal (or at least for "not installing") we can ignore this solution as a possibility, as it is not one, which means we can avoid exploring the option and potentially forward the protected flag further if that helps in reducing the possibilities to a single one.
2020-07-02  Delay removals due to Conflicts until Depends are resolved  (David Kalnischkies)
Marking a package for removal is fine if we know that we have to remove that package, but if we are in an alternative branch we might not go this route in the end and would hence have a package pointlessly marked for removal which isn't questioned later on.

We check if we are allowed to remove that package to avoid working on the positive dependencies if not, but we mark them for removal only after all the other dependencies are successfully resolved.

In an ideal world we would let the problemResolver do its job on them, but the resolver might decide against doing the removal, exploring another option like the next alternative instead, which might be a good idea but is not the behaviour we had before, so this is the best we can do for now without changing the resolver drastically.
2020-06-29  Add basic support for the Protected field  (Julian Andres Klode)
This will be mapped to Important for the time being.
2020-06-14  Deduplicate EDSP Provides line of M-A:foreign packages  (David Kalnischkies)
M-A:foreign causes Provides to apply to all architectures and as we wanted to avoid resolver changes for M-A those are done by explicitly creating these provides instead of forcing the resolvers to learn about this. The EDSP is a different beast though & we don't need this trick here especially as it leads to needless (but harmless) duplication. No sort+unique is done to avoid changing order (not that it should matter, but just to be sure), but the sets should be small enough to not make a huge difference either way.
2020-06-14  Tell EDSP solvers about all installed pkgs ignoring arch  (David Kalnischkies)
We usually tell EDSP solvers only about architectures we are configured to treat as native/foreign, but the system could have packages from other architectures installed (even if that is very unlikely) which could influence the solution (e.g. requiring a removal), so we make sure to tell the solvers about them.
2020-06-14  Do not send our filename-provides trick to EDSP solvers  (David Kalnischkies)
If a package is installed via an explicitly given deb file we store the filename as a provides, so that the frontend can request the filename and get the usual "Selected foo instead of foo.deb" message. We do not need to trouble the EDSP solvers with that though, as these provides are not valid in various ways and we have already solved the link between commandline and package (and version) for them.

Closes: #962741
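The user-visible side of that trick, roughly (hypothetical package and file names):

  apt-get install ./hello_2.10-2_amd64.deb
  # apt maps the file to the package contained in it, e.g.:
  #   Note, selecting 'hello' instead of './hello_2.10-2_amd64.deb'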
2020-06-03  Deal with duplicates in the solution space of a dep  (David Kalnischkies)
While we process the possible solutions we might modify other solutions, like discarding their candidates and such, so that when we reach them they might no longer be proper candidates.

We also try to drop duplicates early on to avoid the simple cases of these, which test-explore-or-groups-in-markinstall triggers via its explicit duplication but which could also come via multiple provides. It only worked previously as we were ignoring current versions, which usually is okay except if they are marked for removal and we want to reinstate them so the ProblemResolver can decide which one later on.
2020-06-02  Consider if a fix is successful before claiming it is  (David Kalnischkies)
For protected packages the "Fixing" done via KillList in the ProblemResolver will usually not happen as the state change is not allowed, so the debug message is just confusing and the resolver is needlessly looping here (which might push it over the edge). So if we didn't do our thing successfully here, we short-circuit a bit to help the next iteration come to a solution.
2020-05-29  Consider protected packages for removal if they are marked as such  (David Kalnischkies)
The pkgProblemResolver incorrectly skips protected packages while considering packages for removal, which was always wrong but is now a lot more visible as (potentially) far more packages are considered protected in their state. Note that the testcase shows that we need more changes to make this proper.
2020-05-23  Keep status number if candidate is discarded for kept back display  (David Kalnischkies)
It looks like a hack and therefore I wanted this to be a very isolated commit so we can find it & revert it easily if need be, but for now it seems to work.

The idea is that Status tells us how the candidate relates to the current installed version, which is used to figure out if a package is "kept back" by the algorithm or not, but by discarding the candidate version we lose this information. Ideally we would keep better tabs on what we do to a package and why, but for now that seems okayish. It will cause the wrong version to be displayed though, as if the package is installed the installed version becomes the candidate and hence (installed => installed) is displayed.
2020-05-23  Reset candidate version explicitly for internal state-keeping  (David Kalnischkies)
For a (partially) installed package like the one MarkInstall operates on at the moment, which we want to discard the candidate from, we have to first remove the package from the internal state keeping to have proper broken counts and such, and only then reset the candidate version, which is a trivial operation in comparison.

Take a look at the testcase: Now, what is the problem? Correct, git:i386. Didn't see that coming, right? It is M-A:foreign, so apt tries to switch the architecture of git here (which is pointless, it knows that this won't work, but let's fix that in another commit), will eventually realize that it can't install it and wants to discard the candidate of git:i386, first removing the broken indication like it should, removing the install flag and then reapplying the broken indication: Except it doesn't, as it wants to do that over the candidate version which the package no longer has, so seemingly nothing is broken.

It is a bit of a hairball to figure out which commit exactly is wrong here as they are all influencing each other a bit, but >= 2.1 is an acceptable ballpark. Bisect says 57df273 but that is mostly a lie.

Closes: #961266
2020-05-19  Check satisfiability for versioned provides, not providing version  (David Kalnischkies)
References: dcdfb4723a9969b443d1c823d735e192c731df69
2020-05-18  Recognize propagated protected in pkgProblemResolver  (David Kalnischkies)
Turns out that pkgDepCache and pkgProblemResolver maintain two (semi) independent sets of protected flags – except that a package, if marked protected in the pkgProblemResolver, is automatically also marked in the pkgDepCache as protected. This way the pkgProblemResolver will have as protected only the direct user requests, while pkgDepCache will (hopefully) propagate the flag to unavoidable dependencies of these requests nowadays.

The pkgProblemResolver was only checking its own protected flag though and based on that calls our Mark* methods, usually without checking the return value, leading to it believing it could e.g. remove packages it actually can't remove, as pkgDepCache will not allow it because the package is marked as protected there. Teaching it to check for the flag in the pkgDepCache instead avoids it believing in the wrong things and eventually giving up.

The scoring keeps the behaviour of adding the large score boost only for the direct user requests though, as there is no telling which other side effects this might have if too many packages get too many points from the get-go.

Second part of fixing #960705, now with pkgProblemResolver output which looks more like the whole class of problem is resolved rather than the teeny tiny edgecase it was before.
2020-05-18  Propagate protected to already satisfied dependencies  (David Kalnischkies)
The previous commit deals with negative dependencies; now we add the positive side of things as well, which makes this a recursive endeavour. As we can push the protected flag forward only if a single solution for a dependency exists, it is easy for trees to not get it, so if resolving becomes difficult it won't help at all.
2020-05-18  Propagate protected to already satisfied conflicts  (David Kalnischkies)
If we propagate protected, e.g. due to a user request, we should also act upon (at the moment) satisfied negative dependencies so that the resolver knows that installing this package later is not an option. That the problem resolver is trying bad solutions is a bug in itself which existed before and after and should be worked on.

Closes: #960705
2020-05-18  Keep going if a dep is bad for user requests to improve errors  (David Kalnischkies)
We exit early from installing dependencies of a package only if it is not a user request, to avoid polluting the state with installs which might not be needed (or might even be detrimental) for alternative choices. We do continue with installing dependencies though if it is a user request, as it will improve error reporting for apt and can even help aptitude not hang itself so much, as we trim the problem space down for its resolver by dealing with all the easy things.

Similar things can be said about the testcase I had short-circuited previously… keep going, test, do what you should do to report errors!
2020-05-18  Allow prefix to be a complete filename for GetTempFile  (David Kalnischkies)
Our testcases had their own implementation of GetTempFile with the feature of a temporary file with a chosen suffix. Merging this into GetTempFile lets us drop this duplicate and hence test more of our code rather than testing our helpers for the test implementation.

And then hashsums_test had another implementation… and extracttar wasn't even trying to use a real tempfile… one GetTempFile to rule them all! That also ensures that these tempfiles are created in a temporary directory rather than the current directory, which is a nice touch, and it tries a little harder to clean up those tempfiles.
2020-05-13  Fix location of testdeb in added regression tests  (Julian Andres Klode)
2020-05-12  SECURITY UPDATE: Fix out of bounds read in .ar and .tar implementation (CVE-2020-3810)  (Julian Andres Klode)
When normalizing ar member names by removing trailing whitespace and slashes, an out-of-bounds read can be caused if the ar member name consists only of such characters, because the code did not stop at 0, but would wrap around and continue reading from the stack, without any limit.

Add a check to abort if we reached the first character in the name, effectively rejecting the use of names consisting just of slashes and spaces.

Furthermore, certain error cases in arfile.cc and extracttar.cc have included member names in the output that were not checked at all and might hence not be nul terminated, leading to further out-of-bounds reads.

Fixes Debian/apt#111
LP: #1878177
2020-05-08  Allow aptitude to MarkInstall broken packages via FromUser  (David Kalnischkies)
apt marks packages coming from the commandline, among others, as protected to ensure the various resolver parts do not fiddle with the state of these packages. aptitude (and potentially others) do not, so the state is modified (to a Keep, which for uninstalled packages means they are not going to be installed) due to being uninstallable before the call fails – basically reverting at least some state changes the call made before it realized it has to fail, which is usually a good idea, except if users expect you not to do it.

They do set the FromUser option though, which has, besides controlling the auto bit, also gained the notion of "the user is always right" over time and can be used for this one here as well, preventing the state revert.

References: 0de399391372450d0162b5a09bfca554b2d27c3d
Reported-By: Jessica Clarke <jrtc27@debian.org> on IRC