path: root/apt-pkg
2021-01-13  pkgcachegen: Avoid write to old cache for Version::Extra  (Julian Andres Klode)
Assigning the result of AllocateInMap directly to Ver->d caused Ver->d to be resolved first, and hence if Ver was remapped during the AllocateInMap, we were trying to assign to the old value. Closes: #980037
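A minimal illustration of the hazard, with assumed names (Cache, AllocateVersion), not apt's real MMap-based types: when the target object lives in storage that the allocation may move, the allocation has to complete before the left-hand side is resolved.

    #include <cstddef>
    #include <vector>

    struct Version { std::size_t d; };

    // Illustrative cache: all Version records live in one growable vector, so
    // allocating a new record may move every existing one (the "remap").
    struct Cache {
        std::vector<Version> versions;

        std::size_t AllocateVersion() {
            versions.push_back(Version{});   // may reallocate and move 'versions'
            return versions.size() - 1;
        }
    };

    void assignExtra(Cache &cache, std::size_t verIdx) {
        // BAD:  Version &Ver = cache.versions[verIdx];
        //       Ver.d = cache.AllocateVersion();   // Ver may be dangling by now
        // GOOD: allocate first, then re-resolve the reference and assign.
        std::size_t const idx = cache.AllocateVersion();
        cache.versions[verIdx].d = idx;
    }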
2021-01-11  Call ischroot with -t  (Julian Andres Klode)
We interpreted "cannot detect chroot" as "not a chroot", but it's arguably the better idea to treat it as a chroot, to avoid new behavior from phased updates in situations where it's unclear (no /proc mounted, for example).
2021-01-11  kernels: Fix std::out_of_range if no kernels to protect  (Julian Andres Klode)
In case we did not find any kernels to protect, the regular expression will be empty, and trying to substr(1) it will fail.
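A hedged sketch of the guard (assumed helper name; the real code assembles the expression differently): if the joined list is empty there is no leading separator to strip, and substr(1) on an empty string throws std::out_of_range.

    #include <string>

    // Finalize a "|kernel-a|kernel-b" style alternation by dropping the
    // leading '|'; return an empty pattern if nothing was collected.
    std::string finalizeKernelRegex(std::string const &joined) {
        if (joined.empty())
            return "";              // no kernels to protect: avoid substr on ""
        return joined.substr(1);    // drop the leading separator
    }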
2021-01-08  Merge branch 'pu/small-fixes' into 'master'  (Julian Andres Klode)
Pu/small fixes. See merge request apt-team/apt!151
2021-01-08  kernels: remove spurious || false  (Julian Andres Klode)
Gbp-Dch: ignore
2021-01-08  Fix getMachineID copy-paste error  (Julian Andres Klode)
Gbp-Dch: ignore
2021-01-08  Implement update --error-on=any  (Julian Andres Klode)
People have been asking for a feature to error out on transient network errors for a while; this gives them one, while keeping the door open for other modes we need, such as --error-on=no-success, which we need to determine when to retry the daily update job. Closes: #594813 (and a whole bunch of duplicates...)
2021-01-08  Phase using source version to be binNMU-correct  (Julian Andres Klode)
If we have different binNMU versions on different architectures, we don't want madness to ensue. This is a change from how update-manager does things, as Ubuntu does not have binNMUs, but I believe it's the right thing to do for a generic solution.
2021-01-08  Add support for Phased-Update-Percentage  (Julian Andres Klode)
This adds support for Phased-Update-Percentage by pinning upgrades that are not to be installed down to priority 1. The output of policy has been changed to show the level of phasing, and the documentation has been improved to describe how phased updates work. The patch detects whether it is running in a chroot and, if so, always includes phased updates, restoring classic apt behavior to avoid behavioral changes on buildd chroots. Various options are added to control all this:
 * APT::Get::{Always,Never}-Include-Phased-Updates and their legacy update-manager equivalents, to always or never include phased updates
 * APT::Machine-ID can be set to a UUID string to have all machines in a fleet phase the same (see the sketch below)
 * Dir::Etc::Machine-ID is odd in that its default is sort of like ../machine-id, but not really, as ../machine-id would look up $PWD/../machine-id rather than resolving relative to Dir::Etc; still, it allows you to override the path to machine-id (as opposed to the value)
 * Dir::Bin::ischroot is the path to the ischroot(1) binary, which is used to detect whether we are running in a chroot.
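A hedged sketch of how a fleet can phase consistently (assumed helper names; the exact derivation apt uses may differ): derive a stable number in [0, 100) from the machine id together with the source package and version, and include the update only if that number is below Phased-Update-Percentage.

    #include <random>
    #include <string>

    static int phasedScore(std::string const &machineId,
                           std::string const &srcPkg,
                           std::string const &srcVer) {
        // Same seed string on every machine with the same machine-id and the
        // same update, hence the same score everywhere in the fleet.
        std::string const seedStr = machineId + "-" + srcPkg + "-" + srcVer;
        std::seed_seq seed(seedStr.begin(), seedStr.end());
        std::minstd_rand rng(seed);
        return static_cast<int>(rng() % 100);
    }

    static bool includePhasedUpdate(std::string const &machineId,
                                    std::string const &srcPkg,
                                    std::string const &srcVer,
                                    int phasedUpdatePercentage) {
        return phasedScore(machineId, srcPkg, srcVer) < phasedUpdatePercentage;
    }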
2021-01-08  Merge branch 'pu/optional-immediate' into 'master'  (Julian Andres Klode)
Make immediate configuration optional. See merge request apt-team/apt!148
2021-01-08  Make immediate configuration optional  (Julian Andres Klode)
The benefit of immediate configuration is that Essential packages are configured immediately, so if they wrongly do not work while unconfigured they will not break later packages. However, we have reached the point where dependencies on the essential set are too complex for immediate configuration to always work, causing installations to error out at the end despite having succeeded, because we did not correctly return the error here and did not check for pending errors before running dpkg. Given that we check and configure any packages at the end that have not been configured yet, or fail if we can't configure them, making immediate configuration optional is the best way forward: it orders as it does now, but then does not spuriously fail after having successfully installed everything. Closes: #973305, #188161, #211075, #649588 LP: #1871268
2021-01-07  Merge branch 'pu/depends' into 'master'  (Julian Andres Klode)
?depends patterns and friends. See merge request apt-team/apt!146
2021-01-04  Only keep up to 3 (not 4) kernels  (Julian Andres Klode)
This fixes a problem on Ubuntu systems where the /boot partition has been sized to manage 3 kernels, but does not really work with 4 kernels, which was causing problems all over the place.
2021-01-04  Determine autoremovable kernels at run-time  (Julian Andres Klode)
Our kernel autoremoval helper script protects the currently booted kernel, but it only runs whenever we install or remove a kernel, causing it to protect the kernel that was booted at that point in time, which is not necessarily the same kernel as the one that is running right now. Reimplement the logic in C++ such that we can calculate it at run-time: provide a function that produces a regular expression matching all kernels that need protecting, and change the default root set function in the DepCache to make use of that expression. Note that the code groups the kernels by version as before, and then marks all kernel packages with the same version. This optimized version inserts a virtual package $kernel into the cache when building it, to avoid having to iterate over all packages in the cache to find the installed ones, significantly improving performance at a minor cost when building the cache. LP: #1615381
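A hedged sketch of the regular-expression idea (assumed helper names and an assumed package-name pattern; the real implementation matches apt's actual kernel package naming): collect the versions to protect and turn them into one anchored alternation that the root set function can test package names against.

    #include <regex>
    #include <string>
    #include <vector>

    // Escape regex metacharacters in a kernel version string like "5.8.0-36".
    static std::string escapeForRegex(std::string const &s) {
        static std::regex const special(R"([.^$|()\[\]{}*+?\\])");
        return std::regex_replace(s, special, R"(\$&)");
    }

    static std::regex buildProtectedKernelRegex(std::vector<std::string> const &versions) {
        std::string pattern;
        for (auto const &v : versions) {
            if (!pattern.empty())
                pattern += "|";
            pattern += "^linux-.*-" + escapeForRegex(v) + "$";
        }
        // Match nothing if there is no kernel to protect.
        return std::regex(pattern.empty() ? "^$" : pattern);
    }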
2021-01-04  depcache: Cache our InRootSetFunc  (Julian Andres Klode)
This avoids the cost of setting up the function every time we mark and sweep.
2020-12-27  Implement ?reverse-depends/~R and friends  (Julian Andres Klode)
This was easy.
2020-12-27  patterns: Add dependency patterns ?depends, ?conflicts, etc.  (Julian Andres Klode)
These match the target package, not target versions, which is slightly unfortunate but might make sense. Maybe we should add a variant that matches versions instead.
2020-12-18  Don't re-encode encoded URIs in pkgAcqFile  (David Kalnischkies)
This commit potentially breaks code feeding apt an encoded URI using a method which does not get URIs sent encoded. The webserverconfig requests in our tests are an example of this – but they only worked before if the server was expecting a double encoding, as that was what was happening to an encoded URI: so they were unlikely to work as expected in practice. Now with the new methods we can drop this double encoding and rely on the URI being passed properly (and without modification) between the layers, so that passing in encoded URIs should now work correctly.
2020-12-18  Keep URIs encoded in the acquire system  (David Kalnischkies)
We do not deal a lot with URIs which need encoding, but when we do, it is a pain that we store them decoded in the acquire system, as it means we have to decode and re-encode URIs eventually, which potentially gives us slightly different URIs. We see that in our own testing framework while setting up redirects, as the config options are effectively double-encoded and decoded to pass them around successfully, as otherwise %2f and / in a URI are treated the same. This commit adds the infrastructure for methods to opt into getting URIs sent in encoded form (and returning them to us in encoded form, too) so that we eventually do not have to touch the URIs, which is how it should be. This means though that we have to deal with methods which do not support this yet (aka: all at the moment), for which we decode and encode while communicating with them.
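A tiny illustration of why decoding is lossy (the example URL is made up): once %2f has been decoded to /, it cannot be told apart from a real path separator, so re-encoding cannot restore the original URI.

    #include <iostream>
    #include <string>

    int main() {
        // "a%2fb" is a single path component that contains a slash ...
        std::string const encoded = "http://example.org/dists/a%2fb/Release";
        // ... while "a/b" is two components; after decoding, both look alike,
        // which is why the acquire system now keeps URIs encoded end to end.
        std::string const decoded = "http://example.org/dists/a/b/Release";
        std::cout << encoded << '\n' << decoded << '\n';
    }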
2020-12-17  Do not require libxxhash-dev for including pkgcachegen.h  (Julian Andres Klode)
2020-12-15  Unroll pkgCache::sHash 8 times, break up dependency  (Julian Andres Klode)
Unroll pkgCache::sHash 8 times and break up the dependency between the iterations by expanding the calculation H(n) = 33 * H(n-1) + c 8 times rather than performing it 8 times in sequence. This seems to yield about a 0.4% performance improvement. I tried unrolling 4 and 2 bytes as well, those only having 3 ifs at the end rather than 1 small loop; but that was actually slower - potentially the code got too large and the cache went bonkers. I also tried unrolling 4 times instead of 8, thinking that smaller code might yield better results overall, but that was slower as well.
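A hedged sketch of the unrolling idea (simplified; not the exact sHash code, which also lowercases its input): expand the recurrence so that the eight per-byte terms become multiplications by fixed powers of 33 that no longer depend on one another.

    #include <cstdint>

    static uint32_t hash8(uint32_t h, unsigned char const *c) {
        // Powers of 33, computed at compile time (unsigned wrap-around is fine).
        constexpr uint32_t p1 = 33u,      p2 = p1 * 33u, p3 = p2 * 33u,
                           p4 = p3 * 33u, p5 = p4 * 33u, p6 = p5 * 33u,
                           p7 = p6 * 33u, p8 = p7 * 33u;
        // H(n) = 33*H(n-1) + c expanded 8 times: the additions are independent,
        // so the CPU can overlap them instead of walking a serial chain.
        return h * p8 + c[0] * p7 + c[1] * p6 + c[2] * p5 + c[3] * p4
                      + c[4] * p3 + c[5] * p2 + c[6] * p1 + c[7];
    }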
2020-12-15  Use XXH3 for cache, hash table hashing  (Julian Andres Klode)
XXH3 is faster than both our CRC32c implementation and the DJB hash for hash table hashing, so meh, let's switch to it.
2020-12-10  Raise APT::Cache-HashtableSize to 196613  (Julian Andres Klode)
We now have over 100k package names; my Ubuntu system already has 125k, so increase the hash table size to match. This will cost us about a MB in cache size, but gives a very nice speed up of somewhere around 3%-4%.
2020-12-09  CVE-2020-27350: tarfile: integer overflow: Limit tar items to 128 GiB  (Julian Andres Klode)
The integer overflow was detected by DonKult, who added a check like this: (std::numeric_limits<decltype(Itm.Size)>::max() - (2 * sizeof(Block))) This deals with the code as is, but is still a fairly big limit, and could become fragile if we change the code. Let's limit our file sizes to 128 GiB, which should be sufficient for everyone. Original comment by DonKult: The code assumes that it can add sizeof(Block)-1 to the size of the item later on, but if we are close to a 64-bit overflow this is not possible. Fixing this seems too complex compared to just ensuring there is enough room left, given that we will have a lot more problems the moment we act on files that large: if the item is that large, the (valid) tar including it probably doesn't fit into 64 bits either.
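A hedged sketch of the hard limit (assumed names): rejecting oversized members up front keeps the later additions of header-block-sized constants far away from the 64-bit boundary.

    #include <cstdint>

    static bool tarItemSizeOk(uint64_t itemSize) {
        // 128 GiB is far below what the later size arithmetic could overflow,
        // and far above anything apt should ever have to extract.
        constexpr uint64_t kMaxItemSize = 128ull * 1024 * 1024 * 1024;
        return itemSize <= kMaxItemSize;
    }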
2020-12-09  CVE-2020-27350: debfile: integer overflow: Limit control size to 64 MiB  (Julian Andres Klode)
Like the code in arfile.cc, MemControlExtract also has buffer overflows in the code allocating memory for parsing control files. Specify an upper limit of 64 MiB for control files to both protect against the Size overflowing (we allocate Size + 2 bytes) and protect a bit against control files consisting only of zeroes.
2020-12-09  tarfile: OOM hardening: Limit size of long names/links to 1 MiB  (Julian Andres Klode)
Tarballs have long names and long link targets structured by a special tar header with a GNU extension followed by the actual content (padded to 512 bytes). Essentially, think of a name as a special kind of file. The limit of a file size in a header is 12 bytes, aka 10**12 or 1 TB. While this works OK-ish for file content that we stream to extractors, we need to copy file names into memory, and this opens us up to an OOM DoS attack. Limit the file name size to 1 MiB, as libarchive does, to make things safer.
2020-12-09  CVE-2020-27350: arfile: Integer overflow in parsing  (Julian Andres Klode)
GHSL-2020-169: This first hunk adds a check that we have more files left to read in the file than the size of the member, ensuring that (a) the number is not negative, which caused the crash here, and (b) we similarly avoid other issues with trying to read too much data. GHSL-2020-168: Long file names are encoded by a special marker in the filename and then the real filename is part of what is normally the data. We did not check that the length of the file name is within the length of the member, which means that we got an overflow later when subtracting the length from the member size to get the remaining member size. The file createdeb-lp1899193.cc was provided by GitHub Security Lab and reformatted using apt coding style for inclusion in the test case; both of these issues have an automated test case in test/integration/test-ubuntu-bug-1899193-security-issues. LP: #1899193
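A hedged sketch of the two bounds checks described above (assumed names, not the real arfile.cc code): a member must fit into the bytes remaining in the archive, and an extended file name must fit into its member before its length is subtracted from the member size.

    #include <cstdint>

    static bool arMemberOk(uint64_t bytesLeftInArchive, uint64_t memberSize,
                           uint64_t longNameLen) {
        if (memberSize > bytesLeftInArchive)
            return false;   // GHSL-2020-169: member claims more than is there
        if (longNameLen > memberSize)
            return false;   // GHSL-2020-168: name would underflow the remaining size
        return true;
    }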
2020-12-07  patterns: Terminate short pattern by ~ and !  (Julian Andres Klode)
This allows patterns like ~nalpha~nbeta and ~nalpha!~nbeta to work like they do in APT. Also add a comment to remind readers that everything in START should be in short too. Cc: stable >= 2.0
2020-12-04  HexDigest: Silence -Wstringop-overflow  (Julian Andres Klode)
Add APT_ASSUME macro and silence -Wstringop-overflow in HexDigest(). The compiler does not know that the size of a hash is at most 512 bits, so it assumes we might be doing a stack buffer overflow of the VLA:

../apt-pkg/contrib/hashes.cc: In function ‘std::string HexDigest(gcry_md_hd_t, int)’:
../apt-pkg/contrib/hashes.cc:415:21: warning: writing 1 byte into a region of size 0 [-Wstringop-overflow=]
  415 |    Result[(Size)*2] = 0;
      |    ~~~~~~~~~~~~~~~~~^~~
../apt-pkg/contrib/hashes.cc:414:9: note: at offset [-9223372036854775808, 9223372036854775807] to an object with size at most 4294967295 declared here
  414 |    char Result[((Size)*2) + 1];
      |         ^~~~~~

Fix this by adding a simple assertion that tells the compiler the bound. This generates an extra two instructions in the normal code path, so it's not exactly super costly.
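A hedged sketch of the shape of the fix (simplified: a plain assert instead of the APT_ASSUME macro, and a made-up digest parameter instead of gcry_md_hd_t): bounding Size lets the compiler see that the final write is in range.

    #include <cassert>
    #include <string>

    std::string hexDigestSketch(unsigned char const *digest, int Size) {
        assert(Size > 0 && Size <= 512 / 8);   // a hash is at most 512 bits
        char Result[((Size) * 2) + 1];         // VLA, as in the original code
        static char const hex[] = "0123456789abcdef";
        for (int i = 0; i < Size; ++i) {
            Result[i * 2]     = hex[digest[i] >> 4];
            Result[i * 2 + 1] = hex[digest[i] & 0xf];
        }
        Result[(Size) * 2] = 0;                // the write the warning pointed at
        return std::string(Result);
    }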
2020-11-06  Do not immediately configure m-a: same packages in lockstep  (Julian Andres Klode)
In LP#835625, it was reported that apt did not unpack multi-arch packages in the correct order, and dpkg did not like that. The fix also made apt configure packages together, which is not strictly necessary. This turned out to cause issues now, because of dependencies on libc6:i386 that caused immediate configuration of that to not work. Work around the issue by not configuring multi-arch: same packages in lockstep if they have the immediate flag set. This will be the pseudo-essential set, and given how essential works, we mostly need the native arch to work correctly anyway. LP: #1871268 Regression-Of: 30426f4822516bdd26528aa2e6d8d69c1291c8d3
2020-10-21  Do not produce late error if immediate configuration fails, just warn  (Julian Andres Klode)
We are seeing more and more installations fail due to immediate configuration issues related to libc6. Immediate configuration is supposed to ensure that an essential package is configured immediately, just in case some other packages use a part of the essential package that only works if that package is configured. This used to be a warning; it was turned into an error in some commit I can't remember right now, but importantly, the error missed a return, which means that ordering completed successfully and packages were being installed anyway; and after all that happened successfully, we'd print an error at the end and exit with an error code, which is not super useful. Revert the error back to a warning such that the behavior stays the same but we do not fail (unless we mess up ordering, which then gets caught by a consistency check later on). Closes: #953260 Closes: #972552 LP: #1871268
2020-08-11  acquire: Do not hide _error messages in Fail()  (Julian Andres Klode)
If we have errors pending, always log them with our failure message to provide more context.
2020-08-10  Default Acquire::AllowReleaseInfoChange::Suite to "true"  (Julian Andres Klode)
Closes: #931566
2020-08-04  Merge branch 'master' into 'master'  (Julian Andres Klode)
Support marking all newly installed packages as automatically installed. See merge request apt-team/apt!110
2020-08-04  Merge branch 'pu/less-slaves' into 'master'  (Julian Andres Klode)
Remove master/slave terminology. See merge request apt-team/apt!124
2020-08-04  Replace whitelist/blacklist with allowlist/denylist  (Julian Andres Klode)
2020-08-04  Merge branch 'pu/apt-key-deprecated' into 'master'  (Julian Andres Klode)
Fully deprecate apt-key, schedule removal for Q2/2022. See merge request apt-team/apt!119
2020-07-02  Reorder config check before result looping for SRV parsing debug  (David Kalnischkies)
There is no need to iterate over all results if we will be doing nothing anyhow, as it isn't that common to have that debug option enabled.
2020-07-02  Add dependency points in the resolver also to providers  (David Kalnischkies)
We were traditionally adding points for some dependency types to the real package, but we should also do it for providers of that package to help the resolver, especially if the real package is for some reason not tagged for removal yet/anymore. While at it, we ensure that the points are only attributed once for each package, as especially with versioned provides a package can nowadays provide another package many times and would hence acquire a lot of points.
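A hedged sketch of the "credit only once" bookkeeping (assumed names; the real resolver works on pkgCache iterators, not strings): track which packages have already received points for the current dependency so multiple (versioned) provides cannot credit the same package twice.

    #include <map>
    #include <set>
    #include <string>
    #include <vector>

    struct DependencyScoring {
        std::map<std::string, int> Scores;

        // Called once per dependency: credit the real target and all providers,
        // but each package at most once even if it provides the target many times.
        void addPoints(std::vector<std::string> const &targets, int points) {
            std::set<std::string> credited;
            for (auto const &pkg : targets)
                if (credited.insert(pkg).second)
                    Scores[pkg] += points;
        }
    };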
2020-07-02  Filter out impossible solutions for protected propagation  (David Kalnischkies)
If the package providing the given solution is already tagged for removal (or at least for "not installing"), we can ignore this solution as it is not actually a possibility, which means we can avoid exploring the option and potentially forward the protected flag further if that helps in reducing the possibilities to a single one.
2020-07-02  Delay removals due to Conflicts until Depends are resolved  (David Kalnischkies)
Marking a package for removal is fine if we know that we have to remove that package, but if we are in an alternative branch we might not go this route in the end and would hence have a package pointlessly marked for removal that isn't questioned later on. We check if we are allowed to remove that package (to avoid working on the positive dependencies if not), but we mark it for removal only after all the other dependencies are successfully resolved. In an ideal world we would let the problemResolver do its job on them, but the resolver might decide against doing the removal and explore another option, like the next alternative, which might be a good idea, but is not the behaviour we had before, so that is the best we can do for now without changing the resolver drastically.
2020-06-29  Add basic support for the Protected field  (Julian Andres Klode)
This will be mapped to Important for the time being.
2020-06-14  Deduplicate EDSP Provides line of M-A:foreign packages  (David Kalnischkies)
M-A:foreign causes Provides to apply to all architectures, and as we wanted to avoid resolver changes for M-A those are done by explicitly creating these provides instead of forcing the resolvers to learn about this. The EDSP is a different beast though, and we don't need this trick here, especially as it leads to needless (but harmless) duplication. No sort+unique is done to avoid changing order (not that it should matter, but just to be sure), but the sets should be small enough to not make a huge difference either way.
2020-06-14  Tell EDSP solvers about all installed pkgs ignoring arch  (David Kalnischkies)
We usually tell EDSP solvers only about architectures we are configured to treat as native/foreign, but the system could have packages from other architectures installed (even if that is very unlikely) which could influence the solution (e.g. by requiring a removal), so we make sure to tell them about those as well.
2020-06-14  Do not send our filename-provides trick to EDSP solvers  (David Kalnischkies)
If a package is installed via an explicitly given deb file, we store the filename as a provides, so that the frontend can request the filename and get the usual "Selected foo instead of foo.deb" message. We do not need to trouble the EDSP solvers with that though, as these provides are not valid in various ways and we have already solved the link between commandline and package (and version) for them. Closes: #962741
2020-06-08  Support marking all newly installed packages as automatically installed  (Nicolas Schier)
Add option '--mark-auto' to 'apt install' that marks all newly installed packages as automatically installed. Signed-off-by: Nicolas Schier <nicolas@fjasle.eu>
2020-06-06  Do not hardcode (wrong) group and mode in setup warning  (David Kalnischkies)
Partial directories are created with 0700, but the parent is 0755, while the error message would report 0700 for both… that isn't right and can be pretty confusing. It turns out that the messages aren't marked for translation, so no unfuzzing is required and we just leave them untranslated for now, especially as the more detailed error strings derived from errno are translated. Reported-By: Wakko Warner <wakko@animx.eu.org> Closes: #962310
2020-06-03  Deal with duplicates in the solution space of a dep  (David Kalnischkies)
While we process the possible solutions we might modify other solutions, like discarding their candidates and such, so that when we reach them they might no longer be proper candidates. We also try to drop duplicates early on to avoid the simple cases of these, which test-explore-or-groups-in-markinstall triggers via its explicit duplication but which could also come via multiple provides. It only worked previously because we were ignoring current versions, which usually is okay except if they are marked for removal and we want to reinstate them so the ProblemResolver can decide which one later on.
2020-06-03  Allow 20 instead of 10 loops for pkgProblemResolver  (David Kalnischkies)
Especially if a lot of packages have to be removed due to conflicts that are not explicitly expressed, the problem resolver can take a few turns to remove them all. Allowing it to try a little longer if needed seems beneficial, as the worst that can happen is that we now take twice as long to present an error message to the user.
2020-06-02  Consider if a fix is successful before claiming it is  (David Kalnischkies)
For protected packages the "Fixing" done via KillList in the ProblemResolver will usually not happen, as the state change is not allowed, so the debug message is just confusing and the resolver is needlessly looping here (which might push it over the edge). So if we didn't do our thing successfully here, we short-circuit a bit to help the next iteration come to a solution.