|
This seems to be needed for the Hebrew translations.
Gbp-Dch: ignore
|
|
Use the witness/byproducts approach to build the translations. A
byproduct of a command is like an output, but may be older than the
input.
Here, we generate a template with headers in the usual way as a
witness (and for Launchpad translations), but we also generate a
.pot-tmp0 template file without a header that gets copied to a
.pot-tmp byproduct only if it changed. This way, the .pot-tmp is
only updated if an actual translatable string changed. We also
create a custom target for the .pot file that we'll depend on
later in the overall target creating the .mo files, to ensure
that the template is built before we try to build the .mo files.
Then we make the msgmerge depend on the .pot-tmp instead of the
.pot file, which means that msgmerge and msgfmt only get re-run if
a string change occurred.
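A minimal sketch of that wiring, with illustrative names (apt.pot,
${sources}) rather than the real build code:

    set(pot ${PROJECT_BINARY_DIR}/apt.pot)
    add_custom_command(
        OUTPUT ${pot}              # witness, with header
        BYPRODUCTS ${pot}-tmp      # allowed to stay older than the inputs
        COMMAND xgettext --keyword=_ -o ${pot} ${sources}
        COMMAND xgettext --keyword=_ --omit-header -o ${pot}-tmp0 ${sources}
        # copy_if_different keeps the old timestamp if nothing changed, so
        # dependents of ${pot}-tmp are only rebuilt on real string changes
        COMMAND ${CMAKE_COMMAND} -E copy_if_different ${pot}-tmp0 ${pot}-tmp
        DEPENDS ${sources}
        WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}
    )
    add_custom_target(apt-pot ALL DEPENDS ${pot})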
Gbp-Dch: ignore
|
|
This is really useful stuff to have.
Gbp-Dch: ignore
|
|
Merge all the per-domain templates into one template file using
msgcomm, stripping any line numbers in the input files, and sorting
the output by file.
This should create reasonably stable .pot and .po files that do not
change just because files move around. It should also be resilient
against some line changes, as long as one translated line is not
moved before/after another translated line.
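Roughly the merge step described above, with two illustrative
per-domain templates:

    # Union of all per-domain templates; locations stripped, sorted by file
    add_custom_command(
        OUTPUT ${PROJECT_BINARY_DIR}/apt-all.pot
        COMMAND msgcomm --more-than=0 --no-location --sort-by-file
                --output-file=${PROJECT_BINARY_DIR}/apt-all.pot
                ${PROJECT_BINARY_DIR}/apt.pot
                ${PROJECT_BINARY_DIR}/apt-utils.pot
        DEPENDS ${PROJECT_BINARY_DIR}/apt.pot
                ${PROJECT_BINARY_DIR}/apt-utils.pot
    )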
Gbp-Dch: ignore
|
|
Rework the arguments to apt_add_translation_domain so a user
can specify TARGETS and SCRIPTS, the latter being Shell scripts.
For each language (TARGETS being C++, SCRIPTS being Shell), a separate
template is generated via xgettext. Those templates are then merged
together by using msgcomm. In case there are no Shell scripts in
the translation domain, msgcomm will receive /dev/null instead of
a shell translation template.
This also reintroduces line numbers, as msgcomm would otherwise
re-order the merged files not only by filename, but also by message
string. It's unclear why it does that; it could just leave
strings within a file alone.
In contrast to the old build system, we use xgettext for shell scripts
instead of bash --dump-strings, as it's just easier to use the same
tool for everything. We also create valid headers.
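A sketch of how the argument handling could look; the function name
is from the text above, the body is a simplified assumption:

    # Usage (illustrative):
    #   apt_add_translation_domain(DOMAIN apt TARGETS apt apt-private
    #                              SCRIPTS dselect)
    function(apt_add_translation_domain)
        cmake_parse_arguments(NLS "" "DOMAIN" "TARGETS;SCRIPTS" ${ARGN})
        set(cxx_sources "")
        foreach(target IN LISTS NLS_TARGETS)
            get_target_property(srcs ${target} SOURCES)
            list(APPEND cxx_sources ${srcs})
        endforeach()
        # Run 'xgettext -L C++' over ${cxx_sources} and 'xgettext -L Shell'
        # over ${NLS_SCRIPTS}, then msgcomm the two results into
        # ${NLS_DOMAIN}.pot; /dev/null stands in for a missing shell part.
        if(NOT NLS_SCRIPTS)
            set(shell_template /dev/null)
        endif()
    endfunction()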
|
|
This makes debugging things easier.
Gbp-Dch: ignore
|
|
This gets rid of the line numbers, adds the plural keyword,
and makes msgfmt print statistics, so we know how well translated
we are.
Gbp-Dch: ignore
|
|
I wondered why the template was not rebuilt after I changed a
file; now I have the answer.
Gbp-Dch: ignore
|
|
This is needed in a lot of places. Also adjust config.h.in to use
it instead of the bare email address.
Gbp-Dch: ignore
|
|
I forgot this one, sorry
|
|
First of all, instead of creating the files at configure time,
generate them using a normal target. This has the huge advantage
that they are rebuilt if their input changes. While we are at it,
also add dependencies on the vendor entity files.
This also fixes the path to the vendor script, which was
previously given as a relative path, which obviously won't work
when running from inside a deeper subdirectory.
To speed things up, pass the --vendor option to getinfo, so we
do not have to find out the current vendor in getinfo all over
again.
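The shape of the change might be something like this; the output
file name and the 'vendor-entities' subcommand are assumptions:

    # Generated at build time rather than configure time, so the file
    # is regenerated whenever the entity files or the script change.
    add_custom_command(
        OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/apt-vendor.ent
        COMMAND sh -c "${PROJECT_SOURCE_DIR}/vendor/getinfo --vendor ${CURRENT_VENDOR} vendor-entities > ${CMAKE_CURRENT_BINARY_DIR}/apt-vendor.ent"
        DEPENDS ${PROJECT_SOURCE_DIR}/vendor/getinfo
                ${PROJECT_SOURCE_DIR}/vendor/${CURRENT_VENDOR}/apt-vendor.ent
        WORKING_DIRECTORY ${PROJECT_SOURCE_DIR}
    )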
Gbp-Dch: ignore
|
|
Cache the current vendor, so we do not have to rerun getinfo when
reconfiguring stuff. This also has the nice effect of making the
vendor configurable, so you can manually specify it on a platform
that might not have dpkg (not that building without dpkg works
yet).
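A sketch of the caching, assuming the getinfo 'current' command
that appears further down this log:

    if(NOT DEFINED CURRENT_VENDOR)
        execute_process(
            COMMAND ${PROJECT_SOURCE_DIR}/vendor/getinfo current
            OUTPUT_VARIABLE detected_vendor
            OUTPUT_STRIP_TRAILING_WHITESPACE
        )
        # Cached: reconfiguring skips the detection, and a user can
        # override it manually with  cmake -DCURRENT_VENDOR=...
        set(CURRENT_VENDOR "${detected_vendor}"
            CACHE STRING "Vendor to build for")
    endif()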
Gbp-Dch: ignore
|
|
This can be used to query a field for a specific vendor. It
also speeds things up a lot if we can cache the current vendor
in cmake and pass it to further getinfo invocations.
Gbp-Dch: ignore
|
|
No need to involve pkg-config when CMake has builtin support.
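For instance, with a find module CMake ships (zlib as a stand-in
here; the apt-pkg target name is taken from elsewhere in this log):

    find_package(ZLIB REQUIRED)   # builtin FindZLIB, no pkg-config needed
    target_include_directories(apt-pkg PRIVATE ${ZLIB_INCLUDE_DIRS})
    target_link_libraries(apt-pkg PRIVATE ${ZLIB_LIBRARIES})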
Gbp-Dch: ignore
|
|
Look in build/apt-pkg and build/apt-inst instead of build/bin.
Gbp-Dch: ignore
|
|
This new packaging is much easier to read, although the duplication
in the install files is a bit annoying. We should probably also get
rid of the movefiles for solvers, planners, and the https method; but
then we have to keep track of which methods exist in the apt package.
Another disadvantage is that building only the documentation packages
also requires building the code, as there's no way to turn off code
building in the project.
|
|
This early support seems a bit hacky, but it's a hard switch: The
integration tests do not understand the old build system anymore
afterwards. I don't really like that.
|
|
Build HTML docbook guides (untranslated) and manual pages
(including translations). Also install the examples in the
example subdirectory.
Translation of docbook guides has not been implemented yet,
but should be easy to do. The code also needs some cleanup
to automatically detect the available translations.
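One plausible shape for a manpage rule; the stylesheet path and
the file names are assumptions:

    add_custom_command(
        OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/apt.8
        COMMAND xsltproc --nonet
                /usr/share/xml/docbook/stylesheet/docbook-xsl/manpages/docbook.xsl
                ${CMAKE_CURRENT_SOURCE_DIR}/apt.8.xml
        DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/apt.8.xml
        WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
    )
    install(FILES ${CMAKE_CURRENT_BINARY_DIR}/apt.8
            DESTINATION share/man/man8)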
|
|
Introduce support for building translation domain-specific
templates, merging them with the translations, and building
a language-specific .mo file.
The invocation of xgettext is done in the project source
directory, not in the current source directory, and all paths
are made relative to the project root, in order to have clean
templates.
This only supports the C++ source code for now; it unfortunately
does not handle the shell scripts of dselect yet.
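The per-language half could look roughly like this, with 'de' and
the po/ paths as illustrative stand-ins:

    add_custom_command(
        OUTPUT ${PROJECT_BINARY_DIR}/locale/de/LC_MESSAGES/apt.mo
        COMMAND ${CMAKE_COMMAND} -E make_directory
                ${PROJECT_BINARY_DIR}/locale/de/LC_MESSAGES
        COMMAND msgmerge --quiet -o de.merged.po
                ${PROJECT_SOURCE_DIR}/po/de.po
                ${PROJECT_BINARY_DIR}/po/apt.pot
        COMMAND msgfmt -o ${PROJECT_BINARY_DIR}/locale/de/LC_MESSAGES/apt.mo
                de.merged.po
        DEPENDS ${PROJECT_BINARY_DIR}/po/apt.pot
                ${PROJECT_SOURCE_DIR}/po/de.po
        WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
    )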
|
|
Introduce an initial CMake buildsystem. This build system can build
a fully working apt system without translation or documentation.
The FindBerkeleyDB module is from kdelibs, with some small
adjustments to also look in db5 directories.
Initial work on this CMake build system started in 2009, and was
resumed in August 2016.
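The db5 adjustment presumably amounts to extra search locations,
along the lines of (variable names assumed):

    find_path(BERKELEY_DB_INCLUDE_DIRS db.h
              PATH_SUFFIXES db5 db5.3)      # the added db5 directories
    find_library(BERKELEY_DB_LIBRARIES NAMES db db5)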
|
|
With this change, the 'library' command looks for a library libX
in the directories: build/bin, */X, X.
This allows it to find the library when building with the
upcoming CMake backend, which places the libraries in a
subdirectory of the build tree with the same name as the source
tree.
For example, if building in 'build/', the apt-pkg library
will be available at 'build/apt-pkg/libapt-pkg.so.5.0'.
In case there are multiple instances of a library,
the newest one will be used.
Gbp-Dch: ignore
|
|
This simplifies the design a bit, as we do not need to read the
major ABI version number from some file / command.
Gbp-Dch: ignore
|
|
This makes it easier to write a generic substitution tool for
handling substitutions in apt-key.in and sources.list.in
Gbp-Dch: ignore
|
|
Introduce the 'current' command to eventually replace the current
symbolic link. The current command does roughly the same as the
makefile, the code has just been cleaned up a bit to work better
as a shell function.
Gbp-Dch: ignore
|
|
Commit b559d4846018c3adac362c6f1d0d697956586208 updated the
documentation to refer to apt.systemd.daily instead of the
apt cron job, introducing fuzzy strings in all the translations.
Gbp-Dch: ignore
|
|
This fixes a test failure in the cmake branch, which contains
subdirectories in the methods output dir.
|
|
AC_CHECK_FUNCS() defines HAVE_* variables, but AC_CHECK_FUNC()
does not.
Anyway: We do not have any code using HAVE_PTSNAME_R, so
just remove it.
|
|
This was disabled in 1999 by jgg due to "glibc bugs". Let's hope
those are fixed now, 17 years later.
|
|
doc: update path to periodic options script
|
|
While autotools defines all macros to 1 explicitly, CMake only
defines them without a value. In such a case, #if fails with an
error and #ifdef works.
In preparation for a possible switch to CMake, and to clean up
the code (the rest uses #ifdef), use #ifdef here.
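Concretely, with a config.h generated from a template
(HAVE_PTSNAME_R is borrowed from the commit above):

    # config.h.in contains:  #cmakedefine HAVE_PTSNAME_R
    # configure_file() expands that to '#define HAVE_PTSNAME_R' -- no
    # value -- so '#if HAVE_PTSNAME_R' is a preprocessor error while
    # '#ifdef HAVE_PTSNAME_R' works with both styles.
    include(CheckSymbolExists)
    set(CMAKE_REQUIRED_DEFINITIONS -D_GNU_SOURCE) # ptsname_r is a GNU extension
    check_symbol_exists(ptsname_r "stdlib.h" HAVE_PTSNAME_R)
    configure_file(${PROJECT_SOURCE_DIR}/config.h.in
                   ${PROJECT_BINARY_DIR}/config.h)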
|
|
Create a temporary configuration file with a dump of our
configuration and pass that to apt-key.
LP: #1607283
|
|
Create a local exiter object which cleans up files on exit.
|
|
Previously, when data could be created and sig not, we would unlink
sig, not data (and vice versa).
|
|
gpgconf wasn't always equipped with a --kill option, as
highlighted by our testcases failing on Travis and co, which use a
much older version of gpg2. As this is just for cleaning up
slightly faster, we ignore any error such a call might produce and
carry on. Use a recent enough gpg2 version if you need the
immediate killing…
Gbp-Dch: Ignore
Reported-By: Travis CI
|
|
apt-key has (usually) no secret key material so it doesn't really need
the agent at all, but newer gpgs insist on starting it anyhow. The
agents die off rather quickly after the underlying home-directory is
cleaned up, but that is still not fast enough for tools like sbuild
which want to unmount but can't as the agent is still hanging onto a
non-existent homedir.
Reported-By: Johannes 'josch' Schauer on IRC
|
|
Followup of b58e2c7c56b1416a343e81f9f80cb1f02c128e25.
Still a regression of sorts of 8b79c94af7f7cf2e5e5342294bc6e5a908cacabf.
Closes: 832044
|
|
If a solver/planner exits before apt is done writing we will generate
write errors. Solvers like 'dump' can be pretty quick in failing but
produce a valid EDSP error report apt should read, parse, and
display instead of just discarding it even though we had write
errors.
|
|
There is no point in trying to perform Write/Read on a FileFd which
already failed as they aren't going to work as expected, so we should
make sure that they fail early on and hard.
|
|
Reported-By: cppcheck
Gbp-Dch: Ignore
|
|
Simulations are frequently run by unprivileged users who
naturally don't have the permissions to write to the default
location for the EIPP file. Neither failing hard nor warning is
good here: running in simulation mode doesn't mean we don't want
to run the logging (EIPP runs the same regardless of a simulated
or 'real' run), but showing the warnings is relatively pointless
in the default setup. So, in case we would produce errors while
performing a simulation, we discard the warnings and carry on.
Running apt with an external planner wouldn't have generated these
messages, btw.
Closes: 832614
|
|
If another file in the transaction fails and hence dooms the
transaction, we can end up in a situation in which a -patched file
(= rred writes the result of the patching to it) remains in the
partial/ directory.
The next apt call will perform the rred patching again and write
its result to the -patched file again, but instead of starting
with an empty file as intended, it will overwrite the content
previously in the file. That is harmless if the new content
happens to be longer than the old content; but if it isn't, parts
of the old content remain in the file, and the file will still
pass verification, as the new content written to it matches the
hashes. If the entire transaction passes, the file will be moved
to the lists/ directory, where it might or might not trigger
errors depending on whether the old content which remained forms a
valid file together with the new content.
This has no real security implications as no untrusted data is involved:
The old content consists of a base file which passed verification and a
bunch of patches which all passed multiple verifications as well, so the
old content isn't controllable by an attacker and the new one isn't
either (as the new content alone passes verification). So the best an
attacker can do is letting the user run into the same issue as in the
report.
Closes: #831762
|
|
The rewrite in 742f67eaede80d2f9b3631d8697ebd63b8f95427 is based on the
assumption that the pipeline will always be at least one item short each
time it is called, but the logs in #832113 suggest that this isn't
always the case. I fail to see how at the moment, but the old
implementation had this behavior, so restoring it can't really hurt, can
it?
|
|
Also fixes the message itself to mention the correct option name,
as noticed in #832113.
|
|
We read the entire input file we want to patch anyhow, so we can
also calculate the hash for that file and compare it with what we
expected it to be.
Note that this isn't really a security improvement as a) the file
we patch is trusted & b) if the input is incorrect, the result
will hardly be matching, so this is just for failing slightly
earlier with a more relevant error message (although, in terms of
rred, the error is ignored and a complete download is attempted
instead).
|