|
For printf in C, field width and precision are counted in bytes rather than Unicode display width. [1]
> If the precision is specified, no more than that many bytes are written.
[1] Section 7.21.6, N2176, final draft of ISO/IEC 9899:2018 (C17/C18)
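A minimal standalone illustration of the byte-counting behaviour (hypothetical example, not part of this patch):

    #include <stdio.h>

    int main(void)
    {
        /* "日本語" is 9 bytes in UTF-8 but only 3 characters (6 columns). */
        const char *s = "日本語";
        /* precision counts bytes: this prints the first 6 bytes ("日本"),
         * not the first 6 display columns */
        printf("[%.6s]\n", s);
        /* field width counts bytes too: the string already occupies 9 bytes,
         * so only 1 byte of padding is added and columns end up misaligned */
        printf("[%10s]\n", s);
        return 0;
    }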
Thanks Andrew Gregory for suggesting a simpler approach.
Fixes FS#59229
Signed-off-by: Chih-Hsuan Yen <yan12125@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
If a package is corrupted (e.g. its signature or hash is invalid),
pacman removes the package file so that it can be redownloaded next time.
Remove the *.sig file as well to make sure no data is left behind for the
invalid package.
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
Signed-off-by: Ronan Pigott <rpigott@berkeley.edu>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
We forgot to remove m4/ in commit 454ea024383eab60295e4c4fdf2c329475887b2c
and now it's tragically reminding me of autotools!
Also take this opportunity to drop some symlinks in lib/libalpm/ for
libcommon source files. In autotools these were built specifically for
libalpm and needed to be available in that directory, but the meson
setup just has libalpm depend on libcommon. So these pseudo source files
aren't needed anymore.
Signed-off-by: Eli Schwartz <eschwartz@archlinux.org>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
FS#61661 notes that the comments for BUILDENV and OPTIONS give a "Defaults" value,
but that does not necessarily correspond to what the example makepkg.conf sets.
Change the comment to "Makepkg defaults" to indicate this is what makepkg will
do unless told otherwise.
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
Pacman has multiple ways to verify package content integrity:
- gpg signature
- sha256
- md5
These verification mechanisms overlap. A GPG signature already contains a
hash of the package content, so if a package signature is present then
pacman ignores the other two hash values. This worked well with signatures
embedded in the pacman database.
Recently pacman gained the ability to handle detached signatures (*.sig files
located next to the package files). If pacman verifies only the detached
signature, then one can replace the pkg+sig files with some other content and
pacman will still process it as a valid package. To prevent this we need to
verify database<->package integrity using the hash values stored in the
database.
This commit fixes FS#67232
The new debug output is:
checking package integrity...
debug: found cached pkg: /var/cache/pacman/pkg/ruby-2.7.1-2-x86_64.pkg.tar.zst
debug: sha256sum: 77baf61c62c5570b3a37cf0c3b16c5d9a97dde6fedd1a3528bf0cc5f96dd5e52
debug: checking sha256sum for /var/cache/pacman/pkg/ruby-2.7.1-2-x86_64.pkg.tar.zst
debug: sig data: <from .sig>
debug: checking signature for /var/cache/pacman/pkg/ruby-2.7.1-2-x86_64.pkg.tar.zst
debug: 1 signatures returned
debug: fingerprint: B5971F2C5C10A9A08C60030F786C63F330D7CB92
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
With the current master version, the 'keyring checking' step produces an error:
debug: returning error 6 from alpm_pkg_get_sig (../lib/libalpm/package.c: 274) : wrong or NULL argument passed
The package signature is still checked later, at the integrity verification step.
This commit fixes the keyring check; the debug log now looks like this:
debug: found cached pkg: /var/cache/pacman/pkg/ruby-2.7.1-2-x86_64.pkg.tar.zst
debug: found detached signature /var/cache/pacman/pkg/ruby-2.7.1-2-x86_64.pkg.tar.zst.sig with size 566
debug: found signature key: 786C63F330D7CB92
debug: looking up key 786C63F330D7CB92 locally
debug: key lookup success, key exists
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
Currently the list of supported archive formats is maintained in two
places, and repo-add does not actually get updated. :(
In the process, remove some of the duplicated logic when calling
bsdtar/compress_as.
Signed-off-by: Eli Schwartz <eschwartz@archlinux.org>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
get_compression_command() can now be used to check up front whether a
given compression extension is known and usable. This is
useful when writing tools in which an unknown compression type is a
fatal error.
Signed-off-by: Eli Schwartz <eschwartz@archlinux.org>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
Signed-off-by: Morten Linderud <morten@linderud.pw>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
In some cases (when trust_remote_name is used for a URL without a filename and
the server provides no Content-Disposition header) destfile_name will be
NULL. In that case the payload data is stored under tempfile_name and no
destfile_name is set.
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
At the end of the payload's use, _alpm_dload_payload_reset() is called,
which free()s these and other fields anyway.
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
The main payload's final name might be affected by URL redirects or by the
Content-Disposition HTTP header value.
We want to make sure that the accompanying *.sig filename always matches the
package filename, so ignore the final name/Content-Disposition for the *.sig file.
This also helps to fix a corner case in which the download URL does not contain
a filename and the server provides Content-Disposition for the main payload
but not for the signature payload.
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
Pacman has a 'key in keyring' verification step that makes sure the signatures
have a valid keyid. Currently pacman parses embedded package signatures only.
Add a fallback to detached signatures: if the embedded signature is missing,
pacman tries to read the corresponding *.sig file and gets the keyid from there.
Verification:
debug: found cached pkg: /var/cache/pacman/pkg/glib-networking-2.64.3-1-x86_64.pkg.tar.zst
debug: found detached signature /var/cache/pacman/pkg/glib-networking-2.64.3-1-x86_64.pkg.tar.zst.sig with size 310
debug: found signature key: A5E9288C4FA415FA
debug: looking up key A5E9288C4FA415FA locally
debug: key lookup success, key exists
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
If the *.pkg file exists but the *.sig file does not, we still have to pass
the pkg to the multi_download API.
To avoid redownloading the *.pkg file we use the CURLOPT_TIMECONDITION curl option.
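A minimal sketch of the idea (illustrative, not the actual dload.c code), assuming the local package path is known:

    #include <curl/curl.h>
    #include <sys/stat.h>

    static void skip_if_not_newer(CURL *curl, const char *localpath)
    {
        struct stat st;
        if(stat(localpath, &st) == 0) {
            /* only transfer the body if the remote file is newer than ours */
            curl_easy_setopt(curl, CURLOPT_TIMECONDITION, CURL_TIMECOND_IFMODSINCE);
            curl_easy_setopt(curl, CURLOPT_TIMEVALUE, (long)st.st_mtime);
        }
    }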
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
It is similar to _alpm_filecache_find() but does not return dynamically
allocated memory to the caller. Thus the caller does not need to free the
result.
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
The current code uses an incrementing counter to check whether a function
returned an error:

    errors += some_function();
    if(errors) { goto finish; }

Replace this with a more conventional pattern:

    errors = some_function();
    if(errors) { goto finish; }

Rename the 'errors' variable to the more typical 'ret'.
Avoid reporting both ALPM_EVENT_PKG_RETRIEVE_FAILED and
ALPM_EVENT_PKG_RETRIEVE_DONE in the error path.
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
Until now the caller of the ALPM download functionality has been in charge of
payload creation both for the main file (e.g. *.pkg) and for the accompanying
*.sig file. One advantage of this approach is that all payloads are
independent and can be fetched in parallel, thus exploiting the maximum
level of download parallelism.
To build the *.sig file URL we have been using simple string concatenation:
$requested_url + ".sig". Unfortunately there are cases where this does not
work. For example, an archlinux.org "Download From Mirror" link looks like
this: https://www.archlinux.org/packages/core/x86_64/bash/download/ and
it gets redirected to some mirror. But if we append ".sig" to the end of
that link URL and try to download it, archlinux.org returns a 404 error.
To overcome this issue we need to follow redirects for the main payload
first, find the final URL and only then append the '.sig' suffix.
This implies two things:
- the signature payload initialization needs to be moved to dload.c,
  as that is the place where we have access to the resolved URL
- *.sig is downloaded serially with the main payload, which reduces the
  level of parallelism
Move *.sig payload creation to dload.c. Once the main payload is fetched
successfully, we check whether the caller asked to download the accompanying
signature. If yes, create a new payload and add it to mcurl.
The *.sig payload does not use the server list of the main payload and thus
does not support mirror failover; the *.sig file comes from the same server
as the main payload.
Refactor the event loop in curl_multi_download_internal() a bit. Instead of
relying on curl_multi_check_finished_download() to return the number of new
payloads, we simply rerun the loop iteration one more time to check whether
there are any active downloads left.
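A hedged sketch of deriving the signature URL from the resolved package URL (illustrative only, not the actual dload.c code):

    #include <curl/curl.h>
    #include <limits.h>
    #include <stdio.h>

    /* after the main transfer has finished, ask curl where it ended up and
     * append ".sig" to that final URL rather than to the original request */
    static int build_sig_url(CURL *curl, char *sigurl, size_t buflen)
    {
        char *effective_url = NULL;
        if(curl_easy_getinfo(curl, CURLINFO_EFFECTIVE_URL, &effective_url) != CURLE_OK
                || effective_url == NULL) {
            return -1;
        }
        snprintf(sigurl, buflen, "%s.sig", effective_url);
        return 0;
    }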
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
When a .SRCINFO file is generated via `makepkg --printsrcinfo`, each
section is concluded with an empty line. This means that at the end of
the file, an empty line remains. This is considered a trailing
whitespace error. In fact, `git diff --check` will warn about this,
saying "new blank line at EOF."
Instead of closing each section off with an empty line, use the empty
line to separate sections, omitting the empty line at the end of the
file.
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
All users of _alpm_download() have been refactored to the new API.
It is time to remove the old _alpm_download() functionality now.
This change also removes obsolete SIGPIPE signal handler functionality
(this is a leftover from libfetch days).
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
|
|
Installing a remote package via its URL is an interesting case for the ALPM
API. Unlike package sync ('pacman -S pkg1 pkg2'), '-U' does not deal with a
server mirror list. Thus _alpm_multi_download() should be able to
handle downloads for payloads that have either the 'fileurl' field
or the pair of fields 'servers' and 'filepath' set.
The signature of alpm_fetch_pkgurl() has changed: it now accepts an
output list that is populated with the filepaths of the fetched packages.
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
|
|
Fixes FS#67000
Signed-off-by: Eli Schwartz <eschwartz@archlinux.org>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
RSA2048 may have been fine when this was written many moons ago, but it is
time this had a bump.
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
If it's not listed by --list-secret-keys, we don't care whether it has been
imported into your keyring: it's unusable. And you might not have a
private key at all in the no-keyid-specified case.
Signed-off-by: Eli Schwartz <eschwartz@archlinux.org>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
We pass this to gpg -u and this gpg option can accept a number of
different formats, not just the historical hexadecimal fingerprint we
assumed. We should not barf hard if a format is used which happens to
contain spaces.
This also fixes a validation bug. When we initially check if the desired
key is available, we don't quote spaces, so gpg goes ahead and treats
each space-separated string as a *different key* to search for,
returning partial matches, and returning success if at least one key is
found. But gpg --detach-sign -u will certainly not accept multiple keys!
Fixes FS#66949
Signed-off-by: Eli Schwartz <eschwartz@archlinux.org>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
In commit 882e707e40bbade0111cf3bdedbdac4d4b70453b we changed message
output to go to stdout by default, unless it was an error. The plain()
function doesn't *look* like an error function, but in practice it was
-- it's used to continue multiline messages, and all in-tree uses were
for warning/error.
This is a problem both because we're sending output to the wrong place,
and because in some cases, we were performing error logging from a
function which would otherwise return a value to be captured in a
variable using command substitution.
Fix this and straighten out the API by providing two functions: one for
continuing msg output, and one which wraps this by sending output to
stderr, for continuing error output.
Change all callers to use the second function.
|
|
This was broken in commit 882e707e40bbade0111cf3bdedbdac4d4b70453b,
which changed 'plain()' messages to go to stdout, which was then
captured as the download client in question: cmdline=("Aborting...").
The result was a very confusing error message e.g.
/usr/share/makepkg/source/file.sh: line 72: $'\E[1m': command not found
or with makepkg --nocolor:
/usr/share/makepkg/source/file.sh: line 72: Aborting...: command not found
The problem here is that we checked to see if an asynchronous subshell,
in our case <(...), failed, by checking if its captured stdout is
non-empty. Which is terrible, and also a limitation of old bash. But
bash 4.4 can use wait $! to retrieve the return value of an asynchronous
subshell. Now that we target that as our minimum, we can sanely handle errors
in such functions.
Losing error messages on stdout by capturing them in a variable instead
of printing them continues to be a problem, but this will be fixed
systematically in a later commit.
Signed-off-by: Eli Schwartz <eschwartz@archlinux.org>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
If something like source=(..."#commit=") is used, e.g. due to failed
variable expansion, we try to check out an empty refspec as nothing at
all, and end up just running "git checkout". This happens because we
fail at variable expansion too -- so let's quote our variables properly
and make sure git sees this as an empty refspec, so it can error out.
Also make sure it is interpreted as a ref instead of a path.
Signed-off-by: Eli Schwartz <eschwartz@archlinux.org>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
In order to use gettext on systems where it is not part of libc, the
correct linker flags are needed in libalpm.pc (for static compilation).
This has never been the case.
The new meson build system currently only checks for ngettext in libc,
but does not fall back to searching for the existence of -lintl; add it
to the libalpm dependencies.
Signed-off-by: Eli Schwartz <eschwartz@archlinux.org>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
Results in ~40s saved in each job.
Signed-off-by: Filipe Laíns <lains@archlinux.org>
|
|
This removed all information on dependency failures if the --syncdeps
flag was not used. A better approach is needed.
This reverts commit 4246a4cc4f0f87642cbbb6b375524b2e4c713412.
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
Given RFC 4880 provides the code to do this calculation, I am not sure
how I managed to stuff that up! This bug was only exposed when a
signature made with "include-key-block" was added to the Arch repos,
which provided a subpacket with the required size to hit this issue.
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
When building with -DNDEBUG, assert statements are compiled out to
no-ops. Thus, we can't depend on assignments or other computations
occurring inside the assert().
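A small illustration of the pitfall (hypothetical code, not from the tree):

    #include <assert.h>
    #include <string.h>

    size_t bad(const char *s)
    {
        size_t len;
        /* with -DNDEBUG the whole assert() vanishes, so len stays uninitialized */
        assert((len = strlen(s)) > 0);
        return len;
    }

    size_t good(const char *s)
    {
        size_t len = strlen(s);  /* compute unconditionally ... */
        assert(len > 0);         /* ... then assert on the result */
        return len;
    }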
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
It's either a waste of work, or triggers edge cases in some packages
(like coreutils-8.31) where the source file is readonly and cp gets a
permission denied error trying to overwrite it with an identical copy of
itself.
Also while we are at it, make the variable names something readable,
because I could barely tell what this was doing while editing it.
Signed-off-by: Eli Schwartz <eschwartz@archlinux.org>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
This removes support for autotools in favour of meson.
|
|
While iterating over the provides array, the find call for locating a
shared library may list multiple entries, which by itself does not
produce a stable, deterministic order and may vary depending on
the underlying filesystem.
To produce a stable listing and a reproducible .PKGINFO file, the result
of find is piped to sort with a fixed LC_ALL=C locale.
Signed-off-by: Levente Polyak <anthraxx@archlinux.org>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
This is not a warning: _parse_options() returns failure without even
parsing further lines, and the attempted pacman/pacman-conf program
execution immediately aborts. Warnings are for cases where, e.g., we
don't recognize a setting at all; we skip over it and have enough
confidence in this to continue executing the program.
The current implementation results in pacman-conf aborting with:
warning: config file /etc/pacman.conf, line 60: invalid value for 'ParallelDownloads' : '2.5'
error parsing '/etc/pacman.conf'
or pacman -Syu aborting with the entirely more cryptic:
warning: config file /etc/pacman.conf, line 59: invalid value for 'ParallelDownloads' : '2.5'
and this isn't just a problem for the newly added ParallelDownloads
setting, either; you could get the same problem if you specified a
broken XferCommand, but that's harder to hit as it's more accepting of input
and you probably won't see this except with unbalanced quotes.
Signed-off-by: Eli Schwartz <eschwartz@archlinux.org>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
This was only partially handled in the original implementation.
`pacman-conf | grep ILoveCandy` would tell you if it was set, but
querying directly by name would not.
Signed-off-by: Eli Schwartz <eschwartz@archlinux.org>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
This was forgotten in the initial implementation, so it was impossible
to figure out the value from a script, or correctly roundtrip the
config file.
Signed-off-by: Eli Schwartz <eschwartz@archlinux.org>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
Now that all callers of the old alpm_db_update() function are gone, we can
remove this implementation and rename the alpm_dbs_update() function to
alpm_db_update().
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
Create a list of dload_payloads and pass it to the new _alpm_multi_*
interface.
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
Multiplexed download requires the ability to draw a UI for multiple active
progress bars. To implement this we use ANSI codes to move the cursor up/down
and then redraw the required progress bar.
The `pacman_multibar_ui.active_downloads` field represents the list of active
downloads that correspond to progress bars.
`struct pacman_progress_bar` is the data structure for a progress bar.
In some cases (e.g. database downloads) we want to keep the progress bars in
order. In other cases (package downloads) we want to move completed items to
the top of the screen. The function `multibar_move_completed_up` allows
configuring this behavior.
Per discussion on the mailing list, we do not want to show download progress
for signature files.
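A hedged sketch of the ANSI-cursor technique (illustrative, not the actual pacman callback code):

    #include <stdio.h>

    /* move the cursor up to a bar's row, redraw it, then move back down */
    static void redraw_bar(int rows_up, const char *label, int percent)
    {
        if(rows_up > 0) {
            printf("\x1B[%dA", rows_up);               /* cursor up N lines */
        }
        printf("\r\x1B[K%-30s %3d%%", label, percent); /* clear line, redraw */
        if(rows_up > 0) {
            printf("\x1B[%dB", rows_up);               /* cursor down N lines */
        }
        fflush(stdout);
    }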
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
With the previous download interface the callback uses the first progress
event as 'download has started' signal. Unfortunately it does not work with
up-to-date files that never receive 'download progress' events.
Up-to-date database messages are currently handled in sync_syncdbs()
after the sequential download is completed and a result from ALPM is
received. But this is not going to work with multiplexed download
interface that returns the result only after all files are completed.
Another problem with 'first progress event is the beginning of the
download' is that such events time are unpredictable. Thus the UI progress
bar order might differ from what has been passed by client to
alpm_dbs_update() function. We actually want to keep the dbs progress bars
in a strict order.
To help to solve the given problems extend the download callback to
allow 2 more events - download started and completed. 'Download started'
events appear in the same order as in the list given by a client.
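A hedged sketch of what such a callback could look like; the identifiers below are illustrative, not necessarily the exact ALPM names:

    /* hypothetical event type: the frontend gets explicit start/complete
     * events instead of inferring the start from the first progress event */
    typedef enum {
        DL_EVENT_STARTED,    /* emitted in the same order as the request list */
        DL_EVENT_PROGRESS,   /* may never fire for files that are up to date */
        DL_EVENT_COMPLETED   /* success, failure, or "already up to date" */
    } dl_event_t;

    static void download_cb(const char *filename, dl_event_t event)
    {
        (void)filename; /* used only for labelling the bar in real code */
        switch(event) {
            case DL_EVENT_STARTED:   /* allocate and position the progress bar */ break;
            case DL_EVENT_PROGRESS:  /* update the bar */ break;
            case DL_EVENT_COMPLETED: /* finalize the bar, report the result */ break;
        }
    }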
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
Multiplexed database/file downloads will use multiple progress bars.
The UI logic is quite complicated, and printing error messages while
handling multiple progress bars is going to be challenging.
Instead we save all ALPM error messages to a list and flush
it at the end of the download process. Use an on_progress variable that
suppresses the printing of error messages.
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
curl_multi_download_internal() is the main loop that creates up to
'ParallelDownloads' easy curl handles, adds them to mcurl and then
performs the curl execution. This is when the parallel downloads happen.
Once any of the downloads completes, the function checks its result.
If the download failed, it initiates a retry with the next server
from the payload->servers list. On download completion all the payload
resources are cleaned up.
curl_multi_check_finished_download() is essentially a refactored version of
curl_download_internal() adapted for multi_curl. Once the mcurl porting is
complete, curl_download_internal() will be removed.
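A hedged sketch of the completion check (illustrative, not the actual dload.c code), assuming mcurl is the CURLM handle:

    #include <curl/curl.h>

    static void check_finished(CURLM *mcurl)
    {
        CURLMsg *msg;
        int msgs_left;
        while((msg = curl_multi_info_read(mcurl, &msgs_left))) {
            if(msg->msg != CURLMSG_DONE) {
                continue;
            }
            if(msg->data.result == CURLE_OK) {
                /* success: finalize and clean up the payload resources */
            } else {
                /* failure: pick the next server from the payload's server
                 * list, rebuild the URL, and re-add the easy handle */
            }
        }
    }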
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
It is an equivalent of _alpm_download() but accepts a list of payloads.
curl_multi_download_internal() is a stub at the moment and will be
implemented in later commits of this patch series.
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
dload_payload->curlerr is a field that is used only inside the
curl_download_internal() function. It can be converted to a local
variable.
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
To be able to run multiple downloads in parallel efficiently, we need to
use the curl_multi interface [1]. It introduces a set of APIs around a new
type of handle, 'CURLM'.
Create the CURLM object at application start and store it in the global ALPM
context.
The 'single-download' CURL handle moves to the payload struct. A new CURL
handle is created for each payload, with the intention that it be processed
by CURLM.
Note that curl_download_internal() is not ported to the CURLM interface
because the function will go away soon.
[1] https://curl.haxx.se/libcurl/c/libcurl-multi.html
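A minimal hedged sketch of the curl_multi pattern described above (illustrative, not the libalpm code):

    #include <curl/curl.h>

    static void fetch_one(const char *url)
    {
        CURLM *mcurl = curl_multi_init();   /* one CURLM held by the handle */
        CURL *curl = curl_easy_init();      /* one easy handle per payload  */
        int still_running = 0;

        curl_easy_setopt(curl, CURLOPT_URL, url);
        curl_multi_add_handle(mcurl, curl);

        do {                                /* drive transfers until done */
            curl_multi_perform(mcurl, &still_running);
            curl_multi_wait(mcurl, NULL, 0, 1000, NULL);
        } while(still_running);

        curl_multi_remove_handle(mcurl, curl);
        curl_easy_cleanup(curl);
        curl_multi_cleanup(mcurl);
    }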
Signed-off-by: Allan McRae <allan@archlinux.org>
|
|
This is an equivalent of alpm_db_update(), but for multiplexed (parallel)
download. The difference is that this function accepts a list of
databases to update, and the ALPM internals then download them in parallel
when possible.
Add a stub for _alpm_multi_download(), the function that will perform parallel
payload downloads in the future.
Introduce the dload_payload->filepath field that contains the URL path of the
file we download. It is like the fileurl field but does not contain the
protocol/server part. The rationale for this field is that with
the curl multi-download the server retry logic is going to move to a curl
callback, and the callback needs to be able to reconstruct the 'next'
fileurl. It can do this by taking the next server URL from the
'servers' list and concatenating it with filepath. Once the 'parallel
download' refactoring is over, the 'fileurl' field will go away.
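A hedged sketch of rebuilding the next fileurl from a server entry plus filepath (illustrative, not the actual implementation; the example values are hypothetical):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* e.g. server   = "https://mirror.example.org/core/os/x86_64"
     *      filepath = "bash-5.0-1-x86_64.pkg.tar.zst" */
    static char *build_fileurl(const char *server, const char *filepath)
    {
        size_t len = strlen(server) + strlen(filepath) + 2; /* '/' + '\0' */
        char *url = malloc(len);
        if(url) {
            snprintf(url, len, "%s/%s", server, filepath);
        }
        return url;
    }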
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Signed-off-by: Allan McRae <allan@archlinux.org>
|