Instead of iterating character by character, use memchr() calls to
hopefully speed up the search. A newline is the most likely terminator, so
search for that first, followed by a NUL byte if there was no newline in
the buffer.
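A minimal sketch of the search order, assuming a hypothetical helper (not
the actual patched function):
    #include <stddef.h>
    #include <string.h>

    /* Return a pointer to the end of the current line in buf: the first
     * newline if present, otherwise the first NUL byte, or NULL if
     * neither occurs within len bytes. */
    static const char *find_line_end(const char *buf, size_t len)
    {
        const char *end = memchr(buf, '\n', len);
        if (end == NULL) {
            end = memchr(buf, '\0', len);
        }
        return end;
    }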
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
If you need zero-filled allocations, call CALLOC() instead.
The zero-filling behavior dates from the original definition of these
macros in commit cc754bc6e3be0f3; hopefully our code is now in the shape it
needs to be in to switch this behavior.
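A rough sketch of the resulting distinction (illustrative macro shapes, not
copied verbatim from util.h):
    #include <stdlib.h>

    /* MALLOC no longer zero-fills; it is a plain malloc() plus error handling. */
    #define MALLOC(p, s, action) do { (p) = malloc(s); if((p) == NULL) { action; } } while(0)

    /* Callers that rely on zeroed memory must ask for it explicitly. */
    #define CALLOC(p, l, s, action) do { (p) = calloc(l, s); if((p) == NULL) { action; } } while(0)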
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
These backup-related paths in package extraction are used on relatively
few files during the install process, so bump them off the stack and
into the heap. This removes the artificial PATH_MAX limitation on their
length as well.
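The pattern, sketched with hypothetical names (the real code builds backup
file paths during extraction):
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Build "<root><filename>" on the heap rather than in a char[PATH_MAX],
     * so the result is not artificially capped. Caller frees the result;
     * returns NULL on allocation failure. */
    static char *build_backup_path(const char *root, const char *filename)
    {
        size_t len = strlen(root) + strlen(filename) + 1;
        char *path = malloc(len);
        if (path == NULL) {
            return NULL;
        }
        snprintf(path, len, "%s%s", root, filename);
        return path;
    }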
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
This moves the repetitive (and highly unlikely) logging work to a
single location.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
This will always be a 64-bit signed integer rather than the variable-width
time_t type. Dates beyond 2038 should be fully supported in the library; the
frontend still lags behind because 32-bit platforms provide no localtime64()
or equivalent function to convert from an epoch value to a broken down time
structure.
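A sketch of what this means for a 32-bit frontend (hypothetical code, not
from the patch):
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    /* The library now always hands back a 64-bit epoch value. */
    static void print_date(int64_t installdate)
    {
        /* On a 32-bit platform time_t may be only 32 bits wide, so this
         * cast truncates dates past 2038; there is no standard
         * localtime64() to fall back on. */
        time_t t = (time_t)installdate;
        struct tm *tm = localtime(&t);
        if (tm != NULL) {
            char buf[32];
            strftime(buf, sizeof(buf), "%Y-%m-%d", tm);
            printf("%s\n", buf);
        }
    }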
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
This prepares the function to handle values past year 2038. The return type
is still limited to 32 bits on 32-bit systems; this will be adjusted in a
future patch.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
We have a few incomplete translations, but these should be addressable
before the 4.0.1 maint release that is surely not that far in the
future.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
In prep for the 4.0.0 release.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
Similar to what we did in edd9ed6a, disconnect our stack-allocated error
buffer from the curl handle. Just as an FTP
connection might have some network chatter on teardown causing the
progress callback to be triggered, we might also hit an error condition
that causes curl to write to our (now out of scope) error buffer.
I'm unable to reproduce FS#26327, but I have a suspicion that this
should fix it.
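The gist of the fix, sketched outside the real dload.c code:
    #include <curl/curl.h>

    static void fetch(CURL *handle, const char *url)
    {
        char error_buf[CURL_ERROR_SIZE] = "";
        CURLcode ret;

        curl_easy_setopt(handle, CURLOPT_URL, url);
        curl_easy_setopt(handle, CURLOPT_ERRORBUFFER, error_buf);

        ret = curl_easy_perform(handle);
        if (ret != CURLE_OK) {
            /* use error_buf while it is still in scope */
        }

        /* Disconnect the stack buffer from the handle before returning so
         * later chatter on the handle cannot write into dead stack space. */
        curl_easy_setopt(handle, CURLOPT_ERRORBUFFER, NULL);
    }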
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
This was a bad oversight on my part, pointed out by Jakob. Whoops.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
Another place where we were doing the dirty work by hand.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
Expose the current static get_pkgpath() function internally to the rest
of the library as _alpm_local_db_pkgpath(). This allows use of this
convenience function in add.c and remove.c when forming the path to the
scriptlet location.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
Add an is_archive parameter to reduce the amount of black magic going
on. Rework to use fewer PATH_MAX-sized local variables, and simplify
some of the logic where appropriate in both this function and in the
callers where duplicate calls can be replaced by some conditional
parameter code.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
This is a poor place for it, and it will likely move again in the
future, but it's better to have it here than as a static variable.
Initialization of this variable is no longer necessary as it's zeroed on
creation of the payload struct.
Signed-off-by: Dave Reisner <dreisner@archlinux.org>
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
This was done to squash a memory leak in the sync database download
code. When we downloaded a database and then reused the payload struct,
we could find ourselves calling get_fullpath() for the signatures and
overwriting non-freed values we had left over from the database
download.
Refactor the payload_free function into a payload_reset function that does
NOT free the payload itself, so we can reuse payload structs. This also
allows us to move the payload to the stack in some call paths, relieving us
of the need to allocate space.
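The shape of the refactor, sketched with placeholder field names:
    #include <stdlib.h>
    #include <string.h>

    struct payload {
        char *remote_name;      /* placeholder members for illustration */
        char *tempfile_name;
        char *destfile_name;
    };

    /* Free the members but not the struct itself, then zero it, so the
     * same (possibly stack-allocated) payload can be reused. */
    static void payload_reset(struct payload *payload)
    {
        free(payload->remote_name);
        free(payload->tempfile_name);
        free(payload->destfile_name);
        memset(payload, 0, sizeof(*payload));
    }

    /* The old free operation then becomes reset plus free. */
    static void payload_free(struct payload *payload)
    {
        if (payload != NULL) {
            payload_reset(payload);
            free(payload);
        }
    }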
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
Initialize the download code lazily rather than on every handle creation. There are
several frontend operations (search, info, etc.) that never need the
download code, so spending time initializing this every single time is a
bit silly. This makes it a bit more like the GPGME code init path.
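A sketch of the lazy-init pattern (handle layout and function name are
assumptions, not the real code):
    #include <curl/curl.h>

    struct handle {
        CURL *curl;   /* stays NULL until a download actually needs it */
    };

    static CURL *get_curl_handle(struct handle *h)
    {
        if (h->curl == NULL) {
            /* Pay the initialization cost only on first use, not for
             * search/info operations that never download anything. */
            curl_global_init(CURL_GLOBAL_SSL);
            h->curl = curl_easy_init();
        }
        return h->curl;
    }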
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
Rather than free them right away, keep the list on the transaction as
we already do with the add and remove lists. This is necessary because the
frontend may still hold pointers to these packages, and freeing them early
would break the contract stated in the alpm_add_pkg() documentation of only
freeing packages at the end of a transaction.
This fixes an issue found when refactoring the package list display
code.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
This is definitely not in the normal hot path, so we can afford to do
some temporary heap allocation here.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
No wonder these were slower than expected. We were only reading 4
(32-bit) or 8 (64-bit) bytes at a time and feeding them to the hash
functions. Define a buffer size constant and use it correctly so we feed
8K at a time into the hashing algorithm.
This cut one larger `-Sw --noconfirm` operation (with nothing to actually
download, so only the integrity checks were being timed) from 3.3s to 1.7s.
This has been broken since the original commit eba521913d6 introducing
OpenSSL usage for crypto hash functions. Boy do I feel stupid.
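The bug boils down to taking sizeof of a pointer instead of the buffer it
points to; a simplified before/after:
    #include <stdio.h>
    #include <stdlib.h>

    #define BUFFER_SIZE 8192

    static size_t feed_file(FILE *f)
    {
        char *buf = malloc(BUFFER_SIZE);
        size_t n, total = 0;
        if (buf == NULL) {
            return 0;
        }
        /* Broken: sizeof(buf) is the size of the pointer (4 or 8 bytes),
         * so each fread handed the hash function only a few bytes:
         *     while((n = fread(buf, 1, sizeof(buf), f)) > 0) { ... }
         * Fixed: use the real buffer size. */
        while ((n = fread(buf, 1, BUFFER_SIZE, f)) > 0) {
            /* hash_update(ctx, buf, n); */
            total += n;
        }
        free(buf);
        return total;
    }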
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
In the sync code, we explicitly allocated a string for this field, while
in the dload code itself it was filled in with a pointer to another
string. This led to a memory leak in the sync download case.
Make remote_name non-const and always explicitly allocate it. This patch
also switches several codepaths to malloc + snprintf (rather than calloc)
and eliminates the only use of PATH_MAX in the download code.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
The commit being reverted was made with the intent of displaying
"correctly" sorted package lists to users. Here are some reasons I think
this is incorrect:
* It is done in the wrong place. If a frontend application wants to show
a different order of packages dependent on locale, it should do that
on its own.
* Even if one wants a locale-specific order, almost all package names
are ASCII and language-agnostic, so this different comparison makes
little sense and may serve only to confuse people.
* _alpm_pkg_cmp was unlike any other comparator function. None of the
others depended on anything but the content of the structs being compared;
they only used strcmp() or other basic comparison operators (see the
sketch below).
This reverts commit 3e4d2c3aa65416487939148828afb385de2ee146.
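Sketch of the locale-independent comparator style used everywhere else
(struct layout is illustrative):
    #include <string.h>

    struct pkg {
        char *name;   /* stand-in for the real package struct */
    };

    /* Depends only on the contents of the structs being compared. */
    static int pkg_cmp(const void *p1, const void *p2)
    {
        const struct pkg *pkg1 = p1;
        const struct pkg *pkg2 = p2;
        return strcmp(pkg1->name, pkg2->name);
    }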
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
There was only one simple-to-handle case where we left a field
uninitialized; set it to NULL and use malloc() instead.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
This saves a lot of unnecessary work since we don't need any of the
other fields in the stat struct.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
In every case where we were calling calloc, the struct we allocated (or
the memory to be used) is fully specified later in the function.
For alpm_list_t allocations, we always set all of data, next, and prev.
For list copying and transforming to an array, we always copy the entire
data element, so no need to zero it first.
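For example, a list-node allocation of this kind assigns every member, so
malloc() is enough (a sketch, not the actual alpm_list code):
    #include <stdlib.h>

    typedef struct list_node {
        void *data;
        struct list_node *prev;
        struct list_node *next;
    } list_node;

    static list_node *node_create(void *data)
    {
        list_node *node = malloc(sizeof(*node));
        if (node == NULL) {
            return NULL;
        }
        /* Every field is assigned, so calloc()'s zero-fill would be wasted. */
        node->data = data;
        node->prev = node;
        node->next = NULL;
        return node;
    }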
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
I'm really good at breaking this on a regular basis. If only we had some
sort of automated testing for this...
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
This is just a wrapper function; the real function we call logs an
almost identical line.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
In the sync download code, we added an early check in 6731d0a9407e for
sync download server existence so we wouldn't show the same error over
and over for each file to be downloaded. Move this check into the
download block so we only run it if there are actually files that need
to be downloaded for this repository.
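A sketch of the reordering (types and names are illustrative):
    struct file_list;                 /* opaque: queued downloads */
    struct repo { void *servers; };   /* placeholder repository struct */

    /* Only complain about a missing server list when this repository
     * actually has files queued for download. */
    static int download_queued(struct repo *repo, struct file_list *files)
    {
        if (files == NULL) {
            return 0;    /* nothing to fetch; no need to check servers */
        }
        if (repo->servers == NULL) {
            return -1;   /* report "no servers configured" once, here */
        }
        /* ... download each queued file from repo->servers ... */
        return 0;
    }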
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
When we switched to a file object and not just a simple string, we missed an
update along the way here in target-target conflicts. This patch looks
large, but it really comes down to one errant (char *) cast, which has now
been reworked to explicitly point to the alpm_file_t object. The rest is
simply code cleanup.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
A few parameters were outdated or wrongly named, and a few things that
Doxygen wasn't able to resolve on its own were explicitly linked.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
We returned the right error code but never set the flags accordingly.
Also, now that we can bail early, ensure we set the error code.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
This adds calls to gpgme_op_import_result(), which we were not looking at
before, to ensure the key was actually imported. Additionally, we do some
preemptive checks to ensure the keyring is even writable if we are going
to prompt the user to add things to it.
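Roughly, the added check looks like this (a sketch with error handling
trimmed, not the exact libalpm code):
    #include <gpgme.h>

    /* After asking GPGME to import a key, consult the import result
     * instead of assuming success. */
    static int import_key(gpgme_ctx_t ctx, gpgme_key_t keys[])
    {
        gpgme_import_result_t result;

        if (gpg_err_code(gpgme_op_import_keys(ctx, keys)) != GPG_ERR_NO_ERROR) {
            return -1;
        }
        result = gpgme_op_import_result(ctx);
        if (result == NULL || result->imported != 1) {
            return -1;   /* nothing was actually imported */
        }
        return 0;
    }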
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
This also fixes a segfault found by Dave when key_search is
unsuccessful; the key_search return code documentation has also been
updated to reflect reality.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
We've had a bit of churn since the last time this was done.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
Pick up any updates before I push new source messages out to the
service.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
Because we aren't using gpgv and a dedicated keyring that is known to be
all safe, we should honor the disabled flag being set on a given key in the
keyring and treat that key as unusable. This prevents a key the user does
not want used from being reimported; instead of deleting such a key, one
should mark it as disabled.
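A sketch of honoring the flag via GPGME (not the exact libalpm code):
    #include <gpgme.h>

    /* Treat a key the user has disabled in their keyring as unusable. */
    static int key_is_usable(gpgme_ctx_t ctx, const char *fpr)
    {
        gpgme_key_t key;
        int usable;

        if (gpg_err_code(gpgme_get_key(ctx, fpr, &key, 0)) != GPG_ERR_NO_ERROR) {
            return 0;
        }
        usable = !key->disabled;
        gpgme_key_unref(key);
        return usable;
    }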
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
Add two new static functions, key_search() and key_import(), to our
growing list of signing code.
If we come across a key we do not have, attempt to look it up remotely
and ask the user if they wish to import said key. If they do, flag the
validation process as a potential 'retry', meaning it might succeed the
next time it is run.
These functions depend on having a 'keyserver hkp://foo.example.com' line
in your gpg.conf file in your gnupg home directory.
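A compressed sketch of the lookup-then-import flow (not the real
key_search()/key_import(); error handling trimmed):
    #include <gpgme.h>

    static int lookup_and_import(gpgme_ctx_t ctx, const char *fpr)
    {
        gpgme_key_t key = NULL;
        gpgme_key_t keys[2] = { NULL, NULL };
        int ret = -1;

        /* Search the configured keyserver instead of the local keyring. */
        gpgme_set_keylist_mode(ctx, GPGME_KEYLIST_MODE_EXTERN);
        if (gpgme_op_keylist_start(ctx, fpr, 0) == GPG_ERR_NO_ERROR &&
                gpgme_op_keylist_next(ctx, &key) == GPG_ERR_NO_ERROR) {
            gpgme_op_keylist_end(ctx);
            /* ... prompt the user; on "yes", import and flag a retry ... */
            keys[0] = key;
            if (gpgme_op_import_keys(ctx, keys) == GPG_ERR_NO_ERROR) {
                ret = 0;
            }
            gpgme_key_unref(key);
        } else {
            gpgme_op_keylist_end(ctx);   /* key not found remotely */
        }
        return ret;
    }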
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
This moves the result processing out of the validation check loop itself
and into a new loop. Errors will be presented to the user one-by-one
after we fully complete the validation loop, so they no longer overlap
the progress bar.
Unlike the database validation, we may have several errors to process in
sequence here, so we use a function-scoped struct to track all the
necessary information between seeing an error and asking the user about
it.
The older prompt_to_delete() callback logic is still kept, but only for
checksum failures. It is debatable whether we should do this at all or
just delegate said actions to the user.
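The pattern, sketched with placeholder types:
    #include <stdlib.h>

    struct pkg;                      /* opaque package handle */

    struct validation_error {
        struct pkg *pkg;             /* which package failed */
        int error;                   /* and why */
    };

    static void check_validity(struct pkg **pkgs, size_t count)
    {
        struct validation_error *errors = calloc(count, sizeof(*errors));
        size_t i, nerrors = 0;

        if (errors == NULL) {
            return;
        }
        /* Pass 1: validate everything while the progress bar runs,
         * remembering failures instead of reporting them immediately. */
        for (i = 0; i < count; i++) {
            int error = 0; /* = validate(pkgs[i]); */
            if (error != 0) {
                errors[nerrors].pkg = pkgs[i];
                errors[nerrors].error = error;
                nerrors++;
            }
        }
        /* Pass 2: the bar is done, so prompting no longer overlaps it. */
        for (i = 0; i < nerrors; i++) {
            /* report/prompt for errors[i].pkg, errors[i].error */
        }
        free(errors);
    }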
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
This is for eventual use by the PGP key import code. Breaking this into
a separate commit now makes the following patches a bit easier to
understand.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
This allows a frontend program to query, at runtime, what the library
supports. This can be useful for sanity checking during config parsing:
requiring a downloader or disallowing signature settings, for example.
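A sketch of such a frontend check (assuming capability bit names like
ALPM_CAPABILITY_DOWNLOADER and ALPM_CAPABILITY_SIGNATURES from alpm.h):
    #include <alpm.h>
    #include <stdio.h>

    static int check_caps(int need_signatures)
    {
        int caps = alpm_capabilities();

        if (!(caps & ALPM_CAPABILITY_DOWNLOADER)) {
            fprintf(stderr, "warning: no built-in downloader available\n");
        }
        if (need_signatures && !(caps & ALPM_CAPABILITY_SIGNATURES)) {
            fprintf(stderr, "error: signature support not compiled in\n");
            return -1;
        }
        return 0;
    }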
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
This takes the library's hidden default out of the equation: hidden in
the sense that we can't even find out what it is until we create a
handle. This is a chicken-and-egg problem where we have probably already
parsed the config, so it is hard to get the bitmask value right.
Move it to the frontend so the caller can do whatever the heck they
want. This also exposes a shortcoming where the frontend doesn't know if
the library even supports signatures, so we should probably add an
alpm_capabilities() method which exposes things like HAS_DOWNLOADER,
HAS_SIGNATURES, etc.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
This allows us to do all delta verification up front, followed by
whatever needs to be done with any found errors. In this case, we call
prompt_to_delete() for each error.
Add back the missing EVENT(ALPM_EVENT_DELTA_INTEGRITY_DONE) that
accidentally got removed in commit 062c391919e93f1d6.
Remove use of *data; we never even look at the contents of this array for
the error code we were returning, and this would be much better handled by
one callback per error anyway, or at least by some strongly typed return
values.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
If siglist->results wasn't a NULL pointer, we would try to free it
anyway, even if siglist->count was zero. Only attempt to free this
pointer if we had results and the pointer is valid.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
This one can be overwhelming when reading debug output from a very large
package. We already have the output of each extracted file, so we can
probably do without this in 99.9% of cases.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
This adds two new static functions, check_validity() and load_packages(),
to sync.c which are simply code fragments pulled out of our
do-everything sync commit code.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
In reality, no retrying happens as of now because we don't yet do any
importing of keys or changing of the keyring, but the code is set up so we
can drop this into our new _alpm_process_siglist() function. Wire
up the basics to the sync database validation code, so we see something
like the following:
$ pacman -Ss unknowntrust
error: core: signature from "Dan McGee <dpmcgee@gmail.com>" is unknown trust
error: core: signature from "Dan McGee <dpmcgee@gmail.com>" is unknown trust
error: database 'core' is not valid (invalid or corrupted database (PGP signature))
$ pacman -Ss missingsig
error: core: missing required signature
error: core: missing required signature
error: database 'core' is not valid (invalid or corrupted database (PGP signature))
Yes, there is some double output, but this should be fixable in the
future.
Signed-off-by: Dan McGee <dan@archlinux.org>
|
|
This will make its way up the call chain eventually to allow trusting
and importing of keys as necessary.
Signed-off-by: Dan McGee <dan@archlinux.org>
|