* Optimizations and build speedup
With this commit I have changed several components to be more efficient,
whether that means fewer generated llvm-lines or fewer `clone()` calls.
### Config
- Re-ordered the `make_config` macro to be more efficient
- Created a custom Deserializer for `ConfigBuilder`, which needs less code and is more efficient (a sketch follows this list)
- Use structs for the `prepare_json` function instead of generating a custom JSON object.
This generates less code and is more efficient.
- Updated the `get_support_string` function to handle the masking differently.
This generates less code and also allowed removing some sub-macro calls
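For illustration, a minimal hand-written Deserializer of this kind might look as follows; the single-field `ConfigBuilder` here is a simplified stand-in for the real struct, not the actual code:

```rust
use serde::de::{Deserializer, IgnoredAny, MapAccess, Visitor};

// Simplified stand-in for the real builder; only one field for brevity.
struct ConfigBuilder {
    domain: Option<String>,
}

impl<'de> serde::Deserialize<'de> for ConfigBuilder {
    fn deserialize<D: Deserializer<'de>>(d: D) -> Result<Self, D::Error> {
        struct V;
        impl<'de> Visitor<'de> for V {
            type Value = ConfigBuilder;

            fn expecting(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
                f.write_str("a config map")
            }

            fn visit_map<A: MapAccess<'de>>(self, mut map: A) -> Result<Self::Value, A::Error> {
                let mut cfg = ConfigBuilder { domain: None };
                while let Some(key) = map.next_key::<String>()? {
                    match key.as_str() {
                        "domain" => cfg.domain = Some(map.next_value()?),
                        // Unknown keys are skipped instead of failing.
                        _ => drop(map.next_value::<IgnoredAny>()?),
                    }
                }
                Ok(cfg)
            }
        }
        d.deserialize_map(V)
    }
}
```

A single visitor like this replaces the per-field code a derive would generate.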
### Error
- Added an extra `new()` call to prevent duplicate `String`s in the generated macro code.
This generates fewer llvm-lines and seems to be more efficient.
- Created a custom Serializer for `ApiError` and `CompactApiError`.
This makes those structs smaller in size, which is better for memory, and also generates fewer llvm-lines.
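A minimal sketch of the hand-written Serializer approach; the two-field `ApiError` below is illustrative, not the actual struct:

```rust
use serde::ser::{Serialize, SerializeStruct, Serializer};

// Illustrative two-field error; the real structs carry more data.
struct ApiError {
    message: String,
    code: u16,
}

impl Serialize for ApiError {
    fn serialize<S: Serializer>(&self, s: S) -> Result<S::Ok, S::Error> {
        // Writing the fields by hand avoids the derive-generated code paths.
        let mut state = s.serialize_struct("ApiError", 2)?;
        state.serialize_field("message", &self.message)?;
        state.serialize_field("code", &self.code)?;
        state.end()
    }
}
```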
### General
- Removed `once_cell` and replaced it with Rust's std `LazyLock` (see the sketch after this list)
- Added and fixed some Clippy lints, which for example reduced `clone()` calls.
- Updated build profiles for more efficiency
Also added a new profile specifically for CI, which should decrease build-check times
- Updated several GitHub Workflows for better security and to use the new `ci` build profile
- Updated to Rust v1.90.0, which uses the new `rust-lld` linker and should help speed up builds
- Updated the Cargo.toml for all crates to better use the `workspace` variables
- Added a `typos` Workflow and Pre-Commit hook, which should help in detecting spelling errors.
Also fixed a few found by it.
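For reference, the `once_cell` migration boils down to swapping `once_cell::sync::Lazy` for `std::sync::LazyLock` (stable since Rust 1.80); the static below is a made-up example:

```rust
use std::collections::HashMap;
use std::sync::LazyLock;

// Initialized lazily on first access, exactly like once_cell's Lazy.
static DEFAULTS: LazyLock<HashMap<&'static str, &'static str>> =
    LazyLock::new(|| HashMap::from([("domain", "http://localhost"), ("port", "8000")]));

fn main() {
    println!("default port: {}", DEFAULTS["port"]);
}
```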
Signed-off-by: BlackDex <black.dex@gmail.com>
* Fix release profile
Signed-off-by: BlackDex <black.dex@gmail.com>
* Update typos and remove mimalloc check from pre-commit checks
Signed-off-by: BlackDex <black.dex@gmail.com>
* Misc fixes and updated typos
Signed-off-by: BlackDex <black.dex@gmail.com>
* Update crates and workflows
Signed-off-by: BlackDex <black.dex@gmail.com>
* Fix formatting and pre-commit
Signed-off-by: BlackDex <black.dex@gmail.com>
* Update to Rust v1.91 and update crates
Signed-off-by: BlackDex <black.dex@gmail.com>
* Update web-vault to v2025.10.1 and xx to v1.8.0
Signed-off-by: BlackDex <black.dex@gmail.com>
---------
Signed-off-by: BlackDex <black.dex@gmail.com>
* Use Diesel's MultiConnection Derive
With this PR we remove almost all custom macros used to create the multiple database type code. This is now handled by Diesel itself (a sketch follows below).
This removes the need for the following functions/macros:
- `db_object!`
- `::to_db`
- `.from_db()`
It is also possible to just use one schema instead of one per database type.
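The core of this is Diesel's `MultiConnection` derive (available since Diesel 2.1), which generates the dispatch code the custom macros used to provide. A minimal sketch with an illustrative enum name:

```rust
// Diesel generates the Connection impl and per-backend query dispatch
// for this enum, so one model struct and one schema cover all backends.
#[derive(diesel::MultiConnection)]
pub enum AnyDbConnection {
    Postgresql(diesel::PgConnection),
    Mysql(diesel::MysqlConnection),
    Sqlite(diesel::SqliteConnection),
}
```

Queries are then written once against this enum, which is what removes the need for `db_object!` and the `::to_db`/`.from_db()` conversions.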
Also done:
- Refactored the SQLite backup function
- Some formatting of queries so every call is on a separate line, which looks a bit better
- Declare `conn` as mut inside each `db_run!` instead of having to declare it as `mut` in functions or calls
- Added an `ACTIVE_DB_TYPE` static which holds the currently active database type
- Removed the `diesel_logger` crate and use Diesel's `set_default_instrumentation()` instead (a sketch follows below)
If you want to debug queries you can now simply change the log level of `vaultwarden::db::query_logger`
- Use PostgreSQL v17 in the Alpine images to match the Debian Trixie version
- Optimized the Workflows since `diesel_logger` isn't needed anymore
As an extra plus, this lowers the compile time and binary size too.
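A sketch of the `set_default_instrumentation()` wiring, assuming the `log` crate; the log target follows the commit message, the rest is illustrative:

```rust
use diesel::connection::{set_default_instrumentation, Instrumentation, InstrumentationEvent};

fn query_logger() -> Option<Box<dyn Instrumentation>> {
    // Closures taking an InstrumentationEvent implement Instrumentation.
    Some(Box::new(|event: InstrumentationEvent<'_>| {
        if let InstrumentationEvent::StartQuery { query, .. } = event {
            // Only visible when this target's log level is set to debug.
            log::debug!(target: "vaultwarden::db::query_logger", "{query}");
        }
    }))
}

fn main() {
    // Install globally; connections created afterwards pick this up.
    let _ = set_default_instrumentation(query_logger);
}
```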
Signed-off-by: BlackDex <black.dex@gmail.com>
* Adjust query_logger and some other small items
Signed-off-by: BlackDex <black.dex@gmail.com>
* Remove macro, replaced with a function
Signed-off-by: BlackDex <black.dex@gmail.com>
* Implement custom connection manager
Signed-off-by: BlackDex <black.dex@gmail.com>
* Updated some crates to keep them up to date
Signed-off-by: BlackDex <black.dex@gmail.com>
* Small adjustment
Signed-off-by: BlackDex <black.dex@gmail.com>
* Crate updates
Signed-off-by: BlackDex <black.dex@gmail.com>
* Update crates
Signed-off-by: BlackDex <black.dex@gmail.com>
---------
Signed-off-by: BlackDex <black.dex@gmail.com>
* Abstract file access through Apache OpenDAL
* Add AWS S3 support via OpenDAL for data files (sketch below)
* PR improvements
* Additional PR improvements
* Config setting comments for local/remote data locations
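For illustration, roughly how data-file access looks through OpenDAL's `Operator` API; the bucket, region, and paths are made up, and builder details may differ slightly between OpenDAL versions:

```rust
use opendal::{services::S3, Operator};

#[tokio::main]
async fn main() -> opendal::Result<()> {
    // The same Operator API also fronts the local filesystem (services::Fs),
    // which is what makes the storage location configurable.
    let builder = S3::default().bucket("vaultwarden-data").region("us-east-1");
    let op = Operator::new(builder)?.finish();

    op.write("attachments/example.bin", vec![1u8, 2, 3]).await?;
    let data = op.read("attachments/example.bin").await?;
    println!("read {} bytes", data.len());
    Ok(())
}
```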
- Updated web-vault to v2025.5.0
- Updated Rust to v1.87.0
- Updated all the crates
- Replaced yubico with yubico_ng
- Fixed several new (nightly) clippy lints
Signed-off-by: BlackDex <black.dex@gmail.com>
* rename membership
rename UserOrganization to Membership to clarify the relation
and prevent confusion about whether something refers to a member(ship) or a user
* use newtype pattern
* implement custom derive macro IdFromParam (sketch after this list)
* add UuidFromParam macro for UUIDs
* add macros to Docker build
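A sketch of what the newtype plus `FromParam` combination amounts to, written out by hand; `MembershipId` and its validation rule are illustrative:

```rust
// Newtype: the compiler now rejects passing a raw String or another id
// type where a MembershipId is expected.
#[derive(Clone, Debug, PartialEq)]
pub struct MembershipId(String);

// Roughly what a derived IdFromParam/UuidFromParam could expand to.
impl<'r> rocket::request::FromParam<'r> for MembershipId {
    type Error = &'static str;

    fn from_param(param: &'r str) -> Result<Self, Self::Error> {
        if param.chars().all(|c| c.is_ascii_alphanumeric() || c == '-') {
            Ok(Self(param.to_string()))
        } else {
            Err("invalid id")
        }
    }
}
```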
Co-authored-by: dfunkt <dfunkt@users.noreply.github.com>
---------
Co-authored-by: dfunkt <dfunkt@users.noreply.github.com>
* Change API inputs/outputs and structs to camelCase (serde sketch after this list)
* Fix fields and password history
* Use convert_json_key_lcase_first
* Make sends lowercase
* Update admin and templates
* Update org revoke
* Fix sends expecting size to be a string on mobile
* Convert two-factor providers to string
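Much of this conversion is carried by serde's `rename_all` attribute; a minimal illustrative struct:

```rust
use serde::{Deserialize, Serialize};

// Fields stay snake_case in Rust but read/write camelCase JSON keys.
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct FolderData {
    folder_id: Option<String>, // "folderId" on the wire
    revision_date: String,     // "revisionDate" on the wire
}

fn main() {
    let f = FolderData { folder_id: None, revision_date: "2023-01-01".into() };
    println!("{}", serde_json::to_string(&f).unwrap());
}
```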
As mentioned in #3111, using a very large vault causes some issues,
mainly because of a SQLite limit, but it could also cause issues on
MariaDB/MySQL or PostgreSQL. It also uses a lot of memory and many
memory allocations.
This PR solves this by removing the need for all the cipher_uuids just
to gather the correct attachments.
It will use the user_uuid and org_uuids to get all attachments linked
to both, whether the user has access to them or not. This isn't an
issue, since the matching is done per cipher and the attachment data is
only returned if there is a matching cipher the user has access to.
I also modified some code to be able to use `::with_capacity(n)` where
possible. This prevents re-allocations when the `Vec` grows in size,
which will happen a lot if there are a lot of ciphers.
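The `with_capacity` point in a nutshell, with stand-in data rather than the actual cipher types:

```rust
fn main() {
    let ciphers: Vec<u32> = (0..10_000).collect();
    // The final length is known up front, so allocate once instead of
    // letting push() grow (re-allocate and copy) the Vec repeatedly.
    let mut out = Vec::with_capacity(ciphers.len());
    for c in &ciphers {
        out.push(c * 2);
    }
    assert_eq!(out.len(), ciphers.len());
}
```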
According to my tests measuring the time it takes to sync, it seems to
have lowered the duration a bit more.
Fixes #3111
Improved sync speed by resolving the N+1 query issues.
Solves #1402 and #1453
With this change there is just one query done to retrieve all the
important data, and the matching is done in-code/in-memory.
With a very large database, the sync time went down by about a factor of three.
Also updated misc crates and GitHub Actions versions.
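Conceptually, the N+1 fix replaces per-cipher queries with one bulk query plus in-memory grouping; a sketch with stand-in types:

```rust
use std::collections::HashMap;

pub struct Attachment {
    pub cipher_uuid: String,
    pub file_name: String,
}

// One pass groups the single bulk query's rows by cipher id; each cipher's
// attachments are then a HashMap lookup instead of another database query.
pub fn group_by_cipher(attachments: Vec<Attachment>) -> HashMap<String, Vec<Attachment>> {
    let mut map: HashMap<String, Vec<Attachment>> = HashMap::new();
    for a in attachments {
        map.entry(a.cipher_uuid.clone()).or_default().push(a);
    }
    map
}
```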
This is a rather large PR which updates the async branch to have all the
database methods as an async fn.
Some iter/map logic needed to be changed to a stream::iter().then(), but
besides that most changes were just adding async/await where needed.
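The iter/map change in miniature, assuming the `futures` and `tokio` crates:

```rust
use futures::stream::{self, StreamExt};

async fn lookup(x: u32) -> u32 {
    // Stand-in for an async database call.
    x * 2
}

#[tokio::main]
async fn main() {
    // `Iterator::map` closures cannot .await, so the iterator becomes a
    // Stream and `then` awaits the async fn for each item.
    let results: Vec<u32> = stream::iter(1u32..=3).then(lookup).collect().await;
    assert_eq!(results, vec![2, 4, 6]);
}
```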
When using MariaDB v10.5+, Foreign-Key errors were popping up because of
some changes in that version. To mitigate this on MariaDB and other
MySQL forks, those errors are now caught, and instead of a replace_into
an update will happen. I have tested this as thoroughly as possible with
MariaDB 10.5, 10.4, 10.3 and the default MySQL on Ubuntu Focal, and
tested it again using SQLite; all seems to be OK on all tables.
Resolves #1081, resolves #1065, resolves #1050
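A sketch of the catch-and-update pattern described above; the table, struct, and `save` helper are illustrative, not the actual Vaultwarden code:

```rust
use diesel::prelude::*;
use diesel::result::{DatabaseErrorKind, Error};

diesel::table! {
    users (uuid) {
        uuid -> Text,
        name -> Text,
    }
}

#[derive(Insertable, AsChangeset)]
#[diesel(table_name = users)]
struct UserRow<'a> {
    uuid: &'a str,
    name: &'a str,
}

// REPLACE INTO does DELETE + INSERT under the hood, which can trip
// foreign-key checks on MariaDB 10.5+; fall back to a plain UPDATE.
fn save(conn: &mut MysqlConnection, user: &UserRow<'_>) -> QueryResult<usize> {
    match diesel::replace_into(users::table).values(user).execute(conn) {
        Err(Error::DatabaseError(DatabaseErrorKind::ForeignKeyViolation, _)) => {
            diesel::update(users::table.find(user.uuid)).set(user).execute(conn)
        }
        result => result,
    }
}
```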
Diesel requires the following changes:
- Separate connection and pool types per connection, the generate_connections! macro generates an enum with a variant per db type
- Separate migrations and schemas, these were always imported as one type depending on db feature, now they are all imported under different module names
- Separate model objects per connection, the db_object! macro generates one object for each connection with the diesel macros, a generic object, and methods to convert between the connection-specific and the generic ones
- Separate connection queries, the db_run! macro allows writing either a single query that gets compiled for all databases, or database-specific variants
PostgreSQL updates/inserts ignored None/null values.
This is nice for new entries, but not for updates.
Added a derive option to always write these none/null values for
`Option<>` fields.
This solves issue #965
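The derive option in question is Diesel's `treat_none_as_null`; a minimal illustration, shown in Diesel 2.x attribute syntax with a made-up table:

```rust
use diesel::prelude::*;

diesel::table! {
    devices (uuid) {
        uuid -> Text,
        push_token -> Nullable<Text>,
    }
}

// Without this option, `None` fields are skipped in the UPDATE; with it,
// they are written as NULL, so clearing a value actually persists.
#[derive(AsChangeset)]
#[diesel(table_name = devices, treat_none_as_null = true)]
struct DeviceChangeset {
    push_token: Option<String>,
}
```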
This includes migrations as well as Dockerfiles for amd64.
The biggest change is that replace_into isn't supported by Diesel for the
PostgreSQL backend, which instead requires the use of on_conflict. This
unfortunately requires a branch in save() for all of the models currently
using replace_into.
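A sketch of the on_conflict branch for PostgreSQL, again with an illustrative schema and struct:

```rust
use diesel::prelude::*;

diesel::table! {
    users (uuid) {
        uuid -> Text,
        name -> Text,
    }
}

#[derive(Insertable, AsChangeset)]
#[diesel(table_name = users)]
struct UserRow<'a> {
    uuid: &'a str,
    name: &'a str,
}

// PostgreSQL has no REPLACE INTO; the equivalent upsert is
// INSERT ... ON CONFLICT (uuid) DO UPDATE.
fn save(conn: &mut PgConnection, user: &UserRow<'_>) -> QueryResult<usize> {
    diesel::insert_into(users::table)
        .values(user)
        .on_conflict(users::uuid)
        .do_update()
        .set(user)
        .execute(conn)
}
```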