36 changed files with 3172 additions and 191 deletions
@@ -0,0 +1,49 @@
name: Build Lambda Package

on: workflow_dispatch

jobs:
  build:
    runs-on: ubuntu-24.04-arm

    container:
      image: public.ecr.aws/codebuild/amazonlinux2-aarch64-standard:3.0

    steps:
      - name: Checkout source code
        uses: actions/checkout@v4

      - name: Install development packages
        run: sudo yum install -y krb5-devel openldap-devel

      - name: Install Rust toolchain
        uses: dtolnay/rust-toolchain@stable

      - name: Install Cargo Lambda
        uses: jaxxstorm/action-install-gh-release@v1.9.0
        with:
          repo: cargo-lambda/cargo-lambda
          platform: linux
          arch: aarch64

      - name: Setup Rust cache
        uses: Swatinem/rust-cache@v2

      - name: Build with Cargo
        run: cargo lambda build --verbose

      - name: Copy libpq and its dependencies
        run: cp /lib64/{libcrypt.so.2,liblber-2.4.so.2,libldap_r-2.4.so.2,libpq.so.5,libsasl2.so.3} target/lambda/vaultwarden/

      # This ensures the web-vault startup checks pass; the web-vault itself
      # is instead served statically from an S3 Bucket
      - name: Create placeholder web-vault/index.html
        run: |-
          mkdir target/lambda/vaultwarden/web-vault
          echo "<html><body><h1>Web Vault Placeholder</h1></body></html>" > target/lambda/vaultwarden/web-vault/index.html

      - name: Archive function package
        uses: actions/upload-artifact@v4
        with:
          name: vaultwarden-lambda
          path: target/lambda/vaultwarden/*
File diff suppressed because it is too large
@@ -0,0 +1,7 @@
[build]
features = ["aws"]
release = true
arm64 = true

[build.compiler]
type = "cargo"
@@ -0,0 +1,3 @@
.aws-sam
vaultwarden-lambda.zip
web-vault
@@ -0,0 +1,53 @@
# AWS Serverless Deployment Instructions

## Architecture
```
CloudFront CDN
├─ API Lambda Function
│  ├─ Data S3 Bucket
│  ├─ Aurora DSQL Database
│  └─ Amazon Simple Email Service (SES)
└─ Web-vault static assets S3 Bucket
```

## A Note On AWS Accounts and Security
It is common to host multiple services in a single AWS account, but it is easy, and costs nothing extra, to separate workloads into their own accounts. Doing so makes it easier to contain security issues and to monitor costs. For example, AWS Identity and Access Management (IAM) enforces stricter controls on cross-account access than on within-account access, making it harder for an attack to hop from workload to workload when the workloads live in separate accounts.

Given the confidential nature of data stored in Vaultwarden, it is *highly* recommended that you create a new, separate AWS account just for Vaultwarden. If you only have one account, investigate creating an [AWS Organization](https://aws.amazon.com/organizations/) to make it easy to create a second account tied to the same billing and account management mechanism, and investigate creating an [AWS IAM Identity Center](https://aws.amazon.com/iam/identity-center/) instance for easy SSO access across your accounts.

## Initial Deployment |
|||
1. Create an AWS account |
|||
1. Install the AWS CLI |
|||
1. Install AWS SAM CLI |
|||
1. Download the vaultwarden-lambda.zip Lambda Function code package (e.g. from POC GHA artifact from run https://github.com/txase/vaultwarden/actions/runs/13315966383) to this directory |
|||
1. Pick a region that supports DSQL to deploy the Vaultwarden application into (must be one of us-east-1 or us-east-2 during DSQL Preview) |
|||
1. Create an Aurora DSQL Cluster in the region using the AWS Console (this will be automated when CloudFormation ships DSQL support at GA) |
|||
1. Setup local AWS configuration to access account and region from CLI |
|||
1. Copy DSQL Cluster ID |
|||
1. Run `./deploy.sh` in this directory |
|||
* Most parameters can be skipped at first, but you must provide the `DSQLClusterID` parameter value. |
|||
1. Note the "Output" values from the deploy command |
|||
* These can also be retrieved later by running `sam list stack-outputs` |
|||
1. Download the latest [web-vault build](https://github.com/dani-garcia/bw_web_builds/releases) and extract it |
|||
1. Sync the web-vault build contents into the WebVaultAssetsBucket: |
|||
* Inside the web-vault build folder run `aws s3 sync . s3://<WebVaultAssetsBucket>`, where `WebVaultAssetsBucket` is a stack output value |
|||
1. You can now navigate to your instance at the location of your `CDNDomain` stack output value |
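
For reference, the whole flow condenses to a shell session along these lines (values in angle brackets come from your own account and stack outputs):

```sh
# Condensed deployment flow
aws configure                 # point the CLI at the Vaultwarden account and region
./deploy.sh                   # sam build + sam deploy --guided; supply the DSQLClusterId parameter
sam list stack-outputs        # recover WebVaultAssetsBucket and CDNDomain later if needed

# Publish the static web-vault assets (run inside the extracted web-vault build)
aws s3 sync . s3://<WebVaultAssetsBucket>
```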

## Custom Domain
1. Create an AWS Certificate Manager (ACM) Certificate for your domain **in the us-east-1 region** (see the sketch after this list)
    * There are many tutorials and automated ways to do this, including the official docs [here](https://docs.aws.amazon.com/acm/latest/userguide/acm-public-certificates.html)
    * It must be in the us-east-1 region because CloudFront only supports certificates from us-east-1
    * Use the RSA 2048 key algorithm
    * Continue to the next step once the certificate is in the *Issued* state
    * Note the certificate's ARN
1. Run `./deploy.sh` again and add the following parameter values:
    * **Domain**: `https://<custom domain>`
    * **ACMCertificateArn**: The ARN of the certificate you created for the domain
1. Create a CNAME record for the custom domain set to the value of the CDNDomain stack output
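
If you prefer the CLI, the certificate request can look like this (the domain is a placeholder, and DNS validation still requires creating the CNAME record that ACM hands back):

```sh
# Request a DNS-validated RSA 2048 certificate in us-east-1 (required by CloudFront).
aws acm request-certificate \
  --region us-east-1 \
  --domain-name vault.example.com \
  --validation-method DNS \
  --key-algorithm RSA_2048
```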

## Email via AWS Simple Email Service (SES)
Email is complicated. These instructions will not attempt to walk you through setting up SES identities for sending email; you can find docs and guides online for how to do this.

In order for Vaultwarden to send emails using SES, you must have an SES Email Address Identity that **does not have a default configuration set**. An identity with a default configuration set breaks the IAM permission model set up for the Vaultwarden API Function.

Once you have an SES Identity for the sending email address, run `./deploy.sh` again and provide the email address in the `SMTPFrom` parameter.
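
For the simplest case, creating the identity from the CLI looks like this (placeholder address; SES then sends a verification link, and the identity must be left without a default configuration set):

```sh
# Create an email address identity in SES; a verification email is sent automatically.
aws sesv2 create-email-identity --email-identity vaultwarden@example.com
```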
@@ -0,0 +1,9 @@
#!/bin/sh -e

echo 'Building template...'

sam build

echo ''

sam deploy --guided
@@ -0,0 +1,12 @@
version = 0.1

[default.global.parameters]
stack_name = "vaultwarden"

[default.deploy.parameters]
resolve_s3 = true
s3_prefix = "vaultwarden"
confirm_changeset = true
capabilities = "CAPABILITY_IAM"
image_repositories = []
disable_rollback = true
@@ -0,0 +1,582 @@
AWSTemplateFormatVersion: '2010-09-09'
Description: AWS CloudFormation template for running Vaultwarden on AWS serverless services.

Parameters:
  Domain:
    Type: String
    Description: >-
      The domain name for the Vaultwarden instance (e.g. https://example.com). If this parameter or the ACMCertificateArn
      parameter is left empty, the Vaultwarden instance can still be reached at the output CDN domain
      (e.g. https://xxxxxxxx.cloudfront.net).
    AllowedPattern: (https://[a-z0-9.-]+|)
    Default: ''
  ACMCertificateArn:
    Type: String
    Description: The ARN of a us-east-1 ACM certificate to use for the domain. Required if the `Domain` parameter is set.
    AllowedPattern: (arn:aws:acm:us-east-1:[0-9]+:certificate/[0-9a-f-]+|)
    Default: ''
  DSQLClusterId:
    Type: String
    Description: The ID of the Aurora DSQL cluster.
    AllowedPattern: '[a-z0-9]+'
  APILogRetention:
    Type: Number
    Description: The number of days to retain the API logs. -1 means to never expire.
    Default: -1
    AllowedValues: [-1, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1096, 1827, 2192, 2557, 2922, 3288, 3653]
  SignupsAllowed:
    Type: String
    Description: Controls whether new users can register.
    Default: 'true'
    AllowedValues: ['true', 'false']
  IconService:
    Type: String
    Description: Allowed icon service sources.
    Default: bitwarden
  AdminToken:
    Type: String
    Description: Token for the admin interface, preferably an Argon2 PHC string. If empty, the admin interface will be disabled.
    Default: ''
  SMTPFrom:
    Type: String
    Description: The email address to send emails from. Email service is disabled if this value is empty.
    Default: ''
  SMTPFromName:
    Type: String
    Description: The name to send emails from.
    Default: Vaultwarden

Mappings:
  IconSource:
    internal:
      CSP: ''
    bitwarden:
      CSP: https://icons.bitwarden.net/
    duckduckgo:
      CSP: https://icons.duckduckgo.com/ip3/
    google:
      CSP: https://www.google.com/s2/favicons https://*.gstatic.com/favicon

Conditions:
  IsDomainAndCertificateSet: !And
    - !Not [!Equals [!Ref Domain, '']]
    - !Not [!Equals [!Ref ACMCertificateArn, '']]
  IsApiLogRetentionNeverExpire: !Equals
    - !Ref APILogRetention
    - -1
  IconSourceIsPredefined: !Or
    - !Equals [!Ref IconService, internal]
    - !Equals [!Ref IconService, bitwarden]
    - !Equals [!Ref IconService, duckduckgo]
    - !Equals [!Ref IconService, google]
  IsAdminTokenEmpty: !Equals
    - !Ref AdminToken
    - ''
  IsEmailEnabled: !Not
    - !Equals
      - !Ref SMTPFrom
      - ''

Resources:
  DataBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - BucketKeyEnabled: true
            ServerSideEncryptionByDefault:
              SSEAlgorithm: aws:kms
      BucketName: !Sub ${AWS::StackName}-${AWS::AccountId}-${AWS::Region}-data
      CorsConfiguration:
        CorsRules:
          - AllowedMethods:
              - GET
              - HEAD
            AllowedOrigins:
              - '*'
      LifecycleConfiguration:
        Rules:
          - AbortIncompleteMultipartUpload:
              DaysAfterInitiation: 2
            ExpiredObjectDeleteMarker: true
            NoncurrentVersionExpiration:
              NoncurrentDays: 30
            Status: Enabled
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      VersioningConfiguration:
        Status: Enabled

  DataBucketEnforceEncryptionAndStorageTier:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref DataBucket
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: DenyUnencryptedObjectUploads
            Effect: Deny
            Principal: '*'
            Action: s3:PutObject
            Resource: !Sub arn:${AWS::Partition}:s3:::${DataBucket}/*
            Condition:
              'Null':
                s3:x-amz-server-side-encryption-aws-kms-key-id: true
          - Sid: DenyUnencryptedTransit
            Effect: Deny
            Principal: '*'
            Action: s3:*
            Resource:
              - !Sub arn:${AWS::Partition}:s3:::${DataBucket}
              - !Sub arn:${AWS::Partition}:s3:::${DataBucket}/*
            Condition:
              Bool:
                aws:SecureTransport: false
          - Sid: DenyNonIntelligentTieringStorageClass
            Effect: Deny
            Principal: '*'
            Action: s3:PutObject
            Resource: !Sub arn:aws:s3:::${DataBucket}/*
            Condition:
              StringNotEquals:
                s3:x-amz-storage-class: INTELLIGENT_TIERING

  ApiFunctionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Action: sts:AssumeRole
            Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Policies:
        - PolicyName: AccessAWSServices
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - s3:GetObject
                  - s3:ListBucket
                  - s3:PutObject
                  - s3:DeleteObject
                Resource:
                  - !Sub arn:${AWS::Partition}:s3:::${DataBucket}
                  - !Sub arn:${AWS::Partition}:s3:::${DataBucket}/*
              - Effect: Allow
                Action: dsql:DbConnectAdmin
                Resource: !Sub arn:${AWS::Partition}:dsql:${AWS::Region}:${AWS::AccountId}:cluster/${DSQLClusterId}
              - !If
                - IsEmailEnabled
                - Effect: Allow
                  Action: ses:SendRawEmail
                  Resource: '*'
                  Condition:
                    StringEquals:
                      ses:FromAddress: !Ref SMTPFrom
                      ses:FromDisplayName: !Ref SMTPFromName
                - !Ref AWS::NoValue

  ApiFunction:
    Type: AWS::Lambda::Function
    Properties:
      Architectures:
        - arm64
      Code: ./vaultwarden-lambda.zip
      Environment:
        Variables:
          AWS_LWA_PORT: 8000
          AWS_LWA_READINESS_CHECK_PATH: /alive
          AWS_LWA_ASYNC_INIT: true
          AWS_LWA_ENABLE_COMPRESSION: true
          AWS_LWA_INVOKE_MODE: RESPONSE_STREAM
          DATA_FOLDER: !Sub s3://${DataBucket}
          TMP_FOLDER: /tmp
          DATABASE_URL: !Sub dsql://${DSQLClusterId}.dsql.${AWS::Region}.on.aws
          ENABLE_WEBSOCKET: false
          DOMAIN: !If
            - IsDomainAndCertificateSet
            - !Ref Domain
            - !Ref AWS::NoValue
          SIGNUPS_ALLOWED: !Ref SignupsAllowed
          IP_HEADER: X-Forwarded-For
          ICON_SERVICE: !Ref IconService
          ICON_REDIRECT_CODE: 301
          ADMIN_TOKEN: !If
            - IsAdminTokenEmpty
            - !Ref AWS::NoValue
            - !Ref AdminToken
          SMTP_FROM: !If
            - IsEmailEnabled
            - !Ref SMTPFrom
            - !Ref AWS::NoValue
          SMTP_FROM_NAME: !Ref SMTPFromName
          USE_AWS_SES: true
      FunctionName: !Sub ${AWS::StackName}-api
      Handler: bootstrap
      Layers:
        - !Sub arn:aws:lambda:${AWS::Region}:753240598075:layer:LambdaAdapterLayerArm64:24
      MemorySize: 3008 # Maximum value allowed for new accounts; a higher value reduces cold start times and should still fit under free tier usage for personal use
      Role: !GetAtt ApiFunctionRole.Arn
      Runtime: provided.al2023
      Timeout: 300

  ApiFunctionLogs:
    Type: AWS::Logs::LogGroup
    DeletionPolicy: RetainExceptOnCreate
    Properties:
      LogGroupName: !Sub /aws/lambda/${ApiFunction}
      RetentionInDays: !If
        - IsApiLogRetentionNeverExpire
        - !Ref AWS::NoValue
        - !Ref APILogRetention

  ApiFunctionUrl:
    Type: AWS::Lambda::Url
    Properties:
      TargetFunctionArn: !Ref ApiFunction
      AuthType: NONE
      InvokeMode: RESPONSE_STREAM

  ApiFunctionUrlPublicPermissions:
    Type: AWS::Lambda::Permission
    Properties:
      Action: lambda:InvokeFunctionUrl
      FunctionName: !Ref ApiFunction
      Principal: '*'
      FunctionUrlAuthType: NONE

  WebVaultAssetsBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub ${AWS::StackName}-${AWS::AccountId}-${AWS::Region}-web-vault
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true

  WebVaultAssetsBucketEnforceEncryptionInTransitAndStorageTier:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref WebVaultAssetsBucket
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: DenyUnencryptedTransit
            Effect: Deny
            Principal: '*'
            Action: s3:*
            Resource:
              - !Sub arn:${AWS::Partition}:s3:::${WebVaultAssetsBucket}
              - !Sub arn:${AWS::Partition}:s3:::${WebVaultAssetsBucket}/*
            Condition:
              Bool:
                aws:SecureTransport: false
          - Sid: DenyNonIntelligentTieringStorageClass
            Effect: Deny
            Principal: '*'
            Action: s3:PutObject
            Resource: !Sub arn:aws:s3:::${WebVaultAssetsBucket}/*
            Condition:
              StringNotEquals:
                s3:x-amz-storage-class: INTELLIGENT_TIERING

  WebVaultAssetsBucketOriginAccessControl:
    Type: AWS::CloudFront::OriginAccessControl
    Properties:
      OriginAccessControlConfig:
        Name: !Sub ${AWS::StackName}-${AWS::Region}-web-vault-access-control
        OriginAccessControlOriginType: s3
        SigningBehavior: always
        SigningProtocol: sigv4

  # The following mirrors the header values in util.rs
  ResponseHeaderPolicy:
    Type: AWS::CloudFront::ResponseHeadersPolicy
    Properties:
      ResponseHeadersPolicyConfig:
        Name: !Sub ${AWS::StackName}-${AWS::Region}
        CustomHeadersConfig:
          Items:
            - Header: Cache-Control
              Override: false
              Value: no-cache, no-store, max-age=0
            - Header: X-Robots-Tag
              Override: true
              Value: noindex, nofollow
            - Header: Permissions-Policy
              Override: true
              Value: accelerometer=(), ambient-light-sensor=(), autoplay=(), battery=(), camera=(), display-capture=(), document-domain=(), encrypted-media=(), execution-while-not-rendered=(), execution-while-out-of-viewport=(), fullscreen=(), geolocation=(), gyroscope=(), keyboard-map=(), magnetometer=(), microphone=(), midi=(), payment=(), picture-in-picture=(), screen-wake-lock=(), sync-xhr=(), usb=(), web-share=(), xr-spatial-tracking=()
        SecurityHeadersConfig:
          ContentSecurityPolicy:
            ContentSecurityPolicy: !Sub
              - >-
                default-src 'self';
                base-uri 'self';
                form-action 'self';
                object-src 'self' blob:;
                script-src 'self' 'wasm-unsafe-eval';
                style-src 'self' 'unsafe-inline';
                child-src 'self' https://*.duosecurity.com https://*.duofederal.com;
                frame-src 'self' https://*.duosecurity.com https://*.duofederal.com;
                frame-ancestors 'self'
                chrome-extension://nngceckbapebfimnlniiiahkandclblb
                chrome-extension://jbkfoedolllekgbhcbcoahefnbanhhlh
                moz-extension://*;
                img-src 'self' data:
                https://haveibeenpwned.com
                ${IconServiceCSP};
                connect-src 'self'
                https://api.pwnedpasswords.com
                https://api.2fa.directory
                https://app.simplelogin.io/api/
                https://app.addy.io/api/
                https://api.fastmail.com/
                https://api.forwardemail.net
                https://${DataBucket.RegionalDomainName};
              - IconServiceCSP: !If
                  - IconSourceIsPredefined
                  - !FindInMap [IconSource, !Ref IconService, CSP]
                  - !Select
                    - 0
                    - !Split ['{', !Ref IconService]
            Override: true
          ContentTypeOptions:
            Override: true
          FrameOptions:
            FrameOption: SAMEORIGIN
            Override: true
          ReferrerPolicy:
            Override: true
            ReferrerPolicy: same-origin
          StrictTransportSecurity:
            AccessControlMaxAgeSec: 63072000
            IncludeSubdomains: true
            Override: true
            Preload: true
          XSSProtection:
            Override: true
            Protection: false

  # The following mirrors the header values in util.rs
  ConnectorHtmlResponseHeaderPolicy:
    Type: AWS::CloudFront::ResponseHeadersPolicy
    Properties:
      ResponseHeadersPolicyConfig:
        Name: !Sub ${AWS::StackName}-${AWS::Region}-connector-html
        CustomHeadersConfig:
          Items:
            - Header: Cache-Control
              Override: true
              Value: no-cache, no-store, max-age=0
            - Header: X-Robots-Tag
              Override: true
              Value: noindex, nofollow
            - Header: Permissions-Policy
              Override: true
              Value: accelerometer=(), ambient-light-sensor=(), autoplay=(), battery=(), camera=(), display-capture=(), document-domain=(), encrypted-media=(), execution-while-not-rendered=(), execution-while-out-of-viewport=(), fullscreen=(), geolocation=(), gyroscope=(), keyboard-map=(), magnetometer=(), microphone=(), midi=(), payment=(), picture-in-picture=(), screen-wake-lock=(), sync-xhr=(), usb=(), web-share=(), xr-spatial-tracking=()
        SecurityHeadersConfig:
          ContentTypeOptions:
            Override: true
          ReferrerPolicy:
            Override: true
            ReferrerPolicy: same-origin
          StrictTransportSecurity:
            AccessControlMaxAgeSec: 63072000
            IncludeSubdomains: true
            Override: true
            Preload: true
          XSSProtection:
            Override: true
            Protection: false

  CDN:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Aliases: !If
          - IsDomainAndCertificateSet
          - - !Select
              - 2
              - !Split
                - /
                - !Ref Domain
          - !Ref AWS::NoValue
        CacheBehaviors:
          - AllowedMethods:
              - DELETE
              - HEAD
              - GET
              - OPTIONS
              - PATCH
              - POST
              - PUT
            CachePolicyId: 4135ea2d-6df8-44a3-9df3-4b5a84be39ad # CachingDisabled
            Compress: true
            OriginRequestPolicyId: b689b0a8-53d0-40ab-baf2-68738e2966ac # AllViewerExceptHostHeader
            PathPattern: /api/*
            ResponseHeadersPolicyId: !Ref ResponseHeaderPolicy
            TargetOriginId: Api
            ViewerProtocolPolicy: redirect-to-https
          - AllowedMethods:
              - DELETE
              - HEAD
              - GET
              - OPTIONS
              - PATCH
              - POST
              - PUT
            CachePolicyId: 4135ea2d-6df8-44a3-9df3-4b5a84be39ad # CachingDisabled
            Compress: true
            OriginRequestPolicyId: b689b0a8-53d0-40ab-baf2-68738e2966ac # AllViewerExceptHostHeader
            PathPattern: /admin
            ResponseHeadersPolicyId: !Ref ResponseHeaderPolicy
            TargetOriginId: Api
            ViewerProtocolPolicy: redirect-to-https
          - AllowedMethods:
              - DELETE
              - HEAD
              - GET
              - OPTIONS
              - PATCH
              - POST
              - PUT
            CachePolicyId: 4135ea2d-6df8-44a3-9df3-4b5a84be39ad # CachingDisabled
            Compress: true
            OriginRequestPolicyId: b689b0a8-53d0-40ab-baf2-68738e2966ac # AllViewerExceptHostHeader
            PathPattern: /admin/*
            ResponseHeadersPolicyId: !Ref ResponseHeaderPolicy
            TargetOriginId: Api
            ViewerProtocolPolicy: redirect-to-https
          - AllowedMethods:
              - DELETE
              - HEAD
              - GET
              - OPTIONS
              - PATCH
              - POST
              - PUT
            CachePolicyId: 4135ea2d-6df8-44a3-9df3-4b5a84be39ad # CachingDisabled
            Compress: true
            OriginRequestPolicyId: b689b0a8-53d0-40ab-baf2-68738e2966ac # AllViewerExceptHostHeader
            PathPattern: /events/*
            ResponseHeadersPolicyId: !Ref ResponseHeaderPolicy
            TargetOriginId: Api
            ViewerProtocolPolicy: redirect-to-https
          - AllowedMethods:
              - DELETE
              - HEAD
              - GET
              - OPTIONS
              - PATCH
              - POST
              - PUT
            CachePolicyId: 4135ea2d-6df8-44a3-9df3-4b5a84be39ad # CachingDisabled
            Compress: true
            OriginRequestPolicyId: b689b0a8-53d0-40ab-baf2-68738e2966ac # AllViewerExceptHostHeader
            PathPattern: /identity/*
            ResponseHeadersPolicyId: !Ref ResponseHeaderPolicy
            TargetOriginId: Api
            ViewerProtocolPolicy: redirect-to-https
          - CachePolicyId: 4135ea2d-6df8-44a3-9df3-4b5a84be39ad # CachingDisabled
            Compress: true
            OriginRequestPolicyId: b689b0a8-53d0-40ab-baf2-68738e2966ac # AllViewerExceptHostHeader
            PathPattern: /css/*
            ResponseHeadersPolicyId: !Ref ResponseHeaderPolicy
            TargetOriginId: Api
            ViewerProtocolPolicy: redirect-to-https
          - CachePolicyId: 4135ea2d-6df8-44a3-9df3-4b5a84be39ad # CachingDisabled
            Compress: true
            OriginRequestPolicyId: b689b0a8-53d0-40ab-baf2-68738e2966ac # AllViewerExceptHostHeader
            PathPattern: /vw_static/*
            ResponseHeadersPolicyId: !Ref ResponseHeaderPolicy
            TargetOriginId: Api
            ViewerProtocolPolicy: redirect-to-https
          - CachePolicyId: 658327ea-f89d-4fab-a63d-7e88639e58f6 # CachingOptimized
            Compress: true
            OriginRequestPolicyId: b689b0a8-53d0-40ab-baf2-68738e2966ac # AllViewerExceptHostHeader
            PathPattern: /icons/*
            ResponseHeadersPolicyId: !Ref ResponseHeaderPolicy
            TargetOriginId: Api
            ViewerProtocolPolicy: redirect-to-https
          - CachePolicyId: 4135ea2d-6df8-44a3-9df3-4b5a84be39ad # CachingDisabled
            Compress: true
            PathPattern: '*.html'
            ResponseHeadersPolicyId: !Ref ResponseHeaderPolicy
            TargetOriginId: WebVaultAssetsBucket
            ViewerProtocolPolicy: redirect-to-https
          - CachePolicyId: 4135ea2d-6df8-44a3-9df3-4b5a84be39ad # CachingDisabled
            Compress: true
            PathPattern: '*connector.html'
            ResponseHeadersPolicyId: !Ref ConnectorHtmlResponseHeaderPolicy
            TargetOriginId: WebVaultAssetsBucket
            ViewerProtocolPolicy: redirect-to-https
        Comment: Vaultwarden CDN
        CustomErrorResponses:
          - ErrorCode: 403
            ResponseCode: 200
            ResponsePagePath: /404.html
        DefaultCacheBehavior:
          CachePolicyId: 658327ea-f89d-4fab-a63d-7e88639e58f6 # CachingOptimized
          Compress: true
          ResponseHeadersPolicyId: !Ref ResponseHeaderPolicy
          TargetOriginId: WebVaultAssetsBucket
          ViewerProtocolPolicy: redirect-to-https
        DefaultRootObject: index.html
        Enabled: true
        HttpVersion: http2and3
        IPV6Enabled: true
        Origins:
          - Id: WebVaultAssetsBucket
            DomainName: !GetAtt WebVaultAssetsBucket.RegionalDomainName
            OriginAccessControlId: !GetAtt WebVaultAssetsBucketOriginAccessControl.Id
            S3OriginConfig:
              OriginAccessIdentity: ''
          - Id: Api
            CustomOriginConfig:
              OriginProtocolPolicy: https-only
              OriginSSLProtocols:
                - TLSv1.2
            DomainName: !Select
              - 2
              - !Split
                - /
                - !GetAtt ApiFunctionUrl.FunctionUrl
        PriceClass: PriceClass_All
        ViewerCertificate: !If
          - IsDomainAndCertificateSet
          - AcmCertificateArn: !Ref ACMCertificateArn
            MinimumProtocolVersion: TLSv1.2_2021
            SslSupportMethod: sni-only
          - !Ref AWS::NoValue

  WebVaultAssetsBucketPolicyCloudFrontAccess:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref WebVaultAssetsBucket
      PolicyDocument:
        Id: CloudFrontAccess
        Version: '2012-10-17'
        Statement:
          - Principal:
              Service: !Sub cloudfront.${AWS::URLSuffix}
            Action: s3:GetObject
            Effect: Allow
            Resource: !Sub ${WebVaultAssetsBucket.Arn}/*
            Condition:
              StringEquals:
                AWS:SourceArn: !Sub arn:${AWS::Partition}:cloudfront::${AWS::AccountId}:distribution/${CDN.Id}

Outputs:
  WebVaultAssetsBucket:
    Value: !Ref WebVaultAssetsBucket
  CDNDomain:
    Value: !GetAtt CDN.DomainName
@@ -0,0 +1 @@
run_in_transaction = false
@@ -0,0 +1,281 @@
CREATE TABLE attachments (
    id text NOT NULL PRIMARY KEY,
    cipher_uuid character varying(40) NOT NULL,
    file_name text NOT NULL,
    file_size bigint NOT NULL,
    akey text
);

CREATE TABLE auth_requests (
    uuid character(36) NOT NULL PRIMARY KEY,
    user_uuid character(36) NOT NULL,
    organization_uuid character(36),
    request_device_identifier character(36) NOT NULL,
    device_type integer NOT NULL,
    request_ip text NOT NULL,
    response_device_id character(36),
    access_code text NOT NULL,
    public_key text NOT NULL,
    enc_key text,
    master_password_hash text,
    approved boolean,
    creation_date timestamp without time zone NOT NULL,
    response_date timestamp without time zone,
    authentication_date timestamp without time zone
);

CREATE TABLE ciphers (
    uuid character varying(40) NOT NULL PRIMARY KEY,
    created_at timestamp without time zone NOT NULL,
    updated_at timestamp without time zone NOT NULL,
    user_uuid character varying(40),
    organization_uuid character varying(40),
    atype integer NOT NULL,
    name text NOT NULL,
    notes text,
    fields text,
    data text NOT NULL,
    password_history text,
    deleted_at timestamp without time zone,
    reprompt integer,
    key text
);

CREATE TABLE ciphers_collections (
    cipher_uuid character varying(40) NOT NULL,
    collection_uuid character varying(40) NOT NULL,
    PRIMARY KEY (cipher_uuid, collection_uuid)
);

CREATE TABLE collections (
    uuid character varying(40) NOT NULL PRIMARY KEY,
    org_uuid character varying(40) NOT NULL,
    name text NOT NULL,
    external_id text
);

CREATE TABLE collections_groups (
    collections_uuid character varying(40) NOT NULL,
    groups_uuid character(36) NOT NULL,
    read_only boolean NOT NULL,
    hide_passwords boolean NOT NULL,
    PRIMARY KEY (collections_uuid, groups_uuid)
);

CREATE TABLE devices (
    uuid character varying(40) NOT NULL,
    created_at timestamp without time zone NOT NULL,
    updated_at timestamp without time zone NOT NULL,
    user_uuid character varying(40) NOT NULL,
    name text NOT NULL,
    atype integer NOT NULL,
    push_token text,
    refresh_token text NOT NULL,
    twofactor_remember text,
    push_uuid text,
    PRIMARY KEY (uuid, user_uuid)
);

CREATE TABLE emergency_access (
    uuid character(36) NOT NULL PRIMARY KEY,
    grantor_uuid character(36),
    grantee_uuid character(36),
    email character varying(255),
    key_encrypted text,
    atype integer NOT NULL,
    status integer NOT NULL,
    wait_time_days integer NOT NULL,
    recovery_initiated_at timestamp without time zone,
    last_notification_at timestamp without time zone,
    updated_at timestamp without time zone NOT NULL,
    created_at timestamp without time zone NOT NULL
);

CREATE TABLE event (
    uuid character(36) NOT NULL PRIMARY KEY,
    event_type integer NOT NULL,
    user_uuid character(36),
    org_uuid character(36),
    cipher_uuid character(36),
    collection_uuid character(36),
    group_uuid character(36),
    org_user_uuid character(36),
    act_user_uuid character(36),
    device_type integer,
    ip_address text,
    event_date timestamp without time zone NOT NULL,
    policy_uuid character(36),
    provider_uuid character(36),
    provider_user_uuid character(36),
    provider_org_uuid character(36)
);

CREATE TABLE favorites (
    user_uuid character varying(40) NOT NULL,
    cipher_uuid character varying(40) NOT NULL,
    PRIMARY KEY (user_uuid, cipher_uuid)
);

CREATE TABLE folders (
    uuid character varying(40) NOT NULL PRIMARY KEY,
    created_at timestamp without time zone NOT NULL,
    updated_at timestamp without time zone NOT NULL,
    user_uuid character varying(40) NOT NULL,
    name text NOT NULL
);

CREATE TABLE folders_ciphers (
    cipher_uuid character varying(40) NOT NULL,
    folder_uuid character varying(40) NOT NULL,
    PRIMARY KEY (cipher_uuid, folder_uuid)
);

CREATE TABLE groups (
    uuid character(36) NOT NULL PRIMARY KEY,
    organizations_uuid character varying(40) NOT NULL,
    name character varying(100) NOT NULL,
    access_all boolean NOT NULL,
    external_id character varying(300),
    creation_date timestamp without time zone NOT NULL,
    revision_date timestamp without time zone NOT NULL
);

CREATE TABLE groups_users (
    groups_uuid character(36) NOT NULL,
    users_organizations_uuid character varying(36) NOT NULL,
    PRIMARY KEY (groups_uuid, users_organizations_uuid)
);

CREATE TABLE invitations (
    email text NOT NULL PRIMARY KEY
);

CREATE TABLE org_policies (
    uuid character(36) NOT NULL PRIMARY KEY,
    org_uuid character(36) NOT NULL,
    atype integer NOT NULL,
    enabled boolean NOT NULL,
    data text NOT NULL,
    UNIQUE (org_uuid, atype)
);

CREATE TABLE organization_api_key (
    uuid character(36) NOT NULL,
    org_uuid character(36) NOT NULL,
    atype integer NOT NULL,
    api_key character varying(255),
    revision_date timestamp without time zone NOT NULL,
    PRIMARY KEY (uuid, org_uuid)
);

CREATE TABLE organizations (
    uuid character varying(40) NOT NULL PRIMARY KEY,
    name text NOT NULL,
    billing_email text NOT NULL,
    private_key text,
    public_key text
);

CREATE TABLE sends (
    uuid character(36) NOT NULL PRIMARY KEY,
    user_uuid character(36),
    organization_uuid character(36),
    name text NOT NULL,
    notes text,
    atype integer NOT NULL,
    data text NOT NULL,
    akey text NOT NULL,
    password_hash bytea,
    password_salt bytea,
    password_iter integer,
    max_access_count integer,
    access_count integer NOT NULL,
    creation_date timestamp without time zone NOT NULL,
    revision_date timestamp without time zone NOT NULL,
    expiration_date timestamp without time zone,
    deletion_date timestamp without time zone NOT NULL,
    disabled boolean NOT NULL,
    hide_email boolean
);

CREATE TABLE twofactor (
    uuid character varying(40) NOT NULL PRIMARY KEY,
    user_uuid character varying(40) NOT NULL,
    atype integer NOT NULL,
    enabled boolean NOT NULL,
    data text NOT NULL,
    last_used bigint DEFAULT 0 NOT NULL,
    UNIQUE (user_uuid, atype)
);

CREATE TABLE twofactor_duo_ctx (
    state character varying(64) NOT NULL PRIMARY KEY,
    user_email character varying(255) NOT NULL,
    nonce character varying(64) NOT NULL,
    exp bigint NOT NULL
);

CREATE TABLE twofactor_incomplete (
    user_uuid character varying(40) NOT NULL,
    device_uuid character varying(40) NOT NULL,
    device_name text NOT NULL,
    login_time timestamp without time zone NOT NULL,
    ip_address text NOT NULL,
    device_type integer DEFAULT 14 NOT NULL,
    PRIMARY KEY (user_uuid, device_uuid)
);

CREATE TABLE users (
    uuid character varying(40) NOT NULL PRIMARY KEY,
    created_at timestamp without time zone NOT NULL,
    updated_at timestamp without time zone NOT NULL,
    email text NOT NULL UNIQUE,
    name text NOT NULL,
    password_hash bytea NOT NULL,
    salt bytea NOT NULL,
    password_iterations integer NOT NULL,
    password_hint text,
    akey text NOT NULL,
    private_key text,
    public_key text,
    totp_secret text,
    totp_recover text,
    security_stamp text NOT NULL,
    equivalent_domains text NOT NULL,
    excluded_globals text NOT NULL,
    client_kdf_type integer DEFAULT 0 NOT NULL,
    client_kdf_iter integer DEFAULT 100000 NOT NULL,
    verified_at timestamp without time zone,
    last_verifying_at timestamp without time zone,
    login_verify_count integer DEFAULT 0 NOT NULL,
    email_new character varying(255) DEFAULT NULL::character varying,
    email_new_token character varying(16) DEFAULT NULL::character varying,
    enabled boolean DEFAULT true NOT NULL,
    stamp_exception text,
    api_key text,
    avatar_color text,
    client_kdf_memory integer,
    client_kdf_parallelism integer,
    external_id text
);

CREATE TABLE users_collections (
    user_uuid character varying(40) NOT NULL,
    collection_uuid character varying(40) NOT NULL,
    read_only boolean DEFAULT false NOT NULL,
    hide_passwords boolean DEFAULT false NOT NULL,
    PRIMARY KEY (user_uuid, collection_uuid)
);

CREATE TABLE users_organizations (
    uuid character varying(40) NOT NULL PRIMARY KEY,
    user_uuid character varying(40) NOT NULL,
    org_uuid character varying(40) NOT NULL,
    access_all boolean NOT NULL,
    akey text NOT NULL,
    status integer NOT NULL,
    atype integer NOT NULL,
    reset_password_key text,
    external_id text,
    UNIQUE (user_uuid, org_uuid)
);
@@ -0,0 +1 @@
run_in_transaction = false
@@ -0,0 +1,8 @@
-- DSQL preview can't add columns with constraints, dropping `NOT NULL DEFAULT FALSE` constraint
-- It appears Diesel will ensure the column has appropriate values when saving records.

ALTER TABLE users_collections
ADD COLUMN manage BOOLEAN;

ALTER TABLE collections_groups
ADD COLUMN manage BOOLEAN;
@@ -0,0 +1,24 @@
use std::io::{Error, ErrorKind};

// Cache the AWS SDK config, as recommended by the AWS SDK documentation. The
// initial load is async, so we spawn a thread to load it and then join it to
// get the result in a blocking fashion.
static AWS_SDK_CONFIG: std::sync::LazyLock<std::io::Result<aws_config::SdkConfig>> = std::sync::LazyLock::new(|| {
    std::thread::spawn(|| {
        let rt = tokio::runtime::Builder::new_current_thread()
            .enable_all()
            .build()?;

        std::io::Result::Ok(rt.block_on(aws_config::load_defaults(aws_config::BehaviorVersion::latest())))
    })
    .join()
    .map_err(|e| Error::new(ErrorKind::Other, format!("Failed to load AWS config for DSQL connection: {e:#?}")))?
    .map_err(|e| Error::new(ErrorKind::Other, format!("Failed to load AWS config for DSQL connection: {e}")))
});

pub(crate) fn aws_sdk_config() -> std::io::Result<&'static aws_config::SdkConfig> {
    (*AWS_SDK_CONFIG).as_ref().map_err(|e| match e.get_ref() {
        Some(inner) => Error::new(e.kind(), inner),
        None => Error::from(e.kind()),
    })
}
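
As a usage sketch, here is how another module might consume the cached config; the helper name is hypothetical, while `aws_sdk_s3::Client::new` is the SDK's actual constructor (the `s3.rs` backend later in this change does essentially this):

```rust
// Hypothetical call site: build an S3 client from the cached SDK config.
// The first caller pays the async load cost; later callers reuse the
// cached &'static SdkConfig.
fn build_s3_client() -> std::io::Result<aws_sdk_s3::Client> {
    let config = crate::aws::aws_sdk_config()?;
    Ok(aws_sdk_s3::Client::new(config))
}
```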
@@ -0,0 +1,163 @@
use std::sync::RwLock;

use diesel::{
    r2d2::{ManageConnection, R2D2Connection},
    ConnectionError,
};
use url::Url;

#[derive(Debug)]
pub struct ConnectionManager<T> {
    inner: RwLock<diesel::r2d2::ConnectionManager<T>>,
    #[cfg(dsql)]
    dsql_url: Option<String>,
}

impl<T> ConnectionManager<T> {
    /// Returns a new connection manager,
    /// which establishes connections to the given database URL.
    pub fn new<S: Into<String>>(database_url: S) -> Self {
        let database_url = database_url.into();

        Self {
            inner: RwLock::new(diesel::r2d2::ConnectionManager::new(&database_url)),
            #[cfg(dsql)]
            dsql_url: if database_url.starts_with("dsql:") {
                Some(database_url)
            } else {
                None
            },
        }
    }
}

impl<T> ManageConnection for ConnectionManager<T>
where
    T: R2D2Connection + Send + 'static,
{
    type Connection = T;
    type Error = diesel::r2d2::Error;

    fn connect(&self) -> Result<T, Self::Error> {
        #[cfg(dsql)]
        if let Some(dsql_url) = &self.dsql_url {
            let url = psql_url(dsql_url).map_err(Self::Error::ConnectionError)?;
            self.inner.write().expect("Failed to lock inner connection manager to set DSQL connection URL").update_database_url(&url);
        }

        self.inner.read().expect("Failed to lock inner connection manager to connect").connect()
    }

    fn is_valid(&self, conn: &mut T) -> Result<(), Self::Error> {
        self.inner.read().expect("Failed to lock inner connection manager to check validity").is_valid(conn)
    }

    fn has_broken(&self, conn: &mut T) -> bool {
        self.inner.read().expect("Failed to lock inner connection manager to check if has broken").has_broken(conn)
    }
}
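
// Illustrative wiring sketch (assumes diesel's postgres backend feature): hand
// the DSQL-aware manager to diesel's bundled r2d2 pool. Each pool checkout that
// needs a new connection goes through `connect()` above, which re-signs the
// DSQL URL only when the cached auth token is stale. The pool size is an
// arbitrary example value.
#[allow(dead_code)]
fn example_build_pool(database_url: &str) -> diesel::r2d2::Pool<ConnectionManager<diesel::PgConnection>> {
    let manager = ConnectionManager::<diesel::PgConnection>::new(database_url);
    diesel::r2d2::Pool::builder()
        .max_size(10)
        .build(manager)
        .expect("Failed to build database connection pool")
}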

// Generate a Postgres libpq connection string. The input connection string has
// the following format:
//
// dsql://<dsql-id>.dsql.<aws-region>.on.aws
//
// The generated connection string will have the form:
//
// postgresql://<dsql-id>.dsql.<aws-region>.on.aws/postgres?sslmode=require&user=admin&password=<auth-token>
//
// The auth token is a temporary token generated by the AWS SDK for DSQL. It is
// valid for up to 15 minutes. We cache the last-generated token for each unique
// DSQL connection URL, and reuse it if it is less than 14 minutes old.
pub(crate) fn psql_url(url: &str) -> Result<String, ConnectionError> {
    use std::{
        collections::HashMap,
        sync::{Arc, LazyLock, Mutex},
        time::Duration,
    };

    struct PsqlUrl {
        timestamp: std::time::Instant,
        url: String,
    }

    static PSQL_URLS: LazyLock<Mutex<HashMap<String, Arc<Mutex<Option<PsqlUrl>>>>>> = LazyLock::new(|| Mutex::new(HashMap::new()));

    let mut psql_urls = PSQL_URLS.lock().map_err(|e| ConnectionError::BadConnection(format!("Failed to lock PSQL URLs: {e}")))?;

    let psql_url_lock = if let Some(existing_psql_url_lock) = psql_urls.get(url) {
        existing_psql_url_lock.clone()
    } else {
        let psql_url_lock = Arc::new(Mutex::new(None));
        psql_urls.insert(url.to_string(), psql_url_lock.clone());
        psql_url_lock
    };

    let mut psql_url_lock_guard = psql_url_lock.lock().map_err(|e| ConnectionError::BadConnection(format!("Failed to lock PSQL url: {e}")))?;

    drop(psql_urls);

    if let Some(ref psql_url) = *psql_url_lock_guard {
        if psql_url.timestamp.elapsed() < Duration::from_secs(14 * 60) {
            debug!("Reusing DSQL auth token for connection '{url}'");
            return Ok(psql_url.url.clone());
        }

        info!("Refreshing DSQL auth token for connection '{url}'");
    } else {
        info!("Generating new DSQL auth token for connection '{url}'");
    }

    let sdk_config = crate::aws::aws_sdk_config()
        .map_err(|e| ConnectionError::BadConnection(format!("Failed to load AWS SDK config: {e}")))?;

    let mut psql_url = Url::parse(url).map_err(|e| {
        ConnectionError::InvalidConnectionUrl(e.to_string())
    })?;

    let host = psql_url.host_str().ok_or(ConnectionError::InvalidConnectionUrl("Missing hostname in connection URL".to_string()))?.to_string();

    static DSQL_REGION_FROM_HOST_RE: LazyLock<regex::Regex> = LazyLock::new(|| {
        regex::Regex::new(r"^[a-z0-9]+\.dsql\.(?P<region>[a-z0-9-]+)\.on\.aws$").expect("Failed to compile DSQL region regex")
    });

    let region = (*DSQL_REGION_FROM_HOST_RE).captures(&host).ok_or(ConnectionError::InvalidConnectionUrl("Failed to find AWS region in DSQL hostname".to_string()))?
        .name("region")
        .ok_or(ConnectionError::InvalidConnectionUrl("Failed to find AWS region in DSQL hostname".to_string()))?
        .as_str()
        .to_string();

    let region = aws_config::Region::new(region);

    let auth_config = aws_sdk_dsql::auth_token::Config::builder()
        .hostname(host)
        .region(region)
        .build()
        .map_err(|e| ConnectionError::BadConnection(format!("Failed to build AWS auth token signer config: {e}")))?;

    let signer = aws_sdk_dsql::auth_token::AuthTokenGenerator::new(auth_config);

    let now = std::time::Instant::now();

    let auth_token = std::thread::spawn(move || {
        let rt = tokio::runtime::Builder::new_current_thread()
            .enable_all()
            .build()?;

        rt.block_on(signer.db_connect_admin_auth_token(sdk_config))
    })
    .join()
    .map_err(|e| ConnectionError::BadConnection(format!("Failed to generate DSQL auth token: {e:#?}")))?
    .map_err(|e| ConnectionError::BadConnection(format!("Failed to generate DSQL auth token: {e}")))?;

    psql_url.set_scheme("postgresql").expect("Failed to set 'postgresql' as scheme for DSQL connection URL");
    psql_url.set_path("postgres");
    psql_url.query_pairs_mut()
        .append_pair("sslmode", "require")
        .append_pair("user", "admin")
        .append_pair("password", auth_token.as_str());

    psql_url_lock_guard.replace(PsqlUrl { timestamp: now, url: psql_url.to_string() });

    Ok(psql_url.to_string())
}
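
To make the URL rewriting concrete, this self-contained sketch performs the same transformation with a fabricated cluster ID and `dummy-token` standing in for the real 15-minute auth token:

```rust
use url::Url;

fn main() {
    // Input shape accepted by psql_url (the cluster ID is fabricated).
    let mut url = Url::parse("dsql://abc123.dsql.us-east-1.on.aws").unwrap();

    // Mirror the transformation psql_url performs: scheme, database path,
    // and libpq query parameters.
    url.set_scheme("postgresql").unwrap();
    url.set_path("postgres");
    url.query_pairs_mut()
        .append_pair("sslmode", "require")
        .append_pair("user", "admin")
        .append_pair("password", "dummy-token"); // real code inserts the signed token

    assert_eq!(
        url.as_str(),
        "postgresql://abc123.dsql.us-east-1.on.aws/postgres?sslmode=require&user=admin&password=dummy-token"
    );
}
```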
@@ -0,0 +1,141 @@
use std::{io::{Error, ErrorKind}, path::{Path, PathBuf}, time::SystemTime};

use rocket::fs::TempFile;
use tokio::{fs::{File, OpenOptions}, io::{AsyncReadExt, AsyncWriteExt}};

use super::PersistentFSBackend;

pub(crate) struct LocalFSBackend(String);

impl AsRef<Path> for LocalFSBackend {
    fn as_ref(&self) -> &Path {
        self.0.as_ref()
    }
}

impl PersistentFSBackend for LocalFSBackend {
    fn new<P: AsRef<Path>>(path: P) -> std::io::Result<Self> {
        let path = path.as_ref();

        Ok(Self(
            path.to_str()
                .ok_or_else(|| Error::new(ErrorKind::InvalidInput, format!("Data folder path {path:?} is not valid UTF-8")))?
                .to_string(),
        ))
    }

    async fn read(self) -> std::io::Result<Vec<u8>> {
        let mut file = File::open(self).await?;
        let mut buffer = Vec::new();
        file.read_to_end(&mut buffer).await?;
        Ok(buffer)
    }

    async fn write(self, buf: &[u8]) -> std::io::Result<()> {
        let mut file = OpenOptions::new().create(true).truncate(true).write(true).open(self).await?;
        file.write_all(buf).await?;
        Ok(())
    }

    async fn path_exists(self) -> std::io::Result<bool> {
        match tokio::fs::metadata(self).await {
            Ok(_) => Ok(true),
            Err(e) => match e.kind() {
                ErrorKind::NotFound => Ok(false),
                _ => Err(e),
            },
        }
    }

    async fn file_exists(self) -> std::io::Result<bool> {
        match tokio::fs::metadata(self).await {
            Ok(metadata) => Ok(metadata.is_file()),
            Err(e) => match e.kind() {
                ErrorKind::NotFound => Ok(false),
                _ => Err(e),
            },
        }
    }

    async fn path_is_dir(self) -> std::io::Result<bool> {
        match tokio::fs::metadata(self).await {
            Ok(metadata) => Ok(metadata.is_dir()),
            Err(e) => match e.kind() {
                ErrorKind::NotFound => Ok(false),
                _ => Err(e),
            },
        }
    }

    async fn canonicalize(self) -> std::io::Result<PathBuf> {
        tokio::fs::canonicalize(self).await
    }

    async fn create_dir_all(self) -> std::io::Result<()> {
        tokio::fs::create_dir_all(self).await
    }

    async fn persist_temp_file(self, mut temp_file: TempFile<'_>) -> std::io::Result<()> {
        if temp_file.persist_to(&self).await.is_err() {
            temp_file.move_copy_to(self).await?;
        }

        Ok(())
    }

    async fn remove_file(self) -> std::io::Result<()> {
        tokio::fs::remove_file(self).await
    }

    async fn remove_dir_all(self) -> std::io::Result<()> {
        tokio::fs::remove_dir_all(self).await
    }

    async fn last_modified(self) -> std::io::Result<SystemTime> {
        tokio::fs::symlink_metadata(self)
            .await?
            .modified()
    }

    async fn download_url(self, local_host: &str) -> std::io::Result<String> {
        use std::sync::LazyLock;
        use crate::{
            auth::{encode_jwt, generate_file_download_claims, generate_send_claims},
            db::models::{AttachmentId, CipherId, SendId, SendFileId},
            CONFIG
        };

        let LocalFSBackend(path) = self;

        static ATTACHMENTS_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{}/", CONFIG.attachments_folder()));
        static SENDS_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{}/", CONFIG.sends_folder()));

        if path.starts_with(&*ATTACHMENTS_PREFIX) {
            let attachment_parts = path.trim_start_matches(&*ATTACHMENTS_PREFIX).split('/').collect::<Vec<&str>>();

            let [cipher_uuid, attachment_id] = attachment_parts[..] else {
                return Err(Error::new(ErrorKind::InvalidInput, format!("Attachment path {path:?} does not match a known download URL path pattern")));
            };

            let token = encode_jwt(&generate_file_download_claims(CipherId::from(cipher_uuid.to_string()), AttachmentId(attachment_id.to_string())));

            Ok(format!("{}/attachments/{}/{}?token={}", local_host, cipher_uuid, attachment_id, token))
        } else if path.starts_with(&*SENDS_PREFIX) {
            let send_parts = path.trim_start_matches(&*SENDS_PREFIX).split('/').collect::<Vec<&str>>();

            let [send_id, file_id] = send_parts[..] else {
                return Err(Error::new(ErrorKind::InvalidInput, format!("Send path {path:?} does not match a known download URL path pattern")));
            };

            let token = encode_jwt(&generate_send_claims(&SendId::from(send_id.to_string()), &SendFileId::from(file_id.to_string())));

            Ok(format!("{}/api/sends/{}/{}?t={}", local_host, send_id, file_id, token))
        } else {
            Err(Error::new(ErrorKind::InvalidInput, format!("Data folder path {path:?} does not match a known download URL path pattern")))
        }
    }
}
@@ -0,0 +1,316 @@
mod local;

#[cfg(s3)]
mod s3;

use std::{io::{Error, ErrorKind}, path::{Path, PathBuf}, time::SystemTime};

use rocket::fs::TempFile;

enum FSType {
    Local(local::LocalFSBackend),

    #[cfg(s3)]
    S3(s3::S3FSBackend),
}

pub(crate) trait PersistentFSBackend: Sized {
    fn new<P: AsRef<Path>>(path: P) -> std::io::Result<Self>;
    async fn read(self) -> std::io::Result<Vec<u8>>;
    async fn write(self, buf: &[u8]) -> std::io::Result<()>;
    async fn path_exists(self) -> std::io::Result<bool>;
    async fn file_exists(self) -> std::io::Result<bool>;
    async fn path_is_dir(self) -> std::io::Result<bool>;
    async fn canonicalize(self) -> std::io::Result<PathBuf>;
    async fn create_dir_all(self) -> std::io::Result<()>;
    async fn persist_temp_file(self, temp_file: TempFile<'_>) -> std::io::Result<()>;
    async fn remove_file(self) -> std::io::Result<()>;
    async fn remove_dir_all(self) -> std::io::Result<()>;
    async fn last_modified(self) -> std::io::Result<SystemTime>;
    async fn download_url(self, local_host: &str) -> std::io::Result<String>;
}

impl PersistentFSBackend for FSType {
    fn new<P: AsRef<Path>>(path: P) -> std::io::Result<Self> {
        #[cfg(s3)]
        if path.as_ref().starts_with("s3://") {
            return Ok(FSType::S3(s3::S3FSBackend::new(path)?));
        }

        Ok(FSType::Local(local::LocalFSBackend::new(path)?))
    }

    async fn read(self) -> std::io::Result<Vec<u8>> {
        match self {
            FSType::Local(fs) => fs.read().await,
            #[cfg(s3)]
            FSType::S3(fs) => fs.read().await,
        }
    }

    async fn write(self, buf: &[u8]) -> std::io::Result<()> {
        match self {
            FSType::Local(fs) => fs.write(buf).await,
            #[cfg(s3)]
            FSType::S3(fs) => fs.write(buf).await,
        }
    }

    async fn path_exists(self) -> std::io::Result<bool> {
        match self {
            FSType::Local(fs) => fs.path_exists().await,
            #[cfg(s3)]
            FSType::S3(fs) => fs.path_exists().await,
        }
    }

    async fn file_exists(self) -> std::io::Result<bool> {
        match self {
            FSType::Local(fs) => fs.file_exists().await,
            #[cfg(s3)]
            FSType::S3(fs) => fs.file_exists().await,
        }
    }

    async fn path_is_dir(self) -> std::io::Result<bool> {
        match self {
            FSType::Local(fs) => fs.path_is_dir().await,
            #[cfg(s3)]
            FSType::S3(fs) => fs.path_is_dir().await,
        }
    }

    async fn canonicalize(self) -> std::io::Result<PathBuf> {
        match self {
            FSType::Local(fs) => fs.canonicalize().await,
            #[cfg(s3)]
            FSType::S3(fs) => fs.canonicalize().await,
        }
    }

    async fn create_dir_all(self) -> std::io::Result<()> {
        match self {
            FSType::Local(fs) => fs.create_dir_all().await,
            #[cfg(s3)]
            FSType::S3(fs) => fs.create_dir_all().await,
        }
    }

    async fn persist_temp_file(self, temp_file: TempFile<'_>) -> std::io::Result<()> {
        match self {
            FSType::Local(fs) => fs.persist_temp_file(temp_file).await,
            #[cfg(s3)]
            FSType::S3(fs) => fs.persist_temp_file(temp_file).await,
        }
    }

    async fn remove_file(self) -> std::io::Result<()> {
        match self {
            FSType::Local(fs) => fs.remove_file().await,
            #[cfg(s3)]
            FSType::S3(fs) => fs.remove_file().await,
        }
    }

    async fn remove_dir_all(self) -> std::io::Result<()> {
        match self {
            FSType::Local(fs) => fs.remove_dir_all().await,
            #[cfg(s3)]
            FSType::S3(fs) => fs.remove_dir_all().await,
        }
    }

    async fn last_modified(self) -> std::io::Result<SystemTime> {
        match self {
            FSType::Local(fs) => fs.last_modified().await,
            #[cfg(s3)]
            FSType::S3(fs) => fs.last_modified().await,
        }
    }

    async fn download_url(self, local_host: &str) -> std::io::Result<String> {
        match self {
            FSType::Local(fs) => fs.download_url(local_host).await,
            #[cfg(s3)]
            FSType::S3(fs) => fs.download_url(local_host).await,
        }
    }
}
|||
|
|||
/// Reads the contents of a file at the given path.
|
|||
///
|
|||
/// # Arguments
|
|||
///
|
|||
/// * `path` - A reference to the path of the file to read.
|
|||
///
|
|||
/// # Returns
|
|||
///
|
|||
/// * `std::io::Result<Vec<u8>>` - A result containing a vector of bytes with the
|
|||
/// file contents if successful, or an I/O error.
|
|||
pub(crate) async fn read<P: AsRef<Path>>(path: P) -> std::io::Result<Vec<u8>> { |
|||
FSType::new(path)?.read().await |
|||
} |
|||
|
|||
/// Writes data to a file at the given path.
|
|||
///
|
|||
/// If the file does not exist, it will be created. If it does exist, it will be
|
|||
/// overwritten.
|
|||
///
|
|||
/// # Arguments
|
|||
///
|
|||
/// * `path` - A reference to the path of the file to write.
|
|||
/// * `buf` - A byte slice containing the data to write to the file.
|
|||
///
|
|||
/// # Returns
|
|||
///
|
|||
/// * `std::io::Result<()>` - A result indicating success or an I/O error.
|
|||
pub(crate) async fn write<P: AsRef<Path>>(path: P, buf: &[u8]) -> std::io::Result<()> { |
|||
FSType::new(path)?.write(buf).await |
|||
} |
|||
|
|||
/// Checks whether a path exists.
|
|||
///
|
|||
/// This function returns `true` in all cases where the path exists, including
|
|||
/// as a file, directory, or symlink.
|
|||
///
|
|||
/// # Arguments
|
|||
///
|
|||
/// * `path` - A reference to the path to check.
|
|||
///
|
|||
/// # Returns
|
|||
///
|
|||
/// * `std::io::Result<bool>` - A result containing a boolean value indicating
|
|||
/// whether the path exists.
|
|||
pub(crate) async fn path_exists<P: AsRef<Path>>(path: P) -> std::io::Result<bool> { |
|||
FSType::new(path)?.path_exists().await |
|||
} |
|||
|
|||
/// Checks whether a regular file exists at the given path.
|
|||
///
|
|||
/// This function returns `false` if the path is a symlink.
|
|||
///
|
|||
/// # Arguments
|
|||
///
|
|||
/// * `path` - A reference to the path to check.
|
|||
///
|
|||
/// # Returns
|
|||
///
|
|||
/// * `std::io::Result<bool>` - A result containing a boolean value indicating
|
|||
/// whether a regular file exists at the given path.
|
|||
pub(crate) async fn file_exists<P: AsRef<Path>>(path: P) -> std::io::Result<bool> { |
|||
FSType::new(path)?.file_exists().await |
|||
} |
|||
|
|||
/// Checks whether a directory exists at the given path.
|
|||
///
|
|||
/// This function returns `false` if the path is a symlink.
|
|||
///
|
|||
/// # Arguments
|
|||
///
|
|||
/// * `path` - A reference to the path to check.
|
|||
///
|
|||
/// # Returns
|
|||
///
|
|||
/// * `std::io::Result<bool>` - A result containing a boolean value indicating
|
|||
/// whether a directory exists at the given path.
|
|||
pub(crate) async fn path_is_dir<P: AsRef<Path>>(path: P) -> std::io::Result<bool> { |
|||
FSType::new(path)?.path_is_dir().await |
|||
} |
|||
|
|||
/// Canonicalizes the given path.
|
|||
///
|
|||
/// This function resolves the given path to an absolute path, eliminating any
|
|||
/// symbolic links and relative path components.
|
|||
///
|
|||
/// # Arguments
|
|||
///
|
|||
/// * `path` - A reference to the path to canonicalize.
|
|||
///
|
|||
/// # Returns
|
|||
///
|
|||
/// * `std::io::Result<PathBuf>` - A result containing the canonicalized path if successful,
|
|||
/// or an I/O error.
|
|||
pub(crate) async fn canonicalize<P: AsRef<Path>>(path: P) -> std::io::Result<PathBuf> { |
|||
FSType::new(path)?.canonicalize().await |
|||
} |
|||
|
|||
/// Creates a directory and all its parent components as needed.
|
|||
///
|
|||
/// # Arguments
|
|||
///
|
|||
/// * `path` - A reference to the path of the directory to create.
|
|||
///
|
|||
/// # Returns
|
|||
///
|
|||
/// * `std::io::Result<()>` - A result indicating success or an I/O error.
|
|||
pub(crate) async fn create_dir_all<P: AsRef<Path>>(path: P) -> std::io::Result<()> { |
|||
FSType::new(path)?.create_dir_all().await |
|||
} |
|||
|
|||
/// Persists a temporary file to a permanent location.
///
/// # Arguments
///
/// * `temp_file` - The temporary file to persist.
/// * `path` - A reference to the path where the file should be persisted.
///
/// # Returns
///
/// * `std::io::Result<()>` - A result indicating success or an I/O error.
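///
/// # Example
///
/// A minimal sketch of persisting a Rocket upload (the destination path is
/// hypothetical):
///
/// ```ignore
/// async fn store_attachment(file: TempFile<'_>) -> std::io::Result<()> {
///     persist_temp_file(file, "data/attachments/some-uuid/upload.bin").await
/// }
/// ```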
pub(crate) async fn persist_temp_file<P: AsRef<Path>>(temp_file: TempFile<'_>, path: P) -> std::io::Result<()> {
    FSType::new(path)?.persist_temp_file(temp_file).await
}

/// Removes a file at the given path.
///
/// # Arguments
///
/// * `path` - A reference to the path of the file to remove.
///
/// # Returns
///
/// * `std::io::Result<()>` - A result indicating success or an I/O error.
pub(crate) async fn remove_file<P: AsRef<Path>>(path: P) -> std::io::Result<()> {
    FSType::new(path)?.remove_file().await
}

/// Removes a directory and all its contents at the given path.
///
/// # Arguments
///
/// * `path` - A reference to the path of the directory to remove.
///
/// # Returns
///
/// * `std::io::Result<()>` - A result indicating success or an I/O error.
pub(crate) async fn remove_dir_all<P: AsRef<Path>>(path: P) -> std::io::Result<()> {
    FSType::new(path)?.remove_dir_all().await
}

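/// Checks whether the file at the given path is older than the given TTL.
///
/// A `ttl` of `0` disables expiry checking, so the function then always
/// returns `false`.
///
/// # Arguments
///
/// * `path` - A reference to the path of the file to check.
/// * `ttl` - The time-to-live in seconds, compared against the file's last
///   modified timestamp.
///
/// # Returns
///
/// * `Result<bool, Error>` - A result containing a boolean value indicating
///   whether the file is older than the TTL.
///
/// # Example
///
/// A minimal sketch of expiring a cached icon (path and TTL are hypothetical):
///
/// ```ignore
/// if file_is_expired("data/icon_cache/example.png", 2_592_000).await? {
///     remove_file("data/icon_cache/example.png").await?;
/// }
/// ```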
pub(crate) async fn file_is_expired<P: AsRef<Path>>(path: P, ttl: u64) -> Result<bool, Error> {
    let path = path.as_ref();

    let modified = FSType::new(path)?.last_modified().await?;

    let age = SystemTime::now()
        .duration_since(modified)
        .map_err(|e| Error::new(
            ErrorKind::InvalidData,
            format!("Failed to determine file age for {path:?} from last modified timestamp '{modified:#?}': {e:?}"),
        ))?;

    Ok(ttl > 0 && ttl <= age.as_secs())
}

/// Generates a pre-signed URL for downloading attachment and Send files.
///
/// # Arguments
///
/// * `path` - A reference to the path of the file to download.
/// * `local_host` - The host of this API server.
///
/// # Returns
///
/// * `std::io::Result<String>` - A result containing the URL if successful, or an I/O error.
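///
/// # Example
///
/// A minimal sketch (the path and host are hypothetical):
///
/// ```ignore
/// let url = download_url("data/attachments/some-uuid/file.bin", "https://vault.example.com").await?;
/// ```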
pub(crate) async fn download_url<P: AsRef<Path>>(path: P, local_host: &str) -> std::io::Result<String> {
    FSType::new(path)?.download_url(local_host).await
}
@ -0,0 +1,316 @@
use std::{io::{Error, ErrorKind}, path::{Path, PathBuf}, time::SystemTime};

use aws_sdk_s3::{client::Client, primitives::ByteStream, types::StorageClass::IntelligentTiering};
use rocket::{fs::TempFile, http::ContentType};
use tokio::{fs::File, io::AsyncReadExt};
use url::Url;

use crate::aws::aws_sdk_config;

use super::PersistentFSBackend;

pub(crate) struct S3FSBackend {
    path: PathBuf,
    bucket: String,
    key: String,
}

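// The S3 client is built once and memoized in a `LazyLock`; later calls clone
// the cached client, which is cheap because clones share the underlying
// connection pool. Since the `LazyLock` only hands out a reference, a cached
// initialization error has to be rebuilt into a fresh `std::io::Error` on
// every call.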
fn s3_client() -> std::io::Result<Client> {
    static AWS_S3_CLIENT: std::sync::LazyLock<std::io::Result<Client>> = std::sync::LazyLock::new(|| {
        Ok(Client::new(aws_sdk_config()?))
    });

    (*AWS_S3_CLIENT)
        .as_ref()
        .cloned()
        .map_err(|e| match e.get_ref() {
            Some(inner) => Error::new(e.kind(), inner),
            None => Error::from(e.kind()),
        })
}

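// Data folder paths for this backend are S3 URLs: the URL's host component is
// the bucket and its path (minus the leading '/') is the object key. For
// example (hypothetical values), "s3://vaultwarden-data/attachments/abc" maps
// to bucket "vaultwarden-data" and key "attachments/abc".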
impl PersistentFSBackend for S3FSBackend {
    fn new<P: AsRef<Path>>(path: P) -> std::io::Result<Self> {
        let path = path.as_ref();

        let url = Url::parse(path.to_str().ok_or_else(|| Error::new(ErrorKind::InvalidInput, "Invalid path"))?)
            .map_err(|e| Error::new(ErrorKind::InvalidInput, format!("Invalid data folder S3 URL {path:?}: {e}")))?;

        let bucket = url.host_str()
            .ok_or_else(|| Error::new(ErrorKind::InvalidInput, format!("Missing Bucket name in data folder S3 URL {path:?}")))?
            .to_string();

        let key = url.path().trim_start_matches('/').to_string();

        Ok(S3FSBackend {
            path: path.to_path_buf(),
            bucket,
            key,
        })
    }

    async fn read(self) -> std::io::Result<Vec<u8>> {
        let S3FSBackend { path, key, bucket } = self;

        let result = s3_client()?
            .get_object()
            .bucket(bucket)
            .key(key)
            .send()
            .await;

        match result {
            Ok(response) => {
                let mut buffer = Vec::new();
                response.body.into_async_read().read_to_end(&mut buffer).await?;
                Ok(buffer)
            }
            Err(e) => match e.as_service_error() {
                Some(service_error) if service_error.is_no_such_key() => {
                    Err(Error::new(ErrorKind::NotFound, format!("Data folder S3 object {path:?} not found")))
                }
                _ => Err(Error::other(format!("Failed to request data folder S3 object {path:?}: {e:?}"))),
            },
        }
    }

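    // Objects are uploaded with the INTELLIGENT_TIERING storage class, which
    // lets S3 automatically move rarely accessed objects to cheaper access
    // tiers.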
    async fn write(self, buf: &[u8]) -> std::io::Result<()> {
        let S3FSBackend { path, key, bucket } = self;

        let content_type = Path::new(&key)
            .extension()
            .and_then(|ext| ext.to_str())
            .and_then(|ext| ContentType::from_extension(ext))
            .map(|t| t.to_string());

        s3_client()?
            .put_object()
            .bucket(bucket)
            .set_content_type(content_type)
            .key(key)
            .storage_class(IntelligentTiering)
            .body(ByteStream::from(buf.to_vec()))
            .send()
            .await
            .map_err(|e| Error::other(format!("Failed to write to data folder S3 object {path:?}: {e:?}")))?;

        Ok(())
    }

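    // S3 has no directory entries, so a bare path is simply reported as
    // existing; only the object-level check (`file_exists` below) consults
    // the service.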
    async fn path_exists(self) -> std::io::Result<bool> {
        Ok(true)
    }

    async fn file_exists(self) -> std::io::Result<bool> {
        let S3FSBackend { path, key, bucket } = self;

        match s3_client()?
            .head_object()
            .bucket(bucket)
            .key(key)
            .send()
            .await
        {
            Ok(_) => Ok(true),
            Err(e) => match e.as_service_error() {
                Some(service_error) if service_error.is_not_found() => Ok(false),
                _ => Err(Error::other(format!("Failed to request data folder S3 object {path:?}: {e:?}"))),
            },
        }
    }

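    // Directories are purely notional in S3: any prefix can be treated as an
    // existing directory, keys are already in canonical form, and nothing
    // needs to be created before writing an object.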
    async fn path_is_dir(self) -> std::io::Result<bool> {
        Ok(true)
    }

    async fn canonicalize(self) -> std::io::Result<PathBuf> {
        Ok(self.path)
    }

    async fn create_dir_all(self) -> std::io::Result<()> {
        Ok(())
    }

    async fn persist_temp_file(self, temp_file: TempFile<'_>) -> std::io::Result<()> {
        let S3FSBackend { path, key, bucket } = self;

        // We want to stream the TempFile directly to S3 without copying it into
        // another memory buffer. The official AWS SDK makes it easy to stream
        // from a `tokio::fs::File`, but does not have a reasonable way to stream
        // from an `impl AsyncBufRead`.
        //
        // A TempFile's contents may be saved in memory or on disk. We use the
        // SDK to stream the file if we can access it on disk, otherwise we fall
        // back to a second copy in memory.
        let file = match temp_file.path() {
            Some(path) => File::open(path).await.ok(),
            None => None,
        };

        let byte_stream = match file {
            Some(file) => ByteStream::read_from().file(file).build().await.ok(),
            None => None,
        };

        let byte_stream = match byte_stream {
            Some(byte_stream) => byte_stream,
            None => {
                // TODO: Implement a mechanism to stream the file directly to S3
                // without buffering it again in memory. This would require
                // chunking it into a multi-part upload. See example here:
                // https://imfeld.dev/writing/rust_s3_streaming_upload
                let mut read_stream = temp_file.open().await?;
                let mut buf = Vec::with_capacity(temp_file.len() as usize);
                read_stream.read_to_end(&mut buf).await?;
                ByteStream::from(buf)
            }
        };

        let content_type = temp_file
            .content_type()
            .map(|t| t.to_string())
            .or_else(|| {
                temp_file.name()
                    .and_then(|name| Path::new(name).extension())
                    .and_then(|ext| ext.to_str())
                    .and_then(|ext| ContentType::from_extension(ext))
                    .map(|t| t.to_string())
            });

        s3_client()?
            .put_object()
            .bucket(bucket)
            .key(key)
            .storage_class(IntelligentTiering)
            .set_content_type(content_type)
            .body(byte_stream)
            .send()
            .await
            .map_err(|e| Error::other(format!("Failed to write to data folder S3 object {path:?}: {e:?}")))?;

        Ok(())
    }

    async fn remove_file(self) -> std::io::Result<()> {
        let S3FSBackend { path, key, bucket } = self;

        s3_client()?
            .delete_object()
            .bucket(bucket)
            .key(key)
            .send()
            .await
            .map_err(|e| Error::other(format!("Failed to delete data folder S3 object {path:?}: {e:?}")))?;

        Ok(())
    }

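    // S3's DeleteObjects API accepts at most 1,000 keys per request. Since
    // ListObjectsV2 also returns at most 1,000 keys per page, issuing one
    // batch delete per page stays within that limit.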
    async fn remove_dir_all(self) -> std::io::Result<()> {
        use aws_sdk_s3::types::{Delete, ObjectIdentifier};

        let S3FSBackend { path, key: prefix, bucket } = self;

        let s3_client = s3_client()?;

        let mut list_response = s3_client
            .list_objects_v2()
            .bucket(bucket.clone())
            .prefix(format!("{prefix}/"))
            .into_paginator()
            .send();

        while let Some(list_result) = list_response.next().await {
            let list_result = list_result
                .map_err(|e| Error::other(format!("Failed to list data folder S3 objects with prefix {path:?}/ intended for deletion: {e:?}")))?;

            // An absent `contents` field means this page matched no objects,
            // so there is nothing to delete.
            let Some(objects) = list_result.contents else {
                continue;
            };

            let keys = objects.into_iter()
                .map(|object| object.key
                    .ok_or_else(|| Error::other(format!("Failed to list data folder S3 objects with prefix {path:?}/ intended for deletion: An object is missing its key")))
                )
                .collect::<std::io::Result<Vec<_>>>()?;

            let mut delete = Delete::builder().quiet(true);

            for key in keys {
                delete = delete.objects(
                    ObjectIdentifier::builder()
                        .key(key)
                        .build()
                        .map_err(|e| Error::other(format!("Failed to delete data folder S3 objects with prefix {path:?}/: {e:?}")))?
                );
            }

            let delete = delete
                .build()
                .map_err(|e| Error::other(format!("Failed to delete data folder S3 objects with prefix {path:?}/: {e:?}")))?;

            s3_client
                .delete_objects()
                .bucket(bucket.clone())
                .delete(delete)
                .send()
                .await
                .map_err(|e| Error::other(format!("Failed to delete data folder S3 objects with prefix {path:?}/: {e:?}")))?;
        }

        Ok(())
    }

    async fn last_modified(self) -> std::io::Result<SystemTime> {
        let S3FSBackend { path, key, bucket } = self;

        let response = s3_client()?
            .head_object()
            .bucket(bucket)
            .key(key)
            .send()
            .await
            .map_err(|e| match e.as_service_error() {
                Some(service_error) if service_error.is_not_found() => Error::new(ErrorKind::NotFound, format!("Failed to get metadata for data folder S3 object {path:?}: Object does not exist")),
                Some(service_error) => Error::other(format!("Failed to get metadata for data folder S3 object {path:?}: {service_error:?}")),
                None => Error::other(format!("Failed to get metadata for data folder S3 object {path:?}: {e:?}")),
            })?;

        let last_modified = response.last_modified
            .ok_or_else(|| Error::new(ErrorKind::NotFound, format!("Failed to get metadata for data folder S3 object {path:?}: Missing last modified data")))?;

        SystemTime::try_from(last_modified)
            .map_err(|e| Error::new(ErrorKind::InvalidData, format!("Failed to parse last modified date for data folder S3 object {path:?}: {e:?}")))
    }

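    // Pre-signed GetObject URLs let clients download attachments directly from
    // S3 instead of proxying the bytes through the server; the 5-minute expiry
    // below limits how long a shared link remains usable.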
    async fn download_url(self, _local_host: &str) -> std::io::Result<String> {
        use std::time::Duration;
        use aws_sdk_s3::presigning::PresigningConfig;

        let S3FSBackend { path, key, bucket } = self;

        s3_client()?
            .get_object()
            .bucket(bucket)
            .key(key)
            .presigned(
                PresigningConfig::expires_in(Duration::from_secs(5 * 60))
                    .map_err(|e| Error::other(
                        format!("Failed to generate presigned config for GetObject URL for data folder S3 object {path:?}: {e:?}")
                    ))?
            )
            .await
            .map(|presigned| presigned.uri().to_string())
            .map_err(|e| Error::other(format!("Failed to generate presigned URL for GetObject for data folder S3 object {path:?}: {e:?}")))
    }
}