Copyright (c) 2010-2012 OpenStack, LLC
An Auth Service for Swift as WSGI Middleware that uses Swift itself as a backing store. Docs at: https://swauth.readthedocs.io/ or ask in #openstack-swauth on freenode IRC (archive: http://eavesdrop.openstack.org/irclogs/%23openstack-swauth/).
Source available at: https://github.com/openstack/swauth
See also https://github.com/openstack/keystone for the standard OpenStack auth service.
Before discussing how to install Swauth within a Swift system, it may help to first understand how Swauth works.
Install Swauth with sudo python setup.py install or sudo python setup.py develop or via whatever packaging system you may be using.
Alter your proxy-server.conf pipeline to have swauth instead of tempauth:
Was:
[pipeline:main]
pipeline = catch_errors cache tempauth proxy-server
Change To:
[pipeline:main]
pipeline = catch_errors cache swauth proxy-server
Add to your proxy-server.conf the section for the Swauth WSGI filter:
[filter:swauth]
use = egg:swauth#swauth
set log_name = swauth
super_admin_key = swauthkey
default_swift_cluster = <your setting as discussed below>
The default_swift_cluster setting can be confusing.
- If you’re using an all-in-one type configuration where everything will be run on the local host on port 8080, you can omit the default_swift_cluster completely and it will default to local#http://127.0.0.1:8080/v1.
- If you’re using a single Swift proxy, you can just set default_swift_cluster = cluster_name#https://<public_ip>:<port>/v1 and that URL will be given to users as well as used by Swauth internally. (Quick note: be sure http vs. https is set correctly depending on whether you’re using SSL.)
- If you’re using multiple Swift proxies behind a load balancer, you’ll probably want default_swift_cluster = cluster_name#https://<load_balancer_ip>:<port>/v1#http://127.0.0.1:<port>/v1 so that Swauth gives out the first URL but uses the second URL internally. Remember to double-check the http vs. https settings for each of the URLs; they might be different if you’re terminating SSL at the load balancer.
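For example, the load-balancer case above might look like the following in proxy-server.conf (the cluster name, hostnames, and ports here are placeholders; substitute your own):

```
[filter:swauth]
use = egg:swauth#swauth
super_admin_key = swauthkey
# First URL is handed out to users; second is what Swauth uses internally.
# SSL is terminated at the load balancer, so the internal URL is plain http.
default_swift_cluster = prod#https://lb.example.com:443/v1#http://127.0.0.1:8080/v1
```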
Also see the proxy-server.conf-sample for more config options, such as the ability to have a remote Swauth in a multiple Swift cluster configuration.
Be sure your Swift proxy allows account management in the proxy-server.conf:
[app:proxy-server]
...
allow_account_management = true
For greater security, you can leave this off any public proxies and just have one or two private proxies with it turned on.
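As a sketch of that split, the private proxy's proxy-server.conf would enable the setting while the public proxies simply leave it out (it defaults to false):

```
# proxy-server.conf on the private (admin-only) proxy:
[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true

# proxy-server.conf on public proxies -- omit the setting entirely:
# [app:proxy-server]
# use = egg:swift#proxy
```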
Restart your proxy server: swift-init proxy reload
Initialize the Swauth backing store in Swift: swauth-prep -K swauthkey
Add an account/user: swauth-add-user -A http[s]://<host>:<port>/auth/ -K swauthkey -a test tester testing
Ensure it works: swift -A http[s]://<host>:<port>/auth/v1.0 -U test:tester -K testing stat -v
If anything goes wrong, it’s best to start by checking the proxy server logs. The client command line utilities often don’t receive enough information to be helpful. I will often just tail -F the appropriate proxy log (/var/log/syslog or however you have it configured) and then run the Swauth command to see exactly what requests are happening and determine where things fail.
As a general note, I find I occasionally just forget to reload the proxies after a config change, so that’s the first thing you might try. Or, if you suspect the proxies aren’t reloading properly, you might try swift-init proxy stop, ensure all the processes have died, then swift-init proxy start.
Also, it’s quite common to get the /auth/v1.0 vs. just /auth/ URL paths confused. Usual rule is: Swauth tools use just /auth/ and Swift tools use /auth/v1.0.
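A quick way to keep the two paths straight is to derive both URLs from the same base (the host and port here are placeholders for your own auth endpoint):

```shell
#!/bin/sh
# Hypothetical auth endpoint; substitute your proxy's host and port.
AUTH_HOST=127.0.0.1
AUTH_PORT=8080

# Swauth tools (swauth-prep, swauth-add-user, etc.) take just /auth/ ...
SWAUTH_URL="http://${AUTH_HOST}:${AUTH_PORT}/auth/"
# ... while Swift tools (the swift CLI) take /auth/v1.0.
SWIFT_AUTH_URL="http://${AUTH_HOST}:${AUTH_PORT}/auth/v1.0"

echo "$SWAUTH_URL"      # http://127.0.0.1:8080/auth/
echo "$SWIFT_AUTH_URL"  # http://127.0.0.1:8080/auth/v1.0

# Example usage against a running cluster (not run here):
# swauth-add-user -A "$SWAUTH_URL" -K swauthkey -a test tester testing
# swift -A "$SWIFT_AUTH_URL" -U test:tester -K testing stat -v
```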
Swift3 middleware support has to be explicitly turned on in the conf file using the s3_support config option. It can easily be used with swauth when auth_type in swauth is configured to be Plaintext (the default):
[pipeline:main]
pipeline = catch_errors cache swift3 swauth proxy-server
[filter:swauth]
use = egg:swauth#swauth
super_admin_key = swauthkey
s3_support = on
The AWS S3 client uses the password in plaintext to compute its HMAC signature. When auth_type in swauth is configured to be Sha1 or Sha512, swauth can only use the stored hashed password to compute the HMAC signature. This results in a signature mismatch even though the user credentials are correct.
When auth_type is not Plaintext, the only way for S3 clients to authenticate is by giving the SHA1/SHA512 hash of the password as input to their HMAC function. In this case, the S3 clients will have to know auth_type and auth_type_salt beforehand. Here is a sample configuration:
[pipeline:main]
pipeline = catch_errors cache swift3 swauth proxy-server
[filter:swauth]
use = egg:swauth#swauth
super_admin_key = swauthkey
s3_support = on
auth_type = Sha512
auth_type_salt = mysalt
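As a sketch of what such a client would feed its HMAC function under the configuration above, here is one way to derive the salted digest. This assumes the digest is SHA-512 over the salt concatenated with the password; the exact stored format is defined in swauth's authtypes module, so verify against your version:

```shell
#!/bin/sh
# Illustrative only: derive the salted SHA-512 digest that, per the text,
# an S3 client would use in place of the plaintext password.
SALT=mysalt         # must match auth_type_salt in proxy-server.conf
PASSWORD=testing    # the user's actual password

# Assumption: digest = SHA-512(salt + password), hex-encoded.
DIGEST=$(printf '%s%s' "$SALT" "$PASSWORD" | sha512sum | awk '{print $1}')
echo "$DIGEST"
```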
Security Concern: Swauth stores user information (username, password hash, salt, etc.) as objects in the Swift cluster. If these backend objects, which contain password hashes, are stolen, an intruder will be able to authenticate using the hash directly when the S3 API is used.