Obtain an OCP OAuth token for an SSO IdP with Kerberos support
Non-interactive CLI login for OpenShift clusters via an OpenID-connected IdP with Kerberos support
This package can be found:
- on Copr as @cki/ocp-sso-token
- on PyPI as ocp-sso-token
The Copr packages are provided for all active Fedora releases, CentOS Stream/EPEL Next 9 and RHEL/EPEL 9.
The code is tested on the supported Python versions 3.8, 3.9, 3.10, 3.11, 3.12 and 3.13.
Quickstart
# from Copr...
sudo dnf copr enable @cki/ocp-sso-token
sudo dnf install ocp-sso-token
# ...or from PyPI
pip install ocp-sso-token
# get a Kerberos token
kinit USER@DOMAIN.COM
# create a kubectl context with a temporary token via SSO
ocp-sso-token https://API.CLUSTER:6443 --context CONTEXT --namespace NAMESPACE
# and try it out!
oc --context CONTEXT get pod
Problem: several manual steps to log into an OpenShift cluster via OIDC without ROPC
To log into an OpenShift cluster on the command line, oc login supports user/password authentication for various identity providers such as LDAP, or OIDC with the ROPC grant flow.
If no provider with password support is configured, the user is referred to the OAuth login page to obtain a temporary token interactively. After selecting the right provider, the user is forwarded to authenticate with the SSO provider, and redirected back to the cluster afterwards. Another click reveals the temporary token that can now be used for the CLI tools.
For an OpenID provider that supports Kerberos tickets, the authentication with the SSO provider happens transparently. For such setups, logging into a cluster via the CLI roughly requires the following steps:
- run oc login and click on the link, or visit a bookmark for the cluster login page
- click on the button for the OpenID provider
- watch the webpages forwarding to each other
- click on the link to reveal the temporary token
- use the shown temporary token/oc login command to log into the cluster
These steps must be performed daily and per cluster (the pages involved are sketched below).
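For orientation, the pages involved live on the cluster's OAuth server, which can be located via OpenShift's standard discovery endpoint. A quick illustration (the exact hostnames depend on the cluster; add -k if the API certificate is not trusted locally):
# discover the cluster's OAuth server (standard, unauthenticated endpoint)
curl -s https://API.CLUSTER:6443/.well-known/oauth-authorization-server
# the interactive token page then typically lives on the returned issuer, e.g.
# https://oauth-openshift.apps.CLUSTER/oauth/token/request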
Approach: automate all the steps above
The Python script in this repository automates all the steps to obtain the temporary token so that the following is possible:
kinit $user@$domain
# either save the token directly in the specified context in ~/.kube/config...
ocp-sso-token $server --context $context --namespace $namespace
oc --context $context get pod
# ...or use the token with oc login
oc login --server $server --token $(ocp-sso-token $server)
oc get pod
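Under the hood this is the same browser flow, just driven by an HTTP client that answers the IdP's Negotiate challenge with the Kerberos ticket instead of asking for an interactive login. The SPNEGO part can be illustrated with a Negotiate-capable curl build (a rough sketch only; the script additionally handles identity provider selection and extracts the token from the returned pages):
# a valid Kerberos ticket is required; check with
klist
# --negotiate -u : lets curl answer the IdP's SPNEGO challenge with that ticket
curl -sL --negotiate -u : https://oauth-openshift.apps.CLUSTER/oauth/token/request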
Installing the script
# from Copr
sudo dnf copr enable @cki/ocp-sso-token
sudo dnf install ocp-sso-token
# from PyPI
pip install ocp-sso-token
# from source
pip install --user git+https://gitlab.com/cki-project/ocp-sso-token
Using the script to log into an OpenShift cluster via OIDC
usage: ocp-sso-token [-h] [--identity-providers IDENTITY_PROVIDERS]
[--context CONTEXT] [--namespace NAMESPACE]
api_url
Obtain an OCP OAuth token for a Kerberos ticket
positional arguments:
api_url Cluster API URL like https://api.cluster:6443
optional arguments:
-h, --help show this help message and exit
--identity-providers IDENTITY_PROVIDERS
Identity provider names (default: SSO,OpenID)
--context CONTEXT Instead of printing the token, store it in the given context (default:
None)
--namespace NAMESPACE
Namespace to use for --context (default: None)
If your identity provider name is not included in the defaults shown above, add it via --identity-providers. The first matching identity provider will be used.
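For example, if the cluster's login page labels the provider differently (the name MyCompanySSO below is a placeholder for whatever your login page shows):
# use a single custom provider name
ocp-sso-token https://API.CLUSTER:6443 --identity-providers MyCompanySSO
# or list several candidates; the first one configured on the cluster is used
ocp-sso-token https://API.CLUSTER:6443 --identity-providers MyCompanySSO,SSO,OpenID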
With --context, the token is directly added to the Kubernetes configuration file (~/.kube/config or the file referenced by KUBECONFIG). Otherwise, the token is printed to the console.
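The result can be checked with standard oc commands (CONTEXT is the name passed via --context):
# confirm the context points at the expected cluster and namespace
oc config get-contexts CONTEXT
# verify that the stored token works
oc --context CONTEXT whoami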
Running a smoke test
kinit user@DOMAIN.COM
server=https://api.cluster:6443; oc --server $server --token $(ocp-sso-token $server) get project
Logging into clusters with custom context names
With --context and --namespace, the obtained tokens are directly configured in the specified contexts:
$ kinit user@DOMAIN.COM
$ ocp-sso-token https://api.cluster1:6443 --context cluster1 --namespace project1
$ ocp-sso-token https://api.cluster2:6443 --context cluster2 --namespace project2
$ oc config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
cluster1 api-cluster1:6443 api-cluster1:6443 project1
cluster2 api-cluster2:6443 api-cluster2:6443 project2
$ oc --context cluster1 get pod
$ oc --context cluster2 get pod
Logging into clusters via oc login
Without --context, the obtained tokens are printed to the console and can be used with oc login. Logging into clusters this way creates context names automatically. This is most useful with a single cluster, but works with multiple clusters as well:
$ kinit user@DOMAIN.COM
$ server=https://api.cluster1:6443; oc login --server $server --token $(ocp-sso-token $server)
$ server=https://api.cluster2:6443; oc login --server $server --token $(ocp-sso-token $server)
$ oc config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
default/api-cluster1:6443/user@domain.com api-cluster1:6443 user@domain.com/api-cluster1:6443 default
* default/api-cluster2:6443/user@domain.com api-cluster2:6443 user@domain.com/api-cluster2:6443 default
$ oc --namespace project2 get pod
$ oc --context default/api-cluster1:6443/user@domain.com --namespace project1 get pod
$ oc --context default/api-cluster2:6443/user@domain.com --namespace project2 get pod
ssl.SSLCertVerificationError
If you get an error similar to the following, the most likely cause is that the requests module does not use the intranet certificates because the certifi module ships its own certificate store:
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)
On Fedora/Debian-based distributions, this can be solved by using the certifi
module from the distribution package repositories, e.g. via
pip uninstall certifi # if certifi was already installed via pip
dnf install python3-certifi # Fedora-based distributions
apt install python3-certifi # Debian-based distributions
It is also possible to force the use of a specific certificate bundle via something like
export REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt
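To check which certificate bundle the Python requests stack actually picks up (the printed paths depend on your installation):
# bundle selected by certifi
python3 -c 'import certifi; print(certifi.where())'
# bundle that requests uses by default
python3 -c 'import requests; print(requests.certs.where())'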
Creating a development setup and running the tests
Installing development dependencies:
pip install -e .[dev]
Running linting/tests:
tox
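To run only part of the test matrix (the environment names below are common tox defaults and may differ in this project's configuration):
# list the configured environments
tox -l
# run a single environment, e.g. one Python version
tox -e py311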
Creating a release
- Create a release MR with an update of the version number in ocp_sso_token/__init__.py, e.g. to '3.1.4'
- Create an annotated tag with the same version prefixed with v and enter the release notes as the tag message, e.g. git tag v3.1.4 -a
  For the release notes, list the important changes and include the merge requests that introduced them, e.g.
  - Run tests in parallel on multiple Python versions (!8)
- Push the tag to GitLab, e.g. git push origin v3.1.4
- Wait for the tag pipeline to finish
- Check the resulting GitLab and PyPI releases
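Put together, a release for an illustrative version 3.1.4 looks roughly like this (the version bump itself goes through the release MR described above):
# after the release MR with the version bump has been merged
git tag v3.1.4 -a        # enter the release notes as the tag message
git push origin v3.1.4
# then wait for the tag pipeline and check the GitLab and PyPI releases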