NIMF Milestone 2
Overview
This document discusses authentication and authorization between Metal and Fabric, focused on the customer's experience. We want to deliver a seamless user experience that allows users to set up connections directly from Metal to any of the Cloud Service Providers (CSPs) they use.
Authentication
Metal
There are a number of ways to authenticate to Metal, but ultimately it comes down to how the customer wishes to access their resources. The two main methods are as a user signed in to the web portal and directly against the API.
Portal access uses an OAuth flow that lets the browser obtain a JWT, which is then used to authenticate against the Metal APIs. It's important to understand that the Portal doesn't make calls as itself on behalf of the user; the user makes the calls themselves by way of their browser.
Direct API access is done through static API keys issued to either a user or a project. Integrations are also provided through tooling and language-specific client libraries.
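
As a rough illustration of the two access modes, the sketch below sends the same request authenticated either with a browser-issued JWT (Bearer header) or with a static API key. The header names, endpoint, and helper are assumptions for illustration, not a definitive description of the Metal API.

package main

import (
	"fmt"
	"net/http"
)

// listProjects illustrates the two authentication modes described above.
// If jwt is non-empty we send it as a Bearer token (as the Portal's
// browser session would); otherwise we fall back to a static API key.
// The header names and URL are assumptions for illustration.
func listProjects(jwt, apiKey string) (*http.Response, error) {
	req, err := http.NewRequest("GET", "https://api.equinix.com/metal/v1/projects", nil)
	if err != nil {
		return nil, err
	}
	if jwt != "" {
		req.Header.Set("Authorization", "Bearer "+jwt)
	} else {
		req.Header.Set("X-Auth-Token", apiKey)
	}
	return http.DefaultClient.Do(req)
}

func main() {
	resp, err := listProjects("", "example-api-key")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
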
Fabric
Authorization
Metal
Fabric
Option 4 - Asynchronous Events
Highlights:
- Fabric no longer makes direct calls to Metal; it only announces that the connection is ready
- Messages are authenticated with JWT
- Metal consumes the events and modifies the state of resources as a controller (see the sketch after this list)
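
A minimal sketch of the Metal side of Option 4, assuming a generic message delivery and a hypothetical connection-ready event shape. The field names, the verifyJWT helper, and the reconcile step are illustrative assumptions, not the agreed design.

package main

import (
	"encoding/json"
	"errors"
	"log"
)

// ConnectionEvent is a hypothetical shape for the event Fabric emits
// when a connection becomes ready. Field names are assumptions.
type ConnectionEvent struct {
	ConnectionID string `json:"connection_id"`
	State        string `json:"state"` // e.g. "ready"
	JWT          string `json:"jwt"`   // token authenticating the message
}

// verifyJWT is a placeholder for whatever signature/claims check Metal
// and Fabric agree on (issuer, audience, expiry, etc.).
func verifyJWT(token string) error {
	if token == "" {
		return errors.New("missing token")
	}
	return nil // real verification would go here
}

// handleEvent acts as a controller: it verifies the message, then
// reconciles the Metal resource toward the announced state.
func handleEvent(raw []byte) error {
	var ev ConnectionEvent
	if err := json.Unmarshal(raw, &ev); err != nil {
		return err
	}
	if err := verifyJWT(ev.JWT); err != nil {
		return err
	}
	log.Printf("reconciling connection %s to state %q", ev.ConnectionID, ev.State)
	// Here Metal would update the interconnection's stored state and
	// kick off any follow-up provisioning work.
	return nil
}

func main() {
	msg := []byte(`{"connection_id":"conn-123","state":"ready","jwt":"example"}`)
	if err := handleEvent(msg); err != nil {
		log.Fatal(err)
	}
}
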
Option 5 - Callback/Webhook
Highlights:
- Similar to Option 4, though the infrastructure is provided by Metal
- Fabric instead emits a similarly shaped event saying that a connection's state has changed
- It's Metal's responsibility to consume that event and respond accordingly
Changes Required
- Fabric sends updates to a webhook URL provided by Metal
- Metal consumes messages on that URL and handles them accordingly
- Metal provides a way to see current and desired state (a minimal handler is sketched below)
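
A minimal sketch of the Metal-hosted webhook in Option 5, assuming a JSON body similar to the Option 4 event. The path, payload fields, and handler behavior are illustrative assumptions.

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// stateChange is a hypothetical payload Fabric would POST when a
// connection's state changes. Field names are assumptions.
type stateChange struct {
	ConnectionID string `json:"connection_id"`
	State        string `json:"state"`
}

func webhookHandler(w http.ResponseWriter, r *http.Request) {
	// In practice the request would also carry a JWT or signature
	// that Metal verifies before trusting the payload.
	var sc stateChange
	if err := json.NewDecoder(r.Body).Decode(&sc); err != nil {
		http.Error(w, "bad payload", http.StatusBadRequest)
		return
	}
	log.Printf("connection %s reported state %q", sc.ConnectionID, sc.State)
	// Metal records the reported (current) state so it can be compared
	// against the desired state it already tracks.
	w.WriteHeader(http.StatusAccepted)
}

func main() {
	// Hypothetical webhook path that Fabric would be configured to call.
	http.HandleFunc("/fabric/events", webhookHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
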
Advantages
Disadvantages
Documents
Equinix Interconnections
Metal introduced interconnections early on to give customers access to the network capabilities offered by Fabric and Network Edge.
There are currently two basic types of interconnections: dedicated and shared. The dedicated version, as the name suggests, uses dedicated port infrastructure that the customer owns. This is often cost prohibitive, so interconnections over Equinix-owned shared infrastructure fill that gap.
Dedicated interconnections have much simpler logic in the API than shared interconnections. A dedicated interconnection gives you a layer 2 connection and nothing more; the rest is on the customer to manage.
Shared interconnections connect Metal to other networks at either layer 2 or layer 3.
Layer 2 interconnections are created using either the VlanFabricVCCreateInput or the SharedPortVCVlanCreateInput. The former provisions the interconnection using service tokens, which Metal uses to poll the status of the interconnection. Service tokens allowed us to provide customers with connectivity, but the experience was poor: if you look at the connection in Fabric, it's not clear how it relates to Metal resources.
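
As a rough sketch of how a layer 2, service-token-based interconnection request might look, the example below POSTs a VlanFabricVCCreateInput-shaped body to the project interconnections endpoint. The exact field names, values, and endpoint are assumptions for illustration and should be checked against the Metal API reference.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Illustrative VlanFabricVCCreateInput-style body; field names and
	// values are assumptions, not a definitive schema.
	body, _ := json.Marshal(map[string]any{
		"name":               "example-l2-connection",
		"type":               "shared",
		"metro":              "da",
		"speed":              "10Gbps",
		"redundancy":         "primary",
		"service_token_type": "a_side",
		"vlans":              []int{1001},
	})

	// Hypothetical project ID; endpoint assumed from the Metal API layout.
	url := "https://api.equinix.com/metal/v1/projects/PROJECT_ID/connections"
	req, err := http.NewRequest("POST", url, bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("X-Auth-Token", "example-api-key")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
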
The SharedPortVCVlanCreateInput gives Fabric access to the related network resources on the Metal side, which makes managing those network resources from Fabric a little easier. This type of interconnection laid some groundwork to bring the physical and logical networks of Metal and Fabric closer together; that work is mostly invisible to the customer, but it enables us to build products on our network infrastructure that weren't previously possible.
Currently, both methods of creating these interconnections exist, until we can deprecate the VlanFabricVCCreateInput. The SharedPortVCVlanCreateInput type is so far only capable of layer 2 interconnections to Amazon Web Services, but this new input type allows Fabric to start supporting more layer 2 connectivity without requiring any work on the Metal side. Once we reach parity with the connection destinations of VlanFabricVCCreateInput, we can deprecate it.
Layer 3 interconnections are created by passing the VrfFabricVCCreateInput to the interconnections endpoint. These isolate customer traffic by routing table instead of through VLAN tags.
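
A similarly hedged sketch for the layer 3 case: the body below is a VrfFabricVCCreateInput-shaped request that references an existing VRF instead of VLANs. Field names, the VRF ID, and the endpoint are assumptions for illustration.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative VrfFabricVCCreateInput-style body; unlike the layer 2
	// inputs it references an existing VRF rather than VLANs. Field names
	// and values are assumptions, not a definitive schema.
	body, _ := json.MarshalIndent(map[string]any{
		"name":       "example-l3-connection",
		"type":       "shared",
		"metro":      "da",
		"speed":      "10Gbps",
		"redundancy": "primary",
		"vrfs":       []string{"vrf-uuid-goes-here"},
	}, "", "  ")
	fmt.Println(string(body))
	// The body would be POSTed to the same project interconnections
	// endpoint shown in the layer 2 example above.
}
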